An Open Source Machine Learning Framework for Everyone

TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications.

TensorFlow was originally developed by researchers and engineers working on the Google Brain team within Google's Machine Intelligence Research organization to conduct machine learning and deep neural networks research. The system is general enough to be applicable in a wide variety of other domains, as well.

TensorFlow provides stable Python and C++ APIs, as well as non-guaranteed backward-compatible APIs for other languages.

Keep up-to-date with release announcements and security updates by subscribing to [email protected]. See all the mailing lists.

Install

See the TensorFlow install guide for the pip package, for enabling GPU support, and for instructions on using a Docker container or building from source.

To install the current release, which includes support for CUDA-enabled GPU cards (Ubuntu and Windows):

$ pip install tensorflow

A smaller CPU-only package is also available:

$ pip install tensorflow-cpu

To update TensorFlow to the latest version, add the --upgrade flag to the above commands.

Nightly binaries are available for testing using the tf-nightly and tf-nightly-cpu packages on PyPI.
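
For example, the nightly build can be installed the same way as the commands above (CPU-only users would substitute tf-nightly-cpu):

$ pip install tf-nightly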

Try your first TensorFlow program

$ python
>>> import tensorflow as tf
>>> tf.add(1, 2).numpy()
3
>>> hello = tf.constant('Hello, TensorFlow!')
>>> hello.numpy()
b'Hello, TensorFlow!'
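
Beyond this minimal session, a small end-to-end sketch (a toy example on random data, not taken from any official guide) shows the typical Keras workflow of defining, compiling, and training a model:

import numpy as np
import tensorflow as tf

# Hypothetical toy data: 256 samples with 8 features and a scalar target.
x = np.random.rand(256, 8).astype("float32")
y = np.random.rand(256, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(x[:1]))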

For more examples, see the TensorFlow tutorials.

Contribution guidelines

If you want to contribute to TensorFlow, be sure to review the contribution guidelines. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.

We use GitHub issues for tracking requests and bugs; please see TensorFlow Discuss for general questions and discussion, and direct specific questions to Stack Overflow.

The TensorFlow project strives to abide by generally accepted best practices in open-source software development:

Fuzzing Status, CII Best Practices, Contributor Covenant

Continuous build status

Official Builds

Build Type Status Artifacts
Linux CPU Status PyPI
Linux GPU Status PyPI
Linux XLA Status TBA
macOS Status PyPI
Windows CPU Status PyPI
Windows GPU Status PyPI
Android Status Download
Raspberry Pi 0 and 1 Status Py3
Raspberry Pi 2 and 3 Status Py3
Libtensorflow MacOS CPU Status Nightly GCS Official GCS
Libtensorflow Linux CPU Status Nightly GCS Official GCS
Libtensorflow Linux GPU Status Nightly GCS Official GCS
Libtensorflow Windows CPU Status Nightly GCS Official GCS
Libtensorflow Windows GPU Status Nightly GCS Official GCS

Community Supported Builds

Build Type Status Artifacts
Linux AMD ROCm GPU Nightly Build Status Nightly
Linux AMD ROCm GPU Stable Release Build Status Release 1.15 / 2.x
Linux s390x Nightly Build Status Nightly
Linux s390x CPU Stable Release Build Status Release
Linux ppc64le CPU Nightly Build Status Nightly
Linux ppc64le CPU Stable Release Build Status Release 1.15 / 2.x
Linux ppc64le GPU Nightly Build Status Nightly
Linux ppc64le GPU Stable Release Build Status Release 1.15 / 2.x
Linux aarch64 CPU Nightly (Linaro) Build Status Nightly
Linux aarch64 CPU Stable Release (Linaro) Build Status Release 1.x & 2.x
Linux aarch64 CPU Nightly (OpenLab) Python 3.6 Build Status Nightly
Linux aarch64 CPU Stable Release (OpenLab) Build Status Release 1.15 / 2.x
Linux CPU with Intel oneAPI Deep Neural Network Library (oneDNN) Nightly Build Status Nightly
Linux CPU with Intel oneAPI Deep Neural Network Library (oneDNN) Stable Release Build Status Release 1.15 / 2.x
Red Hat® Enterprise Linux® 7.6 CPU & GPU Python 2.7, 3.6 Build Status 1.13.1 PyPI

Community Supported Containers

Container Type Status Artifacts
TensorFlow aarch64 Neoverse-N1 CPU Stable (Linaro) Debian Static Release 2.3

Resources

Learn more about the TensorFlow community and how to contribute.

License

Apache License 2.0

Comments
  • plot_model() got an unexpected keyword argument 'show_layer_activations'

    Issue Type

    Bug

    Have you reproduced the bug with TF nightly?

    No

    Source

    source

    Tensorflow Version

    2.6.4

    Custom Code

    No

    OS Platform and Distribution

    Linux

    Mobile device

    No response

    Python version

    3.7.12

    Bazel version

    No response

    GCC/Compiler version

    No response

    CUDA/cuDNN version

    No response

    GPU model and memory

    No response

    Current Behaviour?

    plot_model is not working as described in the documentation, in a Kaggle Jupyter notebook.
    

    Standalone code to reproduce the issue

    import tensorflow as tf   # needed for tf.keras.metrics and tf.keras.utils below
    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense,Dropout,Input
    from tensorflow.keras.callbacks import EarlyStopping
    
    # X_train is not shown in the report; hypothetical stand-in so the snippet runs
    X_train = np.random.rand(100, 8)
    
    n=3
    e=EarlyStopping(patience=9,restore_best_weights=True,verbose=1)
    
    model=Sequential()
    
    # Input layer
    model.add(Input(shape=(X_train.shape[1],)))
    
    # Hidden layers
    for counter in range(1,n+1):
        model.add(Dense(n*X_train.shape[1],activation='relu'))
    #     if(counter%4==0):
    #         model.add(Dropout(0.75))
    
    # Output layer
    model.add(Dense(1))
    
    model.compile(loss='mean_squared_error',
                  optimizer='adam',
                  metrics = ['mean_absolute_error',tf.keras.metrics.RootMeanSquaredError()])
    
    model.summary()
    
    # from tensorflow.keras.utils import plot_model
    tf.keras.utils.plot_model(model, to_file='model.png',show_shapes=True,show_dtype=True,show_layer_activations=True,show_layer_names=True,rankdir='LR')
    

    Relevant log output

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    /tmp/ipykernel_23/976071416.py in <module>
          1 # from tensorflow.keras.utils import plot_model
    ----> 2 tf.keras.utils.plot_model(model, to_file='model.png',show_shapes=True,show_dtype=True,show_layer_activations=True,show_layer_names=True,rankdir='LR')
    
    TypeError: plot_model() got an unexpected keyword argument 'show_layer_activations'
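
    A likely cause (an assumption, not confirmed in this thread) is that the
    show_layer_activations argument was only added to tf.keras.utils.plot_model in a
    release newer than 2.6.4. A hedged workaround sketch is to drop that keyword on
    2.6.x, or to upgrade TensorFlow first:

    tf.keras.utils.plot_model(model, to_file='model.png', show_shapes=True,
                              show_dtype=True, show_layer_names=True, rankdir='LR')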
    
  • BUILD:tensorflow/compiler/mlir/quantization/tensorflow/debugging/mlir_dump.cc:93:10: error: could not convert 'dump_file' from 'std::unique_ptr<llvm::raw_fd_ostream>' to 'absl::lts_20220623::StatusOr<std::unique_ptr<llvm::raw_fd_ostream> >'

    Issue Type

    Bug

    Have you reproduced the bug with TF nightly?

    Yes

    Source

    source

    Tensorflow Version

    master

    Custom Code

    Yes

    OS Platform and Distribution

    Linux Ubuntu 18.04

    Mobile device

    No response

    Python version

    3.6

    Bazel version

    5.3.0

    GCC/Compiler version

    gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0

    CUDA/cuDNN version

    No response

    GPU model and memory

    No response

    Current Behaviour?

    configure:
    # ./configure
    You have bazel 5.3.0 installed.
    Please specify the location of python. [Default is /usr/local/bin/python3]: 
    
    
    Found possible Python library paths:
      /usr/lib/python3.6/dist-packages
      /usr/local/lib/python3.6/site-packages
    Please input the desired Python library path to use.  Default is [/usr/lib/python3.6/dist-packages]
    
    Do you wish to build TensorFlow with ROCm support? [y/N]: N
    No ROCm support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with CUDA support? [y/N]: N
    No CUDA support will be enabled for TensorFlow.
    
    Do you wish to download a fresh release of clang? (Experimental) [y/N]: N
    Clang will not be downloaded.
    
    Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -Wno-sign-compare]: 
    
    
    Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: N
    Not configuring the WORKSPACE for Android builds.
    
    Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
    	--config=mkl         	# Build with MKL support.
    	--config=mkl_aarch64 	# Build with oneDNN and Compute Library for the Arm Architecture (ACL).
    	--config=monolithic  	# Config for mostly static monolithic build.
    	--config=numa        	# Build with NUMA support.
    	--config=dynamic_kernels	# (Experimental) Build kernels into separate shared objects.
    	--config=v1          	# Build with TensorFlow 1 API instead of TF 2 API.
    Preconfigured Bazel build configs to DISABLE default on features:
    	--config=nogcp       	# Disable GCP support.
    	--config=nonccl      	# Disable NVIDIA NCCL support.
    Configuration finished
    

    Standalone code to reproduce the issue

    Build success.
    

    Relevant log output

    # bazel build  //tensorflow/tools/pip_package:build_pip_package
    Starting local Bazel server and connecting to it...
    INFO: Options provided by the client:
      Inherited 'common' options: --isatty=1 --terminal_columns=237
    INFO: Reading rc options for 'build' from /home/tensorflow/.bazelrc:
      Inherited 'common' options: --experimental_repo_remote_exec
    INFO: Reading rc options for 'build' from /home/tensorflow/.bazelrc:
      'build' options: --define framework_shared_object=true --define tsl_protobuf_header_only=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library --experimental_link_static_libraries_once=false --incompatible_enforce_config_setting_visibility
    INFO: Reading rc options for 'build' from /home/tensorflow/.tf_configure.bazelrc:
      'build' options: --action_env PYTHON_BIN_PATH=/usr/local/bin/python3 --action_env PYTHON_LIB_PATH=/usr/lib/python3.6/dist-packages --python_path=/usr/local/bin/python3 --action_env PYTHONPATH=/usr/lib/python3.6/dist-packages
    INFO: Reading rc options for 'build' from /home/tensorflow/.bazelrc:
      'build' options: --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/ir,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_jitrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/graph_executor,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils
    INFO: Found applicable config definition build:short_logs in file /home/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
    INFO: Found applicable config definition build:v2 in file /home/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
    INFO: Found applicable config definition build:linux in file /home/tensorflow/.bazelrc: --host_copt=-w --copt=-Wno-all --copt=-Wno-extra --copt=-Wno-deprecated --copt=-Wno-deprecated-declarations --copt=-Wno-ignored-attributes --copt=-Wno-array-bounds --copt=-Wunused-result --copt=-Werror=unused-result --copt=-Wswitch --copt=-Werror=switch --copt=-Wno-error=unused-but-set-variable --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++17 --host_cxxopt=-std=c++17 --config=dynamic_kernels --distinct_host_configuration=false --experimental_guard_against_concurrent_changes
    INFO: Found applicable config definition build:dynamic_kernels in file /home/tensorflow/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
    WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/tensorflow/runtime/archive/5a3ff2087ab590e6ac9c839c9dc43e520891b7de.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found
    WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/llvm/llvm-project/archive/e10e936315410abd222eb58911b1e20fbfa80baf.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found
    WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/google/ruy/archive/3286a34cc8de6149ac6844107dfdffac91531e72.zip failed: class java.io.FileNotFoundException GET returned 404 Not Found
    WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/openxla/stablehlo/archive/e2aa7fe97cd09f44d864079c4e8be98064e5b425.zip failed: class java.io.FileNotFoundException GET returned 404 Not Found
    WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/google/XNNPACK/archive/a50369c0fdd15f0f35b1a91c964644327a88d480.zip failed: class java.io.FileNotFoundException GET returned 404 Not Found
    WARNING: Download from https://golang.org/dl/?mode=json&include=all failed: class java.io.IOException connect timed out
    WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/cython/cython/archive/3.0.0a11.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found
    INFO: Analyzed target //tensorflow/tools/pip_package:build_pip_package (579 packages loaded, 32070 targets configured).
    INFO: Found 1 target...
    ERROR: /home/tensorflow/tensorflow/compiler/mlir/quantization/tensorflow/debugging/BUILD:11:11: Compiling tensorflow/compiler/mlir/quantization/tensorflow/debugging/mlir_dump.cc failed: (Exit 1): gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 159 arguments skipped)
    tensorflow/compiler/mlir/quantization/tensorflow/debugging/mlir_dump.cc: In function 'absl::lts_20220623::StatusOr<std::unique_ptr<llvm::raw_fd_ostream> > tensorflow::quantization::{anonymous}::CreateMlirDumpFile(absl::lts_20220623::string_view)':
    tensorflow/compiler/mlir/quantization/tensorflow/debugging/mlir_dump.cc:93:10: error: could not convert 'dump_file' from 'std::unique_ptr<llvm::raw_fd_ostream>' to 'absl::lts_20220623::StatusOr<std::unique_ptr<llvm::raw_fd_ostream> >'
       return dump_file;
              ^~~~~~~~~
    Target //tensorflow/tools/pip_package:build_pip_package failed to build
    Use --verbose_failures to see the command lines of failed build steps.
    INFO: Elapsed time: 298.379s, Critical Path: 51.14s
    INFO: 8290 processes: 1584 internal, 6706 local.
    FAILED: Build did NOT complete successfully
    
  • Update curl to 7.87.0

    This PR updates curl to 7.87.0 to fix the following vulnerabilities in the previously bundled curl 7.86.0 inside TensorFlow:

    • CVE-2022-43552: HTTP Proxy deny use-after-free 2022-12-21
    • CVE-2022-43551: Another HSTS bypass via IDN 2022-12-21

    See https://curl.se/docs/security.html

    Signed-off-by: Yong Tang [email protected]

  • Segmentation fault when running gen_nn_ops.fractional_avg_pool

    Issue Type

    Bug

    Have you reproduced the bug with TF nightly?

    Yes

    Source

    source

    Tensorflow Version

    2.10.0

    Custom Code

    Yes

    OS Platform and Distribution

    Ubuntu 22.04

    Mobile device

    No response

    Python version

    3.9

    Bazel version

    No response

    GCC/Compiler version

    No response

    CUDA/cuDNN version

    No response

    GPU model and memory

    No response

    Current Behaviour?

    A segmentation fault happens when the pooling_ratio list contains negative elements.
    

    Standalone code to reproduce the issue

    import tensorflow as tf
    import os
    import numpy as np
    from tensorflow.python.ops import gen_nn_ops
    try:
      arg_0_tensor = tf.random.uniform([5, 20, 30, 3], dtype=tf.float64)
      arg_0 = tf.identity(arg_0_tensor)
      arg_1_0 = 2
      arg_1_1 = -5.267949192431123
      arg_1_2 = -52.58578643762691
      arg_1_3 = 1
      arg_1 = [arg_1_0,arg_1_1,arg_1_2,arg_1_3,]
      arg_2 = True
      arg_3 = True
      deterministic = True
      seed = 87654321
      seed2 = 341261001
      out = gen_nn_ops.fractional_avg_pool(arg_0,arg_1,arg_2,arg_3,deterministic=deterministic,seed=seed,seed2=seed2,)
    except Exception as e:
      print("Error:"+str(e))
    
    import tensorflow as tf
    import os
    import numpy as np
    from tensorflow.python.ops import gen_nn_ops
    try:
      arg_0_tensor = tf.random.uniform([1, 10, 10, 1], dtype=tf.float64)
      arg_0 = tf.identity(arg_0_tensor)
      arg_1_0 = True
      arg_1_1 = -0.35668935305391647
      arg_1_2 = -0.7209753581353426
      arg_1_3 = -87
      arg_1 = [arg_1_0,arg_1_1,arg_1_2,arg_1_3,]
      arg_2 = True
      arg_3 = True
      deterministic = True
      seed = 87654321
      seed2 = 341261001
      out = gen_nn_ops.fractional_avg_pool(arg_0,arg_1,arg_2,arg_3,deterministic=deterministic,seed=seed,seed2=seed2,)
    except Exception as e:
      print("Error:"+str(e))
    
    
    
    Relevant log output

    2023-01-07 13:44:10.489552: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:44:10.493914: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:44:10.494017: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:44:10.494307: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
    To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2023-01-07 13:44:10.494924: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:44:10.495025: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:44:10.495113: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:44:10.840688: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:44:10.840834: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:44:10.840928: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:44:10.841010: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 4263 MB memory:  -> device: 0, name: NVIDIA GeForce GTX 1660 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5
    Error:{{function_node __wrapped__FractionalAvgPool_device_/job:localhost/replica:0/task:0/device:CPU:0}} Fractional average pooling is not yet supported on the batch nor channel dimension. [Op:FractionalAvgPool]
    Error:{{function_node __wrapped__FractionalAvgPool_device_/job:localhost/replica:0/task:0/device:CPU:0}} Both seed and seed2 should be 0 if deterministic is false. [Op:FractionalAvgPool]
    Error:Expected bool for argument 'pseudo_random' not -69.0.
    Error:Value for attr 'T' of uint32 is not in the list of allowed values: float, double, int32, int64
    	; NodeDef: {{node FractionalAvgPool}}; Op<name=FractionalAvgPool; signature=value:T -> output:T, row_pooling_sequence:int64, col_pooling_sequence:int64; attr=pooling_ratio:list(float),min=4; attr=pseudo_random:bool,default=false; attr=overlapping:bool,default=false; attr=deterministic:bool,default=false; attr=seed:int,default=0; attr=seed2:int,default=0; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE, DT_INT32, DT_INT64]> [Op:FractionalAvgPool]
    Segmentation fault
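
    For reference (an illustrative sketch, not part of the report), a call with a valid
    pooling_ratio keeps the batch and channel entries at 1 and the spatial ratios at or
    above 1, which avoids the invalid-argument paths exercised above:

    value = tf.random.uniform([5, 20, 30, 3], dtype=tf.float64)
    output, rows, cols = gen_nn_ops.fractional_avg_pool(
        value, [1.0, 1.44, 1.73, 1.0],
        pseudo_random=True, overlapping=True,
        deterministic=True, seed=87654321, seed2=341261001)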
    
    
  • Check failure when running tensorflow.python.ops.gen_experimental_dataset_ops.thread_pool_handle

    Issue Type

    Bug

    Have you reproduced the bug with TF nightly?

    Yes

    Source

    source

    Tensorflow Version

    2.10.0

    Custom Code

    Yes

    OS Platform and Distribution

    Ubuntu 22.04

    Mobile device

    No response

    Python version

    3.9

    Bazel version

    No response

    GCC/Compiler version

    No response

    CUDA/cuDNN version

    No response

    GPU model and memory

    No response

    Current Behaviour?

    Check failure with the following input combination.
    

    Standalone code to reproduce the issue

    import tensorflow as tf
    import os
    import numpy as np
    from tensorflow.python.ops import gen_experimental_dataset_ops
    try:
      num_threads = 0
      max_intra_op_parallelism = 1
      display_name = ""
      shared_name = "same"
      out = gen_experimental_dataset_ops.thread_pool_handle(num_threads=num_threads,max_intra_op_parallelism=max_intra_op_parallelism,display_name=display_name,shared_name=shared_name,)
    except Exception as e:
      print("Error:"+str(e))
    
    
    
    Relevant log output

    2023-01-07 13:40:27.789758: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:40:27.794099: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:40:27.794201: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:40:27.794494: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
    To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2023-01-07 13:40:27.795201: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:40:27.795303: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:40:27.795393: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:40:28.137882: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:40:28.138023: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:40:28.138116: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:40:28.138198: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 4263 MB memory:  -> device: 0, name: NVIDIA GeForce GTX 1660 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5
    Error:Expected string for argument 'shared_name' not -1.
    2023-01-07 13:40:28.172655: W tensorflow/core/framework/op_kernel.cc:1757] OP_REQUIRES failed at threadpool_dataset_op.cc:102 : INVALID_ARGUMENT: `num_threads` must be >= 0
    Error:{{function_node __wrapped__ThreadPoolHandle_device_/job:localhost/replica:0/task:0/device:CPU:0}} `num_threads` must be >= 0 [Op:ThreadPoolHandle]
    2023-01-07 13:40:28.180684: W tensorflow/core/framework/op_kernel.cc:1757] OP_REQUIRES failed at threadpool_dataset_op.cc:102 : INVALID_ARGUMENT: `num_threads` must be >= 0
    Error:{{function_node __wrapped__ThreadPoolHandle_device_/job:localhost/replica:0/task:0/device:CPU:0}} `num_threads` must be >= 0 [Op:ThreadPoolHandle]
    2023-01-07 13:40:28.187042: W tensorflow/core/framework/op_kernel.cc:1757] OP_REQUIRES failed at threadpool_dataset_op.cc:102 : INVALID_ARGUMENT: `num_threads` must be >= 0
    Error:{{function_node __wrapped__ThreadPoolHandle_device_/job:localhost/replica:0/task:0/device:CPU:0}} `num_threads` must be >= 0 [Op:ThreadPoolHandle]
    Error:Expected string for argument 'display_name' not False.
    2023-01-07 13:40:28.193170: F tensorflow/core/platform/threadpool.cc:99] Check failed: num_threads >= 1 (1 vs. 0)
    Aborted
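
    Based on the Check failed message above (an assumption, not verified here), the hard
    abort only appears when num_threads is 0; with num_threads of at least 1 the same
    call constructs a handle without tripping the check:

    handle = gen_experimental_dataset_ops.thread_pool_handle(
        num_threads=1, max_intra_op_parallelism=1,
        display_name="pool", shared_name="same")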
    
    
  • Illegal memory access when running math_ops.sparse_segment_sum

    Issue Type

    Bug

    Have you reproduced the bug with TF nightly?

    Yes

    Source

    source

    Tensorflow Version

    2.10.0

    Custom Code

    Yes

    OS Platform and Distribution

    Ubuntu 22.04

    Mobile device

    No response

    Python version

    3.9

    Bazel version

    No response

    GCC/Compiler version

    No response

    CUDA/cuDNN version

    No response

    GPU model and memory

    No response

    Current Behaviour?

    Illegal memory access when running with the following input combination.
    

    Standalone code to reproduce the issue

    import tensorflow as tf
    import os
    import numpy as np
    from tensorflow.python.ops import math_ops
    try:
      data_tensor = tf.random.uniform([10, 4], dtype=tf.float32)
      data = tf.identity(data_tensor)
      indices_0 = 8
      indices_1 = 3
      indices_2 = 0
      indices_3 = 9
      indices = [indices_0,indices_1,indices_2,indices_3,]
      segment_ids_0 = 1
      segment_ids_1 = 2
      segment_ids_2 = 2
      segment_ids_3 = 2
      segment_ids = [segment_ids_0,segment_ids_1,segment_ids_2,segment_ids_3,]
      out = math_ops.sparse_segment_sum(data=data,indices=indices,segment_ids=segment_ids,)
    except Exception as e:
      print("Error:"+str(e))
    
    
    
    Relevant log output

    2023-01-07 13:35:27.718173: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:35:27.722459: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:35:27.722561: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:35:27.722861: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
    To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2023-01-07 13:35:27.723830: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:35:27.723935: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:35:27.724027: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:35:28.065156: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:35:28.065293: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:35:28.065386: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2023-01-07 13:35:28.065467: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 4268 MB memory:  -> device: 0, name: NVIDIA GeForce GTX 1660 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5
    Error:{{function_node __wrapped__SparseSegmentSum_device_/job:localhost/replica:0/task:0/device:CPU:0}} Bad: indices[0] == -1 out of range [0, 10) [Op:SparseSegmentSum]
    Error:Value for attr 'Tsegmentids' of float is not in the list of allowed values: int32, int64
    	; NodeDef: {{node SparseSegmentSum}}; Op<name=SparseSegmentSum; signature=data:T, indices:Tidx, segment_ids:Tsegmentids -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_INT8, DT_INT64, DT_BFLOAT16, DT_UINT16, DT_HALF, DT_UINT32, DT_UINT64]; attr=Tidx:type,default=DT_INT32,allowed=[DT_INT32, DT_INT64]; attr=Tsegmentids:type,default=DT_INT32,allowed=[DT_INT32, DT_INT64]> [Op:SparseSegmentSum]
    Error:Value for attr 'T' of complex64 is not in the list of allowed values: float, double, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64
    	; NodeDef: {{node SparseSegmentSum}}; Op<name=SparseSegmentSum; signature=data:T, indices:Tidx, segment_ids:Tsegmentids -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_INT8, DT_INT64, DT_BFLOAT16, DT_UINT16, DT_HALF, DT_UINT32, DT_UINT64]; attr=Tidx:type,default=DT_INT32,allowed=[DT_INT32, DT_INT64]; attr=Tsegmentids:type,default=DT_INT32,allowed=[DT_INT32, DT_INT64]> [Op:SparseSegmentSum]
    Error:{{function_node __wrapped__SparseSegmentSum_device_/job:localhost/replica:0/task:0/device:GPU:0}} segment ids must be >= 0 [Op:SparseSegmentSum]
    Error:Value for attr 'Tsegmentids' of float is not in the list of allowed values: int32, int64
    	; NodeDef: {{node SparseSegmentSum}}; Op<name=SparseSegmentSum; signature=data:T, indices:Tidx, segment_ids:Tsegmentids -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_INT8, DT_INT64, DT_BFLOAT16, DT_UINT16, DT_HALF, DT_UINT32, DT_UINT64]; attr=Tidx:type,default=DT_INT32,allowed=[DT_INT32, DT_INT64]; attr=Tsegmentids:type,default=DT_INT32,allowed=[DT_INT32, DT_INT64]> [Op:SparseSegmentSum]
    Error:Can't convert Python sequence with mixed types to Tensor.
    Error:Value for attr 'Tsegmentids' of float is not in the list of allowed values: int32, int64
    	; NodeDef: {{node SparseSegmentSum}}; Op<name=SparseSegmentSum; signature=data:T, indices:Tidx, segment_ids:Tsegmentids -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_INT8, DT_INT64, DT_BFLOAT16, DT_UINT16, DT_HALF, DT_UINT32, DT_UINT64]; attr=Tidx:type,default=DT_INT32,allowed=[DT_INT32, DT_INT64]; attr=Tsegmentids:type,default=DT_INT32,allowed=[DT_INT32, DT_INT64]> [Op:SparseSegmentSum]
    Error:Value for attr 'T' of complex128 is not in the list of allowed values: float, double, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64
    	; NodeDef: {{node SparseSegmentSum}}; Op<name=SparseSegmentSum; signature=data:T, indices:Tidx, segment_ids:Tsegmentids -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_INT8, DT_INT64, DT_BFLOAT16, DT_UINT16, DT_HALF, DT_UINT32, DT_UINT64]; attr=Tidx:type,default=DT_INT32,allowed=[DT_INT32, DT_INT64]; attr=Tsegmentids:type,default=DT_INT32,allowed=[DT_INT32, DT_INT64]> [Op:SparseSegmentSum]
    Error:Value for attr 'Tidx' of float is not in the list of allowed values: int32, int64
    	; NodeDef: {{node SparseSegmentSum}}; Op<name=SparseSegmentSum; signature=data:T, indices:Tidx, segment_ids:Tsegmentids -> output:T; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_INT8, DT_INT64, DT_BFLOAT16, DT_UINT16, DT_HALF, DT_UINT32, DT_UINT64]; attr=Tidx:type,default=DT_INT32,allowed=[DT_INT32, DT_INT64]; attr=Tsegmentids:type,default=DT_INT32,allowed=[DT_INT32, DT_INT64]> [Op:SparseSegmentSum]
    Error:{{function_node __wrapped__SparseSegmentSum_device_/job:localhost/replica:0/task:0/device:GPU:0}} segment ids must be >= 0 [Op:SparseSegmentSum]
    Error:{{function_node __wrapped__SparseSegmentSum_device_/job:localhost/replica:0/task:0/device:GPU:0}} segment ids must be >= 0 [Op:SparseSegmentSum]
    Error:{{function_node __wrapped__SparseSegmentSum_device_/job:localhost/replica:0/task:0/device:CPU:0}} Bad: indices[2] == -2 out of range [0, 10) [Op:SparseSegmentSum]
    Error:{{function_node __wrapped__SparseSegmentSum_device_/job:localhost/replica:0/task:0/device:GPU:0}} segment ids must be >= 0 [Op:SparseSegmentSum]
    2023-01-07 13:35:28.213370: E tensorflow/stream_executor/cuda/cuda_event.cc:29] Error polling for event status: failed to query event: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
    2023-01-07 13:35:28.213399: F tensorflow/core/common_runtime/device/device_event_mgr.cc:221] Unexpected Event status: 1
    Aborted
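
    For reference (an illustrative sketch, not part of the report), a valid call uses
    in-range indices and non-negative, ordered segment_ids, and sums the selected rows
    of data per segment:

    data = tf.random.uniform([10, 4], dtype=tf.float32)
    out = math_ops.sparse_segment_sum(
        data=data,
        indices=tf.constant([0, 3, 8, 9]),
        segment_ids=tf.constant([0, 0, 1, 2]))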
    
    