An Open Source Machine Learning Framework for Everyone

TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications.

TensorFlow was originally developed by researchers and engineers working on the Google Brain team within Google's Machine Intelligence Research organization to conduct machine learning and deep neural networks research. The system is general enough to be applicable in a wide variety of other domains, as well.

TensorFlow provides stable Python and C++ APIs, as well as non-guaranteed backward-compatible APIs for other languages.

Keep up to date with release announcements and security updates by subscribing to announce@tensorflow.org. See all the mailing lists.

Install

See the TensorFlow install guide for the pip package, to enable GPU support, to use a Docker container, and to build from source.

To install the current release, which includes support for CUDA-enabled GPU cards (Ubuntu and Windows):

$ pip install tensorflow

A smaller CPU-only package is also available:

$ pip install tensorflow-cpu

To update TensorFlow to the latest version, add the --upgrade flag to the above commands.

Nightly binaries are available for testing using the tf-nightly and tf-nightly-cpu packages on PyPI.
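
As a quick sanity check (a minimal sketch, assuming a 2.x install), you can confirm the installed version and whether any GPU is visible:

$ python
>>> import tensorflow as tf
>>> tf.__version__
'2.x.y'
>>> tf.config.list_physical_devices('GPU')
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

The device list is empty on CPU-only installs, and '2.x.y' stands in for whatever release was installed.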

Try your first TensorFlow program

$ python
>>> import tensorflow as tf
>>> tf.add(1, 2).numpy()
3
>>> hello = tf.constant('Hello, TensorFlow!')
>>> hello.numpy()
b'Hello, TensorFlow!'
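
For a slightly larger, purely illustrative sketch, the snippet below trains a tiny Keras classifier on random data; the layer sizes, feature count, and class count are arbitrary choices, not taken from the tutorials:

import numpy as np
import tensorflow as tf

# Random stand-in data: 100 samples, 20 features, 3 classes.
x = np.random.rand(100, 20).astype("float32")
y = np.random.randint(0, 3, size=(100,))

# A minimal two-layer classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=32)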

For more examples, see the TensorFlow tutorials.

Contribution guidelines

If you want to contribute to TensorFlow, be sure to review the contribution guidelines. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.

We use GitHub issues for tracking requests and bugs; please see TensorFlow Discuss for general questions and discussion, and direct specific questions to Stack Overflow.

The TensorFlow project strives to abide by generally accepted best practices in open-source software development:

  • Fuzzing Status
  • CII Best Practices
  • Contributor Covenant

Continuous build status

Official Builds

| Build Type | Status | Artifacts |
| --- | --- | --- |
| Linux CPU | Status | PyPI |
| Linux GPU | Status | PyPI |
| Linux XLA | Status | TBA |
| macOS | Status | PyPI |
| Windows CPU | Status | PyPI |
| Windows GPU | Status | PyPI |
| Android | Status | Download |
| Raspberry Pi 0 and 1 | Status | Py3 |
| Raspberry Pi 2 and 3 | Status | Py3 |
| Libtensorflow macOS CPU | Status | Nightly GCS, Official GCS |
| Libtensorflow Linux CPU | Status | Nightly GCS, Official GCS |
| Libtensorflow Linux GPU | Status | Nightly GCS, Official GCS |
| Libtensorflow Windows CPU | Status | Nightly GCS, Official GCS |
| Libtensorflow Windows GPU | Status | Nightly GCS, Official GCS |

Community Supported Builds

| Build Type | Status | Artifacts |
| --- | --- | --- |
| Linux AMD ROCm GPU Nightly | Build Status | Nightly |
| Linux AMD ROCm GPU Stable Release | Build Status | Release 1.15 / 2.x |
| Linux s390x Nightly | Build Status | Nightly |
| Linux s390x CPU Stable Release | Build Status | Release |
| Linux ppc64le CPU Nightly | Build Status | Nightly |
| Linux ppc64le CPU Stable Release | Build Status | Release 1.15 / 2.x |
| Linux ppc64le GPU Nightly | Build Status | Nightly |
| Linux ppc64le GPU Stable Release | Build Status | Release 1.15 / 2.x |
| Linux aarch64 CPU Nightly (Linaro) | Build Status | Nightly |
| Linux aarch64 CPU Stable Release (Linaro) | Build Status | Release 1.x & 2.x |
| Linux aarch64 CPU Nightly (OpenLab), Python 3.6 | Build Status | Nightly |
| Linux aarch64 CPU Stable Release (OpenLab) | Build Status | Release 1.15 / 2.x |
| Linux CPU with Intel oneAPI Deep Neural Network Library (oneDNN) Nightly | Build Status | Nightly |
| Linux CPU with Intel oneAPI Deep Neural Network Library (oneDNN) Stable Release | Build Status | Release 1.15 / 2.x |
| Red Hat® Enterprise Linux® 7.6 CPU & GPU, Python 2.7, 3.6 | Build Status | 1.13.1 PyPI |

Community Supported Containers

| Container Type | Status | Artifacts |
| --- | --- | --- |
| TensorFlow aarch64 Neoverse-N1 CPU Stable (Linaro), Debian | Static | Release 2.3 |

Resources

Learn more about the TensorFlow community and how to contribute.

License

Apache License 2.0

Comments
  • DenseNet PB and TFLite models produce inaccurate results


    Issue Type: Performance
    Source: binary
    TensorFlow Version: TF 2.8.0
    Custom Code: No
    OS Platform and Distribution: Linux Ubuntu 20.04.4 LTS
    Mobile device: No response
    Python version: 3.8
    Bazel version: No response
    GCC/Compiler version: No response
    CUDA/cuDNN version: No response
    GPU model and memory: Intel UHD Graphics 620

    Current Behaviour?

    I am using the slim DenseNet models provided [here](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/densenet_2018_04_27.tgz), building and loading them using [Apache TVM](https://tvm.apache.org/).
    
    Although I applied the [indicated preprocessing](https://github.com/tensorflow/models/tree/master/research/slim/preprocessing), the models produce results that differ substantially from the ground truths in a small validation dataset I use, and the two models are consistent with each other in this respect.

    Also, they tend to predict similar ImageNet labels (e.g., IDs 1000, 340, and 341 appear in almost all of the top-5 predictions they produce).
    
    I suspect this is related to preprocessing somehow, but I have followed the process step by step and the results are the same.
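
    For reference only (not part of the original report), a minimal sketch of the inception-style resize-and-scale preprocessing that many slim image models expect; whether DenseNet uses this [-1, 1] scaling or a mean-subtraction variant must be checked against the linked preprocessing scripts, and the 224x224 input size is an assumption:

    import tensorflow as tf

    def preprocess(image, size=224):
        # image: uint8 HWC tensor; resize and rescale pixels to [-1, 1].
        image = tf.image.resize(tf.cast(image, tf.float32), (size, size))
        return image / 127.5 - 1.0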
    

    Standalone code to reproduce the issue

    1) Build the model into TVM modules (one for the PB model and one for the TFLite model).
    2) Execute the modules, applying preprocessing.
    3) Compare top-5 predictions with dataset ground truths.
    

    Relevant log output

    No response

  • tf.keras.datasets.cifar10.load_data(path='cifar-10-python.tar.gz')


    Issue Type: Feature Request
    Source: binary
    TensorFlow Version: 2.8.0
    Custom Code: No
    OS Platform and Distribution: Windows 10
    Mobile device: No response
    Python version: 3.10
    Bazel version: No response
    GCC/Compiler version: No response
    CUDA/cuDNN version: No response
    GPU model and memory: No response

    Current Behaviour?

    tf.keras.datasets.mnist.load_data(path='/user/.keras/datasets/mnist.npz') works fine, but tf.keras.datasets.cifar10.load_data() has no path parameter. This means that every time someone runs a Jupyter script that requires the CIFAR-10 dataset, the dataset has to be downloaded from the source again, which is tedious and time-consuming. :)
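
    For contrast, a minimal sketch of the asymmetry described above (the `path` keyword and its default are those of the MNIST loader):

    from tensorflow.keras import datasets

    # mnist.load_data lets the caller name the cached file:
    (x_train, y_train), _ = datasets.mnist.load_data(path='mnist.npz')

    # cifar10.load_data takes no arguments, so the cache location cannot be chosen:
    (x_train, y_train), (x_test, y_test) = datasets.cifar10.load_data()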
    

    Standalone code to reproduce the issue

    import autokeras as ak
    import matplotlib.pyplot as plt
    from tensorflow.keras.datasets import cifar10
    
    (x_train, y_train), (x_test, y_test) = cifar10.load_data()
    

    Relevant log output

    No response

  • `tf.image.resize` different result when inside a `tf.function`


    Issue Type: Bug
    Source: binary
    TensorFlow Version: v2.9.0-rc2-42-g8a20d54a3c1 2.9.0
    Custom Code: No
    OS Platform and Distribution: Linux Ubuntu 18.04
    Mobile device: No response
    Python version: 3.8
    Bazel version: No response
    GCC/Compiler version: No response
    CUDA/cuDNN version: No response
    GPU model and memory: No response

    Current Behaviour?

    When you put a `tf.image.resize` op inside a `tf.function` whilst using `tf.RaggedTensor`s, the result changes.
    

    Standalone code to reproduce the issue

    
    import numpy as np
    import tensorflow as tf
    np.random.seed(0)
    batch1 = tf.cast(tf.ragged.constant([255*np.random.uniform(size=(2000, 2000))]), tf.uint8)
    batch1 = tf.expand_dims(batch1, axis=-1)
    batch1 = tf.concat([batch1, batch1, batch1], axis=-1)
    
    sign = tf.RaggedTensorSpec((1, None, None, 3), tf.uint8, 2, tf.int64)
    
    @tf.function(input_signature=(sign,))
    def resize_tf(images):
      return tf.image.resize(images, (50, 50)) / 255.
      
    def resize_non_tf(images):
      return tf.image.resize(images, (50, 50)) / 255.
      
    print(tf.reduce_mean(resize_tf(batch1)))
    print(tf.reduce_mean(resize_non_tf(batch1)))
    
    Save the above as test.py and then run `python3 test.py`.
    

    Relevant log output

    tf.Tensor(0.49723607, shape=(), dtype=float32)
    tf.Tensor(0.497236, shape=(), dtype=float32)
    
  • GPU and NNAPI Delegate not available due to "dynamic-sized tensors"

    Issue Type: Support
    Source: binary
    TensorFlow Version: TF 2.5.0
    Custom Code: Yes
    OS Platform and Distribution: Windows 11
    Mobile device: Android 10
    Python version: 3.6.0
    Bazel version: No response
    GCC/Compiler version: No response
    CUDA/cuDNN version: Not used.
    GPU model and memory: No response

    Current Behaviour?

    I am developing an RNN to process time sequences. I am using both LSTM and GRU layers and need to convert the trained model to TFLite to use it on Android (version 10).
    The problem is that, despite my best efforts, I am unable to use the GPU and NNAPI delegates on the Android device because of the exception that is thrown:

    java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.

    Note that I am not using CUDA/cuDNN, since the laptop I train on does not have a GPU and the model is relatively simple.

    I don't know whether this depends on the batch size being set to None, on the TensorFlow version, on the tf.lite.OpsSet.SELECT_TF_OPS option used during conversion, or simply on the fact that these layer types (LSTM, GRU) cannot be supported by these delegates.
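
    As a side note (not part of the original report), one way to see whether the converted model actually carries dynamic dimensions is to inspect the interpreter's input details; 'model.tflite' below is a hypothetical path:

    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path='model.tflite')  # hypothetical path
    for detail in interpreter.get_input_details():
        # shape_signature reports -1 for dynamic dimensions
        print(detail['name'], detail['shape'], detail['shape_signature'])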
    

    Standalone code to reproduce the issue

    # X_train, y_train, X_val, y_val, callbacks, model_save_path and
    # tflite_save_path come from the surrounding training script (not shown).
    model = tf.keras.models.Sequential([
        tf.keras.layers.InputLayer(input_shape=(21 * 3 * 2 * 21,)),
        tf.keras.layers.Reshape((21, 3 * 2 * 21)),
        tf.keras.layers.LSTM(32, return_sequences=True),
        tf.keras.layers.Dropout(0.40, seed=42),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dropout(0.50, seed=42),
        tf.keras.layers.Dense(27, activation='softmax')])

    model.compile(
        optimizer='adam',
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy']
    )
    history = model.fit(
        X_train,
        y_train,
        epochs=2000,
        batch_size=256,
        validation_data=(X_val, y_val),
        callbacks=callbacks,
        shuffle=True,
    )

    model.save(model_save_path, include_optimizer=False)
    model = tf.keras.models.load_model(model_save_path)

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.experimental_new_converter = True
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                           tf.lite.OpsSet.SELECT_TF_OPS]

    tflite_quantized_model = converter.convert()

    open(tflite_save_path, 'wb').write(tflite_quantized_model)

    interpreter = tf.lite.Interpreter(model_path=tflite_save_path)
    interpreter.allocate_tensors()
    

    Relevant log output

    java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.
    
  • C++ compilation of rule '//tensorflow/core/kernels/image:extract_image_patches_op' failed (Exit 1): crosstool_wrapper_driver_is_not_gcc failed: error executing command


    Issue Type: Build/Install
    Source: source
    TensorFlow Version: TF 2.7
    Custom Code: Yes
    OS Platform and Distribution: Linux
    Mobile device: No response
    Python version: 3.7.12
    Bazel version: 3.7.2
    GCC/Compiler version: 7.3.1
    CUDA/cuDNN version: ROCm 5.1
    GPU model and memory: No response

    Current Behaviour?

    The build fails when I compile TF 2.7 from source code.
    

    Standalone code to reproduce the issue

    bazel build -c opt --config=rocm //tensorflow/tools/pip_package:build_pip_package --verbose_failures
    

    Relevant log output

    ERROR: /data/jenkins_workspace/workspace/tensorflow2x_release_bak/tensorflow/core/kernels/image/BUILD:241:18: C++ compilation of rule '//tensorflow/core/kernels/image:extract_image_patches_op' failed (Exit 1): crosstool_wrapper_driver_is_not_gcc failed: error executing command 
      (cd /root/.cache/bazel/_bazel_root/13db7a7ceaa19d7f3d62a2ad5999e2b1/execroot/org_tensorflow && \
      exec env - \
        LD_LIBRARY_PATH=/opt/dtk-22.04/hip/lib:/opt/dtk-22.04/llvm/lib:/opt/dtk-22.04/lib:/opt/dtk-22.04/lib64: \
        PATH=/data/jenkins_workspace/workspace/tensorflow2x_release_bak/Depend/bin:/opt/dtk-22.04/bin:/opt/dtk-22.04/llvm/bin:/opt/dtk-22.04/hip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
        PWD=/proc/self/cwd \
        PYTHON_BIN_PATH=/usr/bin/python3 \
        PYTHON_LIB_PATH=/usr/local/python3.7.12/lib/python3.7/site-packages \
        ROCBLAS_TENSILE_LIBPATH=/opt/dtk-22.04/lib/library \
        ROCM_PATH=/opt/dtk-22.04 \
        TF2_BEHAVIOR=1 \
      external/local_config_rocm/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections -fdata-sections '-std=c++14' -MD -MF bazel-out/k8-opt/bin/tensorflow/core/kernels/image/_objs/extract_image_patches_op/extract_image_patches_op.pic.d '-frandom-seed=bazel-out/k8-opt/bin/tensorflow/core/kernels/image/_objs/extract_image_patches_op/extract_image_patches_op.pic.o' -fPIC -DTENSORFLOW_USE_CUSTOM_CONTRACTION_KERNEL -DTENSORFLOW_USE_MKLDNN_CONTRACTION_KERNEL '-DEIGEN_ALTIVEC_USE_CUSTOM_PACK=0' -DEIGEN_MPL2_ONLY '-DEIGEN_MAX_ALIGN_BYTES=64' -iquote . -iquote bazel-out/k8-opt/bin -iquote external/com_google_absl -iquote bazel-out/k8-opt/bin/external/com_google_absl -iquote external/nsync -iquote bazel-out/k8-opt/bin/external/nsync -iquote external/eigen_archive -iquote bazel-out/k8-opt/bin/external/eigen_archive -iquote external/gif -iquote bazel-out/k8-opt/bin/external/gif -iquote external/libjpeg_turbo -iquote bazel-out/k8-opt/bin/external/libjpeg_turbo -iquote external/com_google_protobuf -iquote bazel-out/k8-opt/bin/external/com_google_protobuf -iquote external/com_googlesource_code_re2 -iquote bazel-out/k8-opt/bin/external/com_googlesource_code_re2 -iquote external/farmhash_archive -iquote bazel-out/k8-opt/bin/external/farmhash_archive -iquote external/fft2d -iquote bazel-out/k8-opt/bin/external/fft2d -iquote external/highwayhash -iquote bazel-out/k8-opt/bin/external/highwayhash -iquote external/zlib -iquote bazel-out/k8-opt/bin/external/zlib -iquote external/local_config_rocm -iquote bazel-out/k8-opt/bin/external/local_config_rocm -iquote external/png -iquote bazel-out/k8-opt/bin/external/png -iquote external/mkl_dnn_v1 -iquote bazel-out/k8-opt/bin/external/mkl_dnn_v1 -iquote external/double_conversion -iquote bazel-out/k8-opt/bin/external/double_conversion -iquote external/local_config_cuda -iquote bazel-out/k8-opt/bin/external/local_config_cuda -iquote external/local_config_tensorrt -iquote bazel-out/k8-opt/bin/external/local_config_tensorrt -Ibazel-out/k8-opt/bin/external/local_config_cuda/cuda/_virtual_includes/cuda_headers_virtual -Ibazel-out/k8-opt/bin/external/local_config_tensorrt/_virtual_includes/tensorrt_headers -isystem external/nsync/public -isystem bazel-out/k8-opt/bin/external/nsync/public -isystem third_party/eigen3/mkl_include -isystem bazel-out/k8-opt/bin/third_party/eigen3/mkl_include -isystem external/eigen_archive -isystem bazel-out/k8-opt/bin/external/eigen_archive -isystem external/gif -isystem bazel-out/k8-opt/bin/external/gif -isystem external/com_google_protobuf/src -isystem bazel-out/k8-opt/bin/external/com_google_protobuf/src -isystem external/farmhash_archive/src -isystem bazel-out/k8-opt/bin/external/farmhash_archive/src -isystem external/zlib -isystem bazel-out/k8-opt/bin/external/zlib -isystem external/local_config_rocm/rocm -isystem bazel-out/k8-opt/bin/external/local_config_rocm/rocm -isystem external/local_config_rocm/rocm/rocm/include -isystem bazel-out/k8-opt/bin/external/local_config_rocm/rocm/rocm/include -isystem external/local_config_rocm/rocm/rocm/include/rocrand -isystem bazel-out/k8-opt/bin/external/local_config_rocm/rocm/rocm/include/rocrand -isystem external/local_config_rocm/rocm/rocm/include/roctracer -isystem bazel-out/k8-opt/bin/external/local_config_rocm/rocm/rocm/include/roctracer -isystem external/png -isystem bazel-out/k8-opt/bin/external/png -isystem 
external/mkl_dnn_v1/include -isystem bazel-out/k8-opt/bin/external/mkl_dnn_v1/include -isystem external/mkl_dnn_v1/src -isystem bazel-out/k8-opt/bin/external/mkl_dnn_v1/src -isystem external/mkl_dnn_v1/src/common -isystem bazel-out/k8-opt/bin/external/mkl_dnn_v1/src/common -isystem external/mkl_dnn_v1/src/common/ittnotify -isystem bazel-out/k8-opt/bin/external/mkl_dnn_v1/src/common/ittnotify -isystem external/mkl_dnn_v1/src/cpu -isystem bazel-out/k8-opt/bin/external/mkl_dnn_v1/src/cpu -isystem external/mkl_dnn_v1/src/cpu/gemm -isystem bazel-out/k8-opt/bin/external/mkl_dnn_v1/src/cpu/gemm -isystem external/mkl_dnn_v1/src/cpu/x64/xbyak -isystem bazel-out/k8-opt/bin/external/mkl_dnn_v1/src/cpu/x64/xbyak -isystem external/double_conversion -isystem bazel-out/k8-opt/bin/external/double_conversion -isystem external/local_config_cuda/cuda -isystem bazel-out/k8-opt/bin/external/local_config_cuda/cuda -isystem external/local_config_cuda/cuda/cuda/include -isystem bazel-out/k8-opt/bin/external/local_config_cuda/cuda/cuda/include -w -DAUTOLOAD_DYNAMIC_KERNELS '-std=c++14' -DEIGEN_AVOID_STL_ARRAY -Iexternal/gemmlowp -Wno-sign-compare '-ftemplate-depth=900' -fno-exceptions '-DTENSORFLOW_USE_XLA=1' '-DTENSORFLOW_USE_ROCM=1' -DINTEL_MKL -msse3 -pthread '-DTENSORFLOW_USE_ROCM=1' '-DTENSORFLOW_USE_XLA=1' '-DINTEL_MKL=1' -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' '-DTENSORFLOW_USE_ROCM=1' -D__HIP_PLATFORM_HCC__ -DEIGEN_USE_HIP -no-canonical-prefixes -fno-canonical-system-headers -c tensorflow/core/kernels/image/extract_image_patches_op.cc -o bazel-out/k8-opt/bin/tensorflow/core/kernels/image/_objs/extract_image_patches_op/extract_image_patches_op.pic.o)
    Execution platform: @local_execution_config_platform//:platform
    In file included from external/eigen_archive/unsupported/Eigen/CXX11/Tensor:97:0,
                     from ./third_party/eigen3/unsupported/Eigen/CXX11/Tensor:1,
                     from ./tensorflow/core/kernels/image/extract_image_patches_op.h:19,
                     from tensorflow/core/kernels/image/extract_image_patches_op.cc:21:
    external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorImagePatch.h: In static member function 'static void Eigen::internal::EvalRange<Evaluator, StorageIndex, true>::run(Evaluator*, StorageIndex, StorageIndex) [with Evaluator = Eigen::TensorEvaluator<const Eigen::TensorAssignOp<Eigen::TensorMap<Eigen::Tensor<std::complex<float>, 4, 1, long int>, 16, Eigen::MakePointer>, const Eigen::TensorReshapingOp<const Eigen::DSizes<long int, 4>, const Eigen::TensorImagePatchOp<-1, -1, const Eigen::TensorMap<Eigen::Tensor<const std::complex<float>, 4, 1, long int>, 16, Eigen::MakePointer> > > >, Eigen::ThreadPoolDevice>; StorageIndex = long int]':
    external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorImagePatch.h:546:7: internal compiler error: in emit_move_insn, at expr.c:3698
           values[i] = coeff(index+i);
           ^~~~~~
    Please submit a full bug report,
    with preprocessed source if appropriate.
    See <https://gcc.gnu.org/bugs/> for instructions.
    Target //tensorflow/tools/pip_package:build_pip_package failed to build
    INFO: Elapsed time: 4326.567s, Critical Path: 534.32s
    INFO: 11449 processes: 1314 internal, 10135 local.
    FAILED: Build did NOT complete successfully
    FAILED: Build did NOT complete successfully
    
  • In pose detection, when I use the front camera it displays a mirrored image; how can I resolve this?

    Issue Type: Bug
    Source: source
    TensorFlow Version: pod 'TensorFlowLiteSwift', '~> 0.0.1-nightly', :subspecs => ['CoreML', 'Metal']
    Custom Code: No
    OS Platform and Distribution: macOS
    Mobile device: iPhone X
    Python version: No response
    Bazel version: No response
    GCC/Compiler version: No response
    CUDA/cuDNN version: No response
    GPU model and memory: No response

    Current Behaviour?

    When I use the front camera for pose detection, it displays a mirrored image. How can I resolve this? The camera resolution also needs to be clarified.
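
    (Not part of the original report.) Front-camera previews are usually mirrored, and a common fix is to flip the frame, or the predicted keypoints' x coordinates, horizontally. Expressed with TensorFlow ops purely for illustration; an iOS app would apply the equivalent transform to the camera frame before drawing:

    import tensorflow as tf

    def unmirror(frame):
        # frame: [height, width, channels] image tensor from the front camera
        return tf.image.flip_left_right(frame)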
    

    Standalone code to reproduce the issue

    The expected behavior is that the front camera shows a properly mirrored frame and resolution, i.e. like the front camera in the default iPhone camera app.
    

    Relevant log output

    No response
