Stars: 2905
Forks: 169
Language: Python
Last Updated: Jun 03, 2024
Similar Repos
| Repo | Language | Stars | Description | Updated At |
|---|---|---|---|---|
| | Python | 29 | Simple inference codes for Neural Network (AI) models | Mar 20, 2023 |
| | C++ | 3 | MNN is a lightweight deep neural network inference engine. | May 14, 2021 |
| | None | 2 | MNN is a lightweight deep neural network inference engine. | Oct 15, 2021 |
| | C++ | 1181 | FeatherCNN is a high performance inference engine for convolutional neural networks. | Aug 03, 2022 |
| | C++ | 2 | FeatherCNN is a high performance inference engine for convolutional neural networks. | May 14, 2021 |
| | Jupyter Notebook | 21 | Neural Network Bayesian Kernel Inference. | Oct 17, 2022 |
| | Python | 5 | Likelihood Inference Neural Network Accelerator | Mar 07, 2023 |
| | Python | 4 | Docker-based GPU inference of machine learning models | Mar 22, 2023 |
| | C++ | 4 | Integrate neural network inference into OpenFOAM | Jun 01, 2022 |
| | Python | 5 | Neural network inference on serverless architecture | May 09, 2021 |
| | C | 10 | A simple neural network inference framework | May 14, 2023 |
| | Python | 2306 | AITemplate is a Python framework which renders neural network into high performance CUDA/HIP C++ code. … | Oct 17, 2022 |
| | Python | 3 | AITemplate is a Python framework which renders neural network into high performance CUDA/HIP C++ code. … | Apr 09, 2023 |
| | Python | 2 | AITemplate is a Python framework which renders neural network into high performance CUDA/HIP C++ code. … | Mar 12, 2024 |
| | C++ | 781 | Fast inference engine for Transformer models | Apr 25, 2023 |
| | CMake | 73 | The benchmark of ncnn that is a high-performance neural network inference framework optimized for the … | Apr 23, 2023 |
| | C++ | 5 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 09, 2022 |
| | C++ | 15286 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 22, 2022 |
| | C++ | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Mar 05, 2023 |
| | None | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Apr 12, 2022 |
| | C | 3 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Mar 25, 2022 |
| | C++ | 89 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 08, 2022 |
| | C | 11 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Apr 20, 2023 |
| | C | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Dec 07, 2022 |
| | C++ | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Feb 16, 2023 |
| | C++ | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 28, 2022 |
| | C++ | 13 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Apr 28, 2022 |
| | C++ | 8 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 20, 2020 |
| | C++ | 3 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Oct 19, 2023 |
| | C++ | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Mar 12, 2024 |
| | C++ | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 21, 2023 |
| | Jupyter Notebook | 2 | Materials for RCC workshop "Performance Guidelines of Neural Network Models Part 2" | Jan 10, 2022 |
| | C++ | 323 | Benchmarking Neural Network Inference on Mobile Devices | Jul 21, 2022 |
| | C++ | 2 | Benchmarking Neural Network Inference on Mobile Devices | May 14, 2021 |
| | C# | 106 | GPU-based neural network implementation in Unity. | Apr 08, 2023 |
| | C++ | 3 | GPU-accelerated, C/C++ neural network library. | Mar 16, 2023 |
| | Rust | 11 | A GPU-accelerated neural network Rust crate. | Dec 26, 2022 |
| | C++ | 2 | Custom C++ inference engine for OpenNMT models | Apr 09, 2023 |
| | Jupyter Notebook | 3 | Releasing Artificial Neural Network Models | Jul 24, 2022 |
| | C++ | 506 | High performance cross-platform inference engine; you can run Anakin on x86-cpu, arm, nv-gpu, amd-gpu, bitmain, and cambricon devices. | Jul 29, 2022 |
| | Python | 413 | A GPU performance profiling tool for PyTorch models | Aug 09, 2022 |
| | Python | 15 | A GPU performance profiling tool for PyTorch models | May 19, 2023 |
| | C++ | 225 | Highly optimized inference engine for Binarized Neural Networks | Mar 31, 2023 |
| | C++ | 2 | Artificial Neural Network Interface Engine | Dec 26, 2018 |
| | C++ | 2 | Artificial Neural Network Interface Engine | Nov 30, 2018 |
| | C++ | 10 | Private and Reliable Neural Network Inference (CCS '22) | Apr 08, 2023 |
| | Python | 11 | A fast, minimally intrusive neural network inference library | Apr 13, 2023 |
| | Jupyter Notebook | 2 | Using a convolutional neural network to detect class | Apr 08, 2022 |
| | C++ | 17 | Benchmark framework of compute-in-memory based accelerators for deep neural networks (inference engine focused) | Oct 12, 2022 |
| | C++ | 8 | Benchmark framework of compute-in-memory based accelerators for deep neural networks (inference engine focused) | Apr 24, 2023 |