Stars | Forks | Language | Last Updated
---|---|---|---
2 | 0 | C | Aug 09, 2023
Similar Repos
Language | Stars | Description | Updated At
---|---|---|---
C++ | 5 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 09, 2022
C++ | 15286 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 22, 2022
C++ | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Mar 05, 2023
None | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Apr 12, 2022
C++ | 89 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 08, 2022
C | 11 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Apr 20, 2023
C | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Dec 07, 2022
C++ | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Feb 16, 2023
C++ | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 28, 2022
C++ | 13 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Apr 28, 2022
C++ | 8 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 20, 2020
C++ | 3 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Oct 19, 2023
C++ | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Mar 12, 2024
C++ | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 21, 2023
CMake | 73 | The benchmark of ncnn that is a high-performance neural network inference framework optimized for the … | Apr 23, 2023
C++ | 57 | ROS wrapper for NCNN neural inference framework | May 11, 2023
Perl | 2 | High Performance Computing optimized orthology inference | Nov 14, 2022
None | 4 | High-efficiency floating-point neural network inference operators for mobile, server, and Web | Jul 12, 2022
C | 1190 | High-efficiency floating-point neural network inference operators for mobile, server, and Web | Aug 30, 2022
C++ | 716 | dabnn is an accelerated binary neural networks inference framework for mobile platform | Aug 12, 2022
C++ | 3 | dabnn is an accelerated binary neural networks inference framework for mobile platform | May 14, 2021
C++ | 78 | A High Performance, Network Optimized, JSON Library | Apr 14, 2022
C++ | 323 | Benchmarking Neural Network Inference on Mobile Devices | Jul 21, 2022
C++ | 2 | Benchmarking Neural Network Inference on Mobile Devices | May 14, 2021
C++ | 9 | GUI demo of deep neural network inference with ncnn and imgui | Jul 18, 2022
C++ | 42 | High-performance operations for neural network potentials | Aug 12, 2022
C | 10 | a simple neural network inference framework | May 14, 2023
C | 1452 | Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators | Aug 04, 2022
C++ | 57 | ShaderNN is a lightweight deep learning inference framework optimized for Convolutional Neural Networks on mobile … | May 22, 2023
Shell | 15 | Simple High performance Infrastructure for Neural network Experiments | Feb 27, 2023
C++ | 1181 | FeatherCNN is a high performance inference engine for convolutional neural networks. | Aug 03, 2022
C++ | 2 | FeatherCNN is a high performance inference engine for convolutional neural networks. | May 14, 2021
HTML | 2 | Simple script to benchmark mobile inference frameworks (TFLite, MNN, ncnn, etc.) | Mar 19, 2021
Python | 502 | Neural Network Compression Framework for enhanced OpenVINO™ inference | Aug 09, 2022
C | 15 | high performance network framework with xdp | Apr 04, 2023
Erlang | 169 | High-Performance Erlang Network Client Framework | May 04, 2023
Python | 4 | Modular performance benchmarking framework for neural network simulations | Sep 11, 2022
C++ | 225 | Highly optimized inference engine for Binarized Neural Networks | Mar 31, 2023
Cuda | 5 | TenTrans High-Performance Inference Toolkit | Jun 10, 2022
C++ | 4666 | MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. | Aug 13, 2022
None | 2 | MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. | Mar 05, 2023
C++ | 2 | MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. | Jun 15, 2021
HTML | 13 | High performance video for the mobile web | Dec 23, 2021
MATLAB | 3 | Gripon-Berrou Neural Network (also called Cliques Neural Network) high-performance implementation in Octave/Matlab | Jul 31, 2022
Python | 3 | High energy physics, python-based, neural-network framework | May 13, 2022
Jupyter Notebook | 21 | Neural Network Bayesian Kernel Inference. | Oct 17, 2022
Python | 5 | Likelihood Inference Neural Network Accelerator | Mar 07, 2023
None | 2 | A cross-platform asynchronous high-performance C framework. | Nov 26, 2021
Python | 517 | Generate a quantization parameter file for ncnn framework int8 inference | May 14, 2023
None | 2 | FluidSharp is a high performance mobile first multi-platform UI layout framework based on Skia. | May 07, 2023