Stars: 2
Forks: 0
Language: C++
Last Updated: May 14, 2021
Similar Repos
Language | Stars | Description | Updated At
---|---|---|---
C++ | 323 | Benchmarking Neural Network Inference on Mobile Devices | Jul 21, 2022
Jupyter Notebook | 21 | Neural Network Bayesian Kernel Inference. | Oct 17, 2022
Python | 5 | Likelihood Inference Neural Network Accelerator | Mar 07, 2023
Python | 13 | EdgeAI Deep Neural Network Models Benchmarking | Jul 16, 2022
C++ | 4 | Integrate neural network inference into OpenFOAM | Jun 01, 2022
Python | 5 | Neural network inference on serverless architecture | May 09, 2021
C | 10 | A simple neural network inference framework | May 14, 2023
None | 4 | High-efficiency floating-point neural network inference operators for mobile, server, and Web | Jul 12, 2022
C | 1190 | High-efficiency floating-point neural network inference operators for mobile, server, and Web | Aug 30, 2022
Kotlin | 10 | This repo demonstrates how to use KotlinDL for neural network inference on Android devices. | Nov 26, 2022
C++ | 5 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 09, 2022
C++ | 15286 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 22, 2022
C++ | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Mar 05, 2023
None | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Apr 12, 2022
C | 3 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Mar 25, 2022
C++ | 89 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 08, 2022
C | 11 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Apr 20, 2023
C | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Dec 07, 2022
C++ | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Feb 16, 2023
C++ | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 28, 2022
C++ | 13 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Apr 28, 2022
C++ | 8 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 20, 2020
C++ | 3 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Oct 19, 2023
C++ | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Mar 12, 2024
C++ | 2 | ncnn is a high-performance neural network inference framework optimized for the mobile platform | Aug 21, 2023
Python | 4 | Modular performance benchmarking framework for neural network simulations | Sep 11, 2022
C++ | 22 | Demitasse: SPMD Programming Implementation of Deep Neural Network Library for Mobile Devices (NeurIPS 2016 WS) | Mar 22, 2022
Jupyter Notebook | 4 | Benchmarking LLM Inference Speeds | Oct 12, 2023
C++ | 10 | Private and Reliable Neural Network Inference (CCS '22) | Apr 08, 2023
Python | 11 | A fast minimally-intrusive neural network inference library | Apr 13, 2023
Python | 29 | Simple inference codes for Neural Network (AI) models | Mar 20, 2023
C | 1452 | Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators | Aug 04, 2022
C | 2 | Inference Engine for Binarized Neural Networks on Resource-Constrained Devices | Jul 01, 2023
C | 507 | Neural Network (NN) Streamer, Stream Processing Paradigm for Neural Network Apps/Devices. | Aug 20, 2022
Jupyter Notebook | 7 | Membership Inference Attacks and Defenses in Neural Network Pruning | Apr 16, 2022
C++ | 10 | Deep Neural Network inference using Xilinx Zynq-7000 chip. | Jul 16, 2022
Python | 502 | Neural Network Compression Framework for enhanced OpenVINO™ inference | Aug 09, 2022
Python | 2 | Feature Space Particle Inference for Neural Network Ensembles (ICML 2022) | Jun 03, 2022
C++ | 3 | MNN is a lightweight deep neural network inference engine. | May 14, 2021
C++ | 10 | Escoin: Efficient Sparse Convolutional Neural Network Inference on GPUs | Apr 03, 2022
C++ | 4 | Lightspeed C++ Neural Network (UE) Inference Library for Chess | Apr 14, 2023
None | 2 | MNN is a lightweight deep neural network inference engine. | Oct 15, 2021
Matlab | 7 | Neural Variational Inference and Learning for Sigmoid Belief Network | Mar 24, 2022
CMake | 73 | The benchmark of ncnn that is a high-performance neural network inference framework optimized for the … | Apr 23, 2023
C++ | 10 | Deep neural network inference transpiler tool for tflite and NNAPI | Feb 24, 2023
Python | 4 | Code for paper "Real-time Neural Network Inference on Extremely Weak Devices: Agile Offloading with Explainable … | Apr 24, 2023
Go | 4 | Storj Network benchmarking | Nov 08, 2021
C++ | 478 | Network Benchmarking Utility | Apr 17, 2023
Jupyter Notebook | 2 | Code and Presentation for the talk "Machine Learning Inference on Mobile Devices" | Feb 13, 2019
C++ | 9 | GUI demo of deep neural network inference with ncnn and imgui | Jul 18, 2022