Statistics for topic onnx
RepositoryStats tracks 579,129 GitHub repositories; 354 of these are tagged with the onnx topic. The most common primary language for repositories using this topic is Python (196). Other languages include C++ (57), Jupyter Notebook (21), Rust (15), and C# (13).
Stargazers over time for topic onnx
Most starred repositories for topic onnx
Trending repositories for topic onnx
Visualizer for neural network, deep learning and machine learning models
Speech-to-text, text-to-speech, speaker diarization, and VAD using next-gen Kaldi with onnxruntime, without an Internet connection. Supports embedded systems, Android, iOS, Raspberry Pi, RISC-V, x86_64 ser...
ncnn is a high-performance neural network inference framework optimized for the mobile platform
A utility to inspect, validate, sign and verify machine learning model files.
👀 Apply YOLOv8 exported with ONNX or TensorRT(FP16, INT8) to the Real-time camera
LightGlue-OnnxRunner is a repository that hosts the C++ inference code for LightGlue in ONNX format, supporting end-to-end and decoupled model inference of SuperPoint/DISK + LightGlue
MixTeX: multimodal LaTeX, Chinese-English, and table OCR. It performs efficient CPU-based inference locally and offline on Windows.
A utility to inspect, validate, sign and verify machine learning model files.
Burn is a new comprehensive dynamic Deep Learning Framework built using Rust with extreme flexibility, compute efficiency and portability as its primary goals.
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Visualizer for neural network, deep learning and machine learning models
Speech-to-text, text-to-speech, speaker diarization, and VAD using next-gen Kaldi with onnxruntime, without an Internet connection. Supports embedded systems, Android, iOS, Raspberry Pi, RISC-V, x86_64 ser...
YoloDotNet - A C# .NET 8.0 project for Classification, Object Detection, OBB Detection, Segmentation and Pose Estimation in both images and videos.
The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) and ready to deploy on Qualcomm® devices.
ONNXim is a fast cycle-level simulator that can model multi-core NPUs for DNN inference
Torch-only implementations of "bi-mamba2" and "vision-mamba2-torch". Supports 1D/2D/3D/nD inputs and export via jit.script/ONNX.
MixTeX: multimodal LaTeX, Chinese-English, and table OCR. It performs efficient CPU-based inference locally and offline on Windows.
🚀 Your YOLO deployment powerhouse. With the combined force of TensorRT Plugins, CUDA Kernels, and CUDA Graphs, enjoy lightning-fast inference speed.
AI Productivity Tool - Free and open-source, enhancing user productivity while ensuring privacy and data security. It provides efficient and convenient AI solutions, including but not limited to: buil...
The Qualcomm® AI Hub Models are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) and ready to deploy on Qualcomm® devices.
A collection of sample programs, notebooks, and tools which highlight the power of the MAX Platform
Open source real-time translation app for Android that runs locally
ONNX-compatible Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data
Model deployment white paper (CUDA | ONNX | TensorRT | C++) 🚀🚀🚀