19 results found

137 forks · 1.4k stars · other license · 22 open issues
A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
Created 2023-07-31
5,134 commits to main branch, last one a day ago
The simplest way to serve AI/ML models in production
Created 2022-07-06
1,298 commits to main branch, last one a day ago
37 forks · 732 stars · apache-2.0 license · 6 open issues
An open-source computer vision framework to build and deploy apps in minutes
Created 2023-07-21
664 commits to main branch, last one 7 months ago
87 forks · 564 stars · apache-2.0 license · 41 open issues
Python + Inference: a model deployment library in Python. The simplest model inference server ever.
Created 2022-04-04
55 commits to main branch, last one 2 years ago
The goal of RamaLama is to make working with AI boring.
Created 2024-07-24
977 commits to main branch, last one 2 days ago
A no-code object detection inference API using the YOLOv3 and YOLOv4 Darknet framework.
Created 2019-12-11
226 commits to master branch, last one 2 years ago
A no-code object detection inference API using YOLOv4 and YOLOv3 with OpenCV.
Created 2019-12-11
158 commits to master branch, last one 2 years ago
Work with LLMs in a local environment using containers
Created 2023-12-19
1,408 commits to main branch, last one 2 days ago
An object detection inference API using the TensorFlow framework.
Created 2019-12-11
415 commits to master branch, last one 2 years ago
31 forks · 150 stars · apache-2.0 license · 5 open issues
Serving AI/ML models in the open standard formats PMML and ONNX with both HTTP (REST API) and gRPC endpoints
Created 2020-04-16
48 commits to master branch, last one 2 months ago
Orkhon: ML Inference Framework and Server Runtime
Created 2019-05-18
102 commits to master branch, last one 3 years ago
ONNX Runtime Server: a server that provides TCP and HTTP/HTTPS REST APIs for ONNX inference.
Created 2023-09-04
132 commits to main branch, last one 27 days ago
10 forks · 101 stars · apache-2.0 license · 9 open issues
K3ai is a lightweight, fully automated, AI-infrastructure-in-a-box solution that lets anyone experiment quickly with Kubeflow pipelines, on anything from edge devices to laptops.
This repository has been archived.
Created 2020-09-30
90 commits to master branch, last one 3 years ago
Deploy DL/ML inference pipelines with minimal extra code.
Created 2020-04-09
503 commits to master branch, last one about a month ago
A standalone inference server for trained Rubix ML estimators.
Created 2019-01-02
282 commits to master branch, last one 10 months ago
Friendli: the fastest serving engine for generative AI
Created 2023-07-20
65 commits to main branch, last one 2 months ago
Wingman is the fastest and easiest way to run Llama models on your PC or Mac.
Created 2023-10-05
360 commits to develop branch, last one 6 months ago
Benchmark for machine learning model online serving (LLM, embedding, Stable Diffusion, Whisper)
Created 2023-06-12
17 commits to main branch, last one about a year ago