Statistics for topic deployment
RepositoryStats tracks 595,856 GitHub repositories; 459 of these are tagged with the deployment topic. The most common primary language for repositories using this topic is Python (74). Other languages include: Go (58), JavaScript (50), Shell (45), TypeScript (34), PHP (26), Jupyter Notebook (19), C++ (15), Ruby (15), and Java (13).
Stargazers over time for topic deployment
Most starred repositories for topic deployment
Trending repositories for topic deployment
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any... (a usage sketch of this swap appears after the repository list below)
Bare metal to production ready in mins; your own fly server on your VPS.
BMW Whatsapp Bot is your ultimate virtual companion for all things BMW. Get up-to-date information on the latest BMW models, promotions, and events directly to your WhatsApp. Stay connected with BMW i...
[EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit".
A list of Free Software network services and web applications which can be hosted on your own servers. With repository stars⭐ and forks🍴
KubeStellar - a flexible solution for challenges associated with multi-cluster configuration management for edge, multi-cloud, and hybrid cloud
An opinionated framework for deploying, managing, and serving application workloads
eGenix PyRun - Your friendly, lean, open source Python runtime
Deployment Editor simplifies software packaging with PSAppDeployToolkit (PSADT): an easy GUI lets you click together your PSADT deployment sequence.
A collection of configs, artifacts, and schemas for Hyperlane
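The Xinference entry above advertises swapping OpenAI for a self-hosted model by changing a single line of code. A minimal sketch of that pattern, assuming a locally running Xinference server exposing its OpenAI-compatible endpoint on the default port 9997 with a chat model already launched; the endpoint, placeholder API key, and model name below are illustrative assumptions, not taken from this page:

# Minimal sketch (assumptions: Xinference running locally on its default port 9997,
# a chat model already launched under the name used below; adjust both for your setup).
from openai import OpenAI

# The "single line" change: point the OpenAI client at the local Xinference endpoint
# instead of api.openai.com.
client = OpenAI(base_url="http://127.0.0.1:9997/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="qwen2.5-instruct",  # hypothetical model name; use whichever model you launched
    messages=[{"role": "user", "content": "Summarize what the deployment topic covers."}],
)
print(response.choices[0].message.content)

The rest of the calling code stays unchanged, which is the portability the description is pointing at.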