Statistics for topic distributed-computing
RepositoryStats tracks 595,856 GitHub repositories; of these, 195 are tagged with the distributed-computing topic. The most common primary language for repositories using this topic is Python (63). Other languages include: Go (16), C++ (13), Rust (13), and Jupyter Notebook (12).
Stargazers over time for topic distributed-computing
Most starred repositories for topic distributed-computing
Making large AI models cheaper, faster and more accessible
Distributed data engine for Python/SQL designed for the cloud, powered by Rust
A modular, primitive-first, python-first PyTorch library for Reinforcement Learning.
zenoh unifies data in motion, data in-use, data at rest and computations. It carefully blends traditional pub/sub with geo-distributed storages, queries and computations, while retaining a level of ti...
Hazelcast is a unified real-time data platform combining stream processing with a fast data store, allowing customers to act instantly on data-in-motion for real-time insights.
Trending repositories for topic distributed-computing
Run serverless GPU workloads with fast cold starts on bare-metal servers, anywhere in the world
37 traditional FL (tFL) or personalized FL (pFL) algorithms, 3 scenarios, and 24 datasets. www.pfllib.com/
Super-Efficient RLHF Training of LLMs with Parameter Reallocation
Create and control multiple Julia processes remotely for distributed computing. Ships as a Julia stdlib.
[NeurIPS 2024] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising
Tensor parallelism is all you need. Run LLMs on an AI cluster at home using any device. Distribute the workload, divide RAM usage, and increase inference speed.
The default client software to create images for the AI-Horde
An understandable, fast and scalable Raft Consensus implementation