Statistics for topic distributed-computing
RepositoryStats tracks 518,991 GitHub repositories; 175 of these are tagged with the distributed-computing topic. The most common primary language for repositories using this topic is Python (57). Other languages include Go (15), C++ (11), Jupyter Notebook (11), and Rust (11).
Stargazers over time for topic distributed-computing
Most starred repositories for topic distributed-computing
Trending repositories for topic distributed-computing
Hazelcast is a unified real-time data platform combining stream processing with a fast data store, allowing customers to act instantly on data-in-motion for real-time insights.
Making large AI models cheaper, faster and more accessible
Small scale distributed training of sequential deep learning models, built on NumPy and MPI.
Distributed DataFrame for Python designed for the cloud, powered by Rust
Modern web-based distributed hashcracking solution, built on hashcat
A Web Crawler based on LLMs implemented with Ray and Huggingface. The embeddings are saved into a vector database for fast clustering and retrieval
Run LLMs on weak devices or make powerful devices even more powerful by distributing the workload and dividing the RAM usage.
A modular, primitive-first, python-first PyTorch library for Reinforcement Learning.
IDDM (Industrial, landscape, animate...): supports DDPM, DDIM, PLMS, a web UI, and multi-GPU distributed training. PyTorch implementation of generative diffusion models with distributed training.
Automated Parallelization System and Infrastructure for Multiple Ecosystems
A user-friendly federated learning (FL) algorithm library with an integrated evaluation platform, aimed at beginners starting FL study.
Chains stable-diffusion-webui instances together to facilitate faster image generation.