Tensor parallelism is all you need. Run LLMs on an AI cluster at home using any devices. Distribute the workload, split RAM usage across nodes, and increase inference speed.
Created: 2023-12-04
292 commits to main branch, last one 3 days ago
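To illustrate the tensor-parallel idea behind the project (a minimal sketch, not the project's actual code): the rows of a layer's weight matrix can be sharded across devices, each device computes its slice of the output from the full input, and the slices are concatenated. All names below are illustrative.

```python
# Sketch of tensor parallelism for a single linear layer y = W @ x.
# Rows of W (output features) are sharded across hypothetical devices;
# each "device" computes its part of y independently.

def matvec(W, x):
    """Dense matrix-vector product: one row of W per output element."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def shard_rows(W, n_devices):
    """Split W's rows into n_devices contiguous shards (sizes differ by at most 1)."""
    k, r = divmod(len(W), n_devices)
    shards, start = [], 0
    for d in range(n_devices):
        end = start + k + (1 if d < r else 0)
        shards.append(W[start:end])
        start = end
    return shards

def parallel_matvec(W, x, n_devices):
    """Each shard multiplies the full input x; concatenating the partial
    outputs reproduces the full result, while each device only stores
    its slice of W (this is where the RAM savings come from)."""
    return [y for shard in shard_rows(W, n_devices) for y in matvec(shard, x)]

W = [[1, 2], [3, 4], [5, 6]]
x = [1, 1]
assert parallel_matvec(W, x, 2) == matvec(W, x)  # [3, 7, 11]
```

In a real cluster, each shard lives on a different machine, the input `x` is broadcast to all of them, and the partial outputs are gathered over the network; only the weight memory and the per-row compute are divided.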