10 results found

Chinese-Mixtral-8x7B (中文Mixtral-8x7B)
Created 2024-01-16
17 commits to main branch, last one 2 months ago
🤖️ An AI chat Telegram bot with web search, powered by GPT-3.5/4/4 Turbo/4o, DALL·E 3, Groq, Gemini 1.5 Pro/Flash, and the official Claude 2.1/3/3.5 APIs, built in Python and deployable on Zeabur, fly.io, and Replit.
Created 2022-12-06
778 commits to main branch, last one 15 hours ago
Like grep but for natural language questions. Based on Mistral 7B or Mixtral 8x7B.
Created 2023-12-06
92 commits to main branch, last one 3 months ago
21 · 255 · apache-2.0 · 8
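The "grep but for natural language" idea above ranks lines by semantic relevance to a question rather than by literal pattern match. The repo itself uses Mistral 7B or Mixtral 8x7B for this; as a library-agnostic stand-in, the ranking step can be sketched with plain term-frequency cosine similarity (all function names here are illustrative, not from the repo):

```python
import math
import re
from collections import Counter

def tf_vector(text):
    """Bag-of-words term-frequency vector for a line of text."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    if dot == 0:
        return 0.0
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def semantic_grep(question, lines, top_k=2):
    """Rank lines by similarity to the question: grep, but fuzzy."""
    q = tf_vector(question)
    ranked = sorted(lines, key=lambda ln: cosine(q, tf_vector(ln)), reverse=True)
    return ranked[:top_k]

lines = [
    "error: connection to database timed out",
    "user logged in successfully",
    "retrying database connection after timeout",
]
print(semantic_grep("why did the database time out?", lines))
```

A real implementation would swap `tf_vector` for LLM embeddings so that paraphrases with no shared words still match.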
🐳 Aurora is a Chinese-language MoE model: a follow-up work built on Mixtral-8x7B that activates the base model's Chinese open-domain chat capability.
Created 2023-12-18
83 commits to main branch, last one about a month ago
16 · 151 · apache-2.0 · 8
Fast Inference of MoE Models with CPU-GPU Orchestration
Created 2024-02-05
49 commits to main branch, last one 2 months ago
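CPU-GPU orchestration for MoE inference exploits the fact that only a few experts fire per token, so hot experts can stay resident on the GPU while cold ones live in CPU RAM. A toy sketch of that caching idea (the class and names are hypothetical, not the repo's API):

```python
from collections import OrderedDict

class ExpertCache:
    """Toy LRU cache modeling a GPU that holds only a few MoE experts;
    a cache miss stands in for copying expert weights from CPU RAM."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.on_gpu = OrderedDict()  # expert_id -> weights (stand-in string)
        self.copies = 0              # simulated CPU->GPU transfers

    def fetch(self, expert_id):
        if expert_id in self.on_gpu:
            self.on_gpu.move_to_end(expert_id)   # mark as recently used
            return self.on_gpu[expert_id]
        self.copies += 1                          # simulated PCIe transfer
        if len(self.on_gpu) >= self.capacity:
            self.on_gpu.popitem(last=False)       # evict least-recent expert
        self.on_gpu[expert_id] = f"weights[{expert_id}]"
        return self.on_gpu[expert_id]

cache = ExpertCache(capacity=2)
for expert in [0, 1, 0, 2, 0, 1]:   # router decisions, one per token
    cache.fetch(expert)
print(cache.copies)  # prints 4
```

Real systems add prefetching (start the transfer while earlier layers compute) and pick eviction policies from observed routing statistics rather than plain LRU.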
Build LLM-powered robots in your garage with MachinaScript For Robots!
Created 2024-01-31
39 commits to main branch, last one 2 months ago
Examples of RAG using LlamaIndex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Created 2023-12-08
22 commits to master branch, last one 4 months ago
8 · 88 · apache-2.0 · 1
Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline parallelism. Faster than ZeRO/ZeRO++/FSDP.
Created 2023-06-24
27 commits to master branch, last one 4 months ago
An innovative Python project that integrates AI-driven agents for Agile software development, leveraging advanced language models and collaborative task automation.
Created 2024-02-11
2 commits to main branch, last one 2 months ago
Examples of RAG using LangChain with local LLMs - Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Created 2024-01-12
15 commits to main branch, last one 5 months ago
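Both RAG example repos above follow the same retrieve-then-generate loop: find the documents most relevant to the query, pack them into the prompt, and ask a local LLM to answer from that context only. A minimal library-agnostic sketch, with keyword-overlap retrieval standing in for vector search and a placeholder in place of the model call (all names here are illustrative, not from either repo):

```python
def retrieve(query, docs, top_k=2):
    """Keyword-overlap retrieval: a stand-in for the embedding-based
    vector search a real LangChain/LlamaIndex pipeline would use."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(query, context):
    """Pack retrieved passages into a grounding prompt."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

def generate(prompt):
    """Placeholder for a local LLM call (e.g. Mixtral 8x7B behind an
    inference server); here it just reports the prompt size."""
    return f"[model would answer based on {len(prompt)} prompt chars]"

docs = [
    "Mixtral 8x7B is a sparse mixture-of-experts model.",
    "Phi-2 is a small 2.7B-parameter model.",
    "RAG grounds answers in retrieved documents.",
]
query = "what is a mixture-of-experts model"
ctx = retrieve(query, docs)
print(generate(build_prompt(query, ctx)))
```

The design point both repos share is that only the retriever and the model endpoint change between them; the prompt-assembly step stays the same.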