Statistics for topic transformers
RepositoryStats tracks 584,796 GitHub repositories; 829 of these are tagged with the transformers topic. The most common primary language for repositories using this topic is Python (571). Other languages include Jupyter Notebook (146) and TypeScript (14).
Stargazers over time for topic transformers
Most starred repositories for topic transformers
Trending repositories for topic transformers
Nexa SDK is a comprehensive toolkit for supporting ONNX and GGML models. It supports text generation, image generation, vision-language models (VLM), automatic speech recognition (ASR), and text-to-speech ...
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
🧑🏫 60+ Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), ga...
21 Lessons, Get Started Building with Generative AI 🔗 https://microsoft.github.io/generative-ai-for-beginners/
AI orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. W...
Implementation of π₀, the robotic foundation model architecture proposed by Physical Intelligence
Python package implementing transformers for preprocessing steps in machine learning.
A faster LayoutReader model based on LayoutLMv3 that sorts OCR bounding boxes into reading order.
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
A minimal TensorFlow.js re-implementation of Karpathy's minGPT (Generative Pre-trained Transformer). The GPT model itself is <300 lines of code.
Implementation of the Spline-Based Transformer proposed by Disney Research
Implementation of LVSM, SOTA Large View Synthesis with Minimal 3D Inductive Bias, from Adobe Research
Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024
[NeurIPS 2024 Oral][GPT beats diffusion🔥] [scaling laws in visual generation📈] Official impl. of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction". An *ultra-simp...
Chronos: Pretrained (Language) Models for Probabilistic Time Series Forecasting
Lumina-T2X is a unified framework for Text to Any Modality Generation
State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
An MLX port of FLUX based on the Hugging Face Diffusers implementation.
[Three Years of Interviews, Five Years of Mock Exams] An algorithm engineer's handbook covering interview and written-exam experience and practical knowledge across the AI industry, including AIGC, traditional deep learning, autonomous driving, machine learning, computer vision, natural language processing, SLAM, embodied AI, the metaverse, and AGI.
From anything to mesh like human artists. Official impl. of "MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers"
Streamline the fine-tuning process for multimodal models: PaliGemma, Florence-2, and Qwen2-VL
A collection of 🤗 Transformers.js demos and example applications
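Two of the entries above describe Transformers.js, which runs 🤗 Transformers models directly in the browser with no server. As a rough illustration of that workflow, here is a minimal sketch using the library's pipeline API; the chosen task, input text, and the @huggingface/transformers package name are illustrative assumptions, not details taken from the listings above.

```typescript
// Minimal sketch of client-side inference with Transformers.js (illustrative example).
import { pipeline } from '@huggingface/transformers';

// Build a sentiment-analysis pipeline; the model is downloaded once, cached,
// and then run locally (ONNX Runtime Web with WASM or WebGPU backends).
const classifier = await pipeline('sentiment-analysis');

// Inference happens entirely client-side, with no server round-trip.
const result = await classifier('Transformers.js runs right in the browser.');
console.log(result); // e.g. [{ label: 'POSITIVE', score: 0.99 }]
```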