Statistics for topic pretrained-models
RepositoryStats tracks 595,858 GitHub repositories; 208 of these are tagged with the pretrained-models topic. The most common primary language for repositories using this topic is Python (159). Other languages include Jupyter Notebook (25).
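A count like the one above can be reproduced, at least approximately, from the public GitHub Search API. The sketch below is illustrative only and not RepositoryStats' own pipeline (which is not documented here); it is Python using the requests library, the API's topic: search qualifier, and the total_count field of the response.

# Illustrative sketch: count public repositories tagged with a given topic
# via the GitHub Search API (unauthenticated requests are rate-limited).
import requests

resp = requests.get(
    "https://api.github.com/search/repositories",
    params={"q": "topic:pretrained-models", "per_page": 1},
    headers={"Accept": "application/vnd.github+json"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["total_count"])  # total repositories matching the topic qualifier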
Stargazers over time for topic pretrained-models
Most starred repositories for topic pretrained-models
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT)...
Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding
The official implementation of Achieving Cross Modal Generalization with Multimodal Unified Representation (NeurIPS '23)
A generalized framework for subspace tuning methods in parameter efficient fine-tuning.
Awesome Chinese LLM: a curated list of Chinese large language models and a compilation of Chinese LLM datasets and model resources.
Trending repositories for topic pretrained-models
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT)...
Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding
A generalized framework for subspace tuning methods in parameter efficient fine-tuning.
A PyTorch-based machine learning project for text classification that integrates multiple algorithms (XGBoost, LSTM, BERT, Mezha, etc.), provides basic datasets, works out of the box, is easy to extend for your own use, and is continuously updated.
The official code for "TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting (ICLR 2024)". TEMPO is one of the very first open source Time Series Foundation Models for fo...
Awesome Chinese LLM: a curated list of Chinese large language models and a compilation of Chinese LLM datasets and model resources.
[CVPR 2024 Extension] 160K-volume (42M-slice) datasets, new segmentation datasets, 31M-1.2B pre-trained models, various pre-training recipes, and implementations of 50+ downstream tasks.
Chronos: Pretrained Models for Probabilistic Time Series Forecasting
a state-of-the-art-level open visual language model | multimodal pretrained model
GPT4V-level open-source multi-modal model based on Llama3-8B