36 results found
Filter by primary language: Python (29), HTML (1), Jupyter Notebook (1)
✨✨VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction
Created 2024-08-10 · 127 commits to main branch, last one 20 hours ago
Open-source generative process automation (i.e., generative RPA): AI-first process automation with large language (LLM), action (LAM), multimodal (LMM), and visual language (VLM) models
Created 2023-04-12 · 937 commits to main branch, last one about a month ago
[NeurIPS 2024] An official implementation of ShareGPT4Video: Improving Video Understanding and Generation with Better Captions
Created 2024-06-06 · 44 commits to master branch, last one 4 months ago
A Framework of Small-scale Large Multimodal Models
Created 2024-02-21 · 223 commits to main branch, last one 16 days ago
LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills
Created 2023-11-07 · 404 commits to main branch, last one about a year ago
A collection of resources on applications of multi-modal learning in medical imaging.
Created 2022-07-13 · 156 commits to main branch, last one 2 days ago
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI"
Created 2023-11-23 · 144 commits to main branch, last one about a month ago
An open-source implementation for training LLaVA-NeXT.
Created 2024-05-11 · 36 commits to master branch, last one 3 months ago
LLaVA-Mini is a unified large multimodal model (LMM) that can support the understanding of images, high-resolution images, and videos in an efficient manner.
Created 2025-01-07 · 8 commits to main branch, last one about a month ago
[CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation
Created 2023-11-30 · 50 commits to main branch, last one 5 months ago
Open Platform for Embodied Agents
Created 2024-03-13 · 129 commits to main branch, last one about a month ago
A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision, llama-3.2-vision, qwen-vl, qwen2-vl, phi3-v etc.
Created 2024-07-20 · 109 commits to main branch, last one 11 days ago
The official evaluation suite and dynamic data release for MixEval.
Created 2024-06-01 · 120 commits to main branch, last one 3 months ago
[ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions
Created 2024-06-06 · 3 commits to master branch, last one 7 months ago
Embed arbitrary modalities (images, audio, documents, etc) into large language models.
Created 2023-10-11 · 84 commits to main branch, last one 10 months ago
[NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models"
Created 2024-03-29 · 19 commits to main branch, last one 4 months ago
A curated list of awesome Multimodal studies.
Created 2024-04-05 · 66 commits to main branch, last one 3 days ago
The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate".
Created 2024-10-09 · 17 commits to main branch, last one 2 months ago
[ECCV 2024] BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models
Created 2023-11-20 · 93 commits to main branch, last one 5 months ago
[ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models
Created 2024-09-04 · 14 commits to master branch, last one 4 months ago
[ICLR 2025] Reconstructive Visual Instruction Tuning
Created 2024-10-11 · 6 commits to master branch, last one 22 days ago
GeoPixel: A Pixel Grounding Large Multimodal Model for Remote Sensing, developed specifically for high-resolution remote sensing image analysis and offering advanced multi-target pixel grounding capabilities.
Created 2025-01-23 · 6 commits to main branch, last one 20 days ago
Evaluation framework for paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?"
Created 2024-04-02 · 26 commits to main branch, last one 3 months ago
(ICLR'25) A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents
Created 2024-07-26 · 16 commits to main branch, last one 13 days ago
The official repo for “TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding”.
Created 2024-04-15 · 20 commits to main branch, last one 4 months ago
A large multimodal model (LMM) that is a strict superset of its embedded LLM
Created 2024-08-23 · 5 commits to main branch, last one 3 months ago
[NeurIPS 2024] The official implementation of "Instruction-Guided Visual Masking"
Created 2024-04-17 · 71 commits to master branch, last one 3 months ago
A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo
Created 2024-06-12 · 11 commits to main branch, last one 7 months ago
[NAACL 2025 🔥] CAMEL-Bench is an Arabic benchmark for evaluating multimodal models across eight domains with 29,000 questions.
Created 2024-10-23 · 23 commits to main branch, last one 21 days ago
This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context"
Topics: llm, llms, benchmark, evaluation, multimodal, deep-learning, multimodality, computer-vision, machine-learning, foundation-models, deep-neural-networks, large-language-models, long-context-modeling, large-multimodal-models, long-context-transformers, visual-question-answering, natural-language-processing, multimodal-large-language-models
Created 2024-04-12 · 17 commits to main branch, last one 7 months ago