24 results found
Primary languages: Python (15), Jupyter Notebook (2), HTML (1), JavaScript (1)
Loki: Open-source solution designed to automate the process of verifying factuality
Created 2024-03-25; 48 commits to main branch, last one 5 months ago
Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models
Created 2023-03-20; 156 commits to main branch, last one 11 days ago
✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models
Created 2023-09-26; 107 commits to main branch, last one 2 months ago
RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by Large Language Models.
Created 2023-12-04; 102 commits to main branch, last one 4 months ago
[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Created 2023-06-15; 366 commits to main branch, last one 12 months ago
[CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
Created 2023-10-22; 136 commits to main branch, last one 3 months ago
Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasoning elevation🍓 and hallucination alleviation🍄.
Created 2024-06-01; 36 commits to master branch, last one 3 months ago
[ACL 2024] User-friendly evaluation framework: Eval Suite & Benchmarks: UHGEval, HaluEval, HalluQA, etc.
Created 2023-11-06; 167 commits to main branch, last one 3 months ago
😎 A curated list of awesome LMM hallucination papers, methods & resources.
This repository has been archived
Created 2023-10-11; 57 commits to main branch, last one 11 months ago
Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space"
Created 2024-02-27; 18 commits to main branch, last one 11 months ago
[NeurIPS 2024] Knowledge Circuits in Pretrained Transformers
Created 2024-01-16; 48 commits to main branch, last one 19 days ago
An up-to-date curated list of state-of-the-art research, papers & resources on hallucinations in large vision-language models
Topics: llm, mlm, lvlm, mllm, hallucination, hallucination-survey, large-language-models, hallucination-research, vision-language-models, hallucination-benchmark, hallucination-detection, hallucination-evaluation, hallucination-mitigation, multimodal-language-model, large-vision-language-models, multimodal-large-language-models
Created 2024-03-15; 54 commits to master branch, last one 16 days ago
This repository has no description...
Created 2023-12-14; 32 commits to main branch, last one 6 months ago
[IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection
Created 2023-04-05; 22 commits to main branch, last one 10 months ago
This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strategy.
Created 2024-01-23; 8 commits to main branch, last one 17 days ago
Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations"
Created 2023-12-23; 10 commits to main branch, last one about a year ago
Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without a custom rubric, reference answer, absolute or relative grading, and much more. It contains a list of all the availab...
Created 2024-05-11; 30 commits to main branch, last one 8 months ago
"Enhancing LLM Factual Accuracy with RAG to Counter Hallucinations: A Case Study on Domain-Specific Queries in Private Knowledge-Bases" by Jiarui Li and Ye Yuan and Zehua Zhang
Created 2024-02-17; 205 commits to main branch, last one 11 months ago
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
Created 2024-09-29; 17 commits to main branch, last one 3 months ago
OLAPH: Improving Factuality in Biomedical Long-form Question Answering
Created 2024-05-12; 84 commits to main branch, last one 6 months ago
[CVPR 2025] 3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs
Created 2024-06-12; 11 commits to main branch, last one 9 months ago
A novel alignment framework that leverages image retrieval to mitigate hallucinations in Vision Language Models.
Created 2025-02-18; 12 commits to main branch, last one 19 days ago
[ACL 2024] An Easy-to-use Hallucination Detection Framework for LLMs.
Created 2024-04-20; 41 commits to main branch, last one 13 days ago
Materials for the course Principles of AI: LLMs at UPenn (Stat 9911, Spring 2025). LLM architectures, training paradigms (pre- and post-training, alignment), test-time computation, reasoning, safety a...
Created 2024-12-18; 78 commits to main branch, last one 4 days ago