24 results found

Loki: Open-source solution designed to automate the process of verifying factuality
Created 2024-03-25
48 commits to main branch, last one 5 months ago

Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models
Created 2023-03-20
156 commits to main branch, last one 11 days ago
631 stars · license: unknown

✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models
Created 2023-09-26
107 commits to main branch, last one 2 months ago
351 stars · license: apache-2.0

RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by Large Language Models.
Created 2023-12-04
102 commits to main branch, last one 4 months ago
272 stars · license: bsd-3-clause

[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Created 2023-06-15
366 commits to main branch, last one 12 months ago
270 stars · license: bsd-3-clause

[CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
Created 2023-10-22
136 commits to main branch, last one 3 months ago

Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasoning elevation🍓 and hallucination alleviation🍄.
Created 2024-06-01
36 commits to master branch, last one 3 months ago
160 stars · license: apache-2.0

[ACL 2024] User-friendly evaluation framework: Eval Suite & Benchmarks: UHGEval, HaluEval, HalluQA, etc.
Created 2023-11-06
167 commits to main branch, last one 3 months ago

😎 A curated list of awesome LMM hallucination papers, methods & resources.
This repository has been archived.
Created 2023-10-11
57 commits to main branch, last one 11 months ago
144 stars · license: gpl-3.0

Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space"
Created 2024-02-27
18 commits to main branch, last one 11 months ago

[NeurIPS 2024] Knowledge Circuits in Pretrained Transformers
Created 2024-01-16
48 commits to main branch, last one 19 days ago

This repository has no description.
Created 2023-12-14
32 commits to main branch, last one 6 months ago

[IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection
Created 2023-04-05
22 commits to main branch, last one 10 months ago
75 stars · license: apache-2.0

This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and a Visual Debias Decoding strategy.
Created 2024-01-23
8 commits to main branch, last one 17 days ago

Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations"
Created 2023-12-23
10 commits to main branch, last one about a year ago

Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without a custom rubric or reference answer, with absolute or relative grading, and much more. It contains a list of all the availab...
Created 2024-05-11
30 commits to main branch, last one 8 months ago

"Enhancing LLM Factual Accuracy with RAG to Counter Hallucinations: A Case Study on Domain-Specific Queries in Private Knowledge-Bases" by Jiarui Li, Ye Yuan, and Zehua Zhang
Created 2024-02-17
205 commits to main branch, last one 11 months ago
40 stars · license: mit

[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
Created 2024-09-29
17 commits to main branch, last one 3 months ago
38 stars · license: unknown

OLAPH: Improving Factuality in Biomedical Long-form Question Answering
Created 2024-05-12
84 commits to main branch, last one 6 months ago

[CVPR 2025] 3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs
Created 2024-06-12
11 commits to main branch, last one 9 months ago
33 stars · license: apache-2.0

A novel alignment framework that leverages image retrieval to mitigate hallucinations in Vision Language Models.
Created 2025-02-18
12 commits to main branch, last one 19 days ago

Materials for the course Principles of AI: LLMs at UPenn (Stat 9911, Spring 2025). LLM architectures, training paradigms (pre- and post-training, alignment), test-time computation, reasoning, safety a...
Created 2024-12-18
78 commits to main branch, last one 4 days ago