22 results found.
Loki: Open-source solution designed to automate the process of verifying factuality
Created 2024-03-25; 48 commits to main branch, last one 4 months ago.
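The decompose-retrieve-verify loop that tools like Loki automate can be sketched as below. Every function body is an illustrative stand-in (the names `decompose`, `retrieve_evidence`, and `verify` are assumptions, not Loki's actual API); a real system backs each step with an LLM and a search engine.

```python
# Hedged sketch of an automated factuality-verification pipeline,
# in the spirit of tools like Loki. Not Loki's real interface.
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    supported: bool
    evidence: str

def decompose(text: str) -> list[str]:
    # Stand-in: a real pipeline prompts an LLM to split the text
    # into atomic, independently checkable claims.
    return [s.strip() for s in text.split(".") if s.strip()]

def retrieve_evidence(claim: str) -> str:
    # Stand-in: a real pipeline queries a search engine or a
    # retrieval index and returns the top passages.
    return f"<top search passages for: {claim!r}>"

def verify(claim: str, evidence: str) -> Verdict:
    # Stand-in: a real pipeline asks an LLM judge whether the
    # evidence supports, contradicts, or is neutral toward the claim.
    supported = claim.lower() in evidence.lower()
    return Verdict(claim, supported, evidence)

def check_factuality(text: str) -> list[Verdict]:
    return [verify(c, retrieve_evidence(c)) for c in decompose(text)]

if __name__ == "__main__":
    for v in check_factuality("The Eiffel Tower is in Paris. It opened in 1889."):
        print(v.supported, "-", v.claim)
```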
Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models
Created 2023-03-20; 150 commits to main branch, last one 7 months ago.
✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models. The first work to correct hallucinations in MLLMs
Created 2023-09-26; 107 commits to main branch, last one about a month ago.
RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by Large Language Models.
Created 2023-12-04; 102 commits to main branch, last one 2 months ago.
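RefChecker's fine-grained angle is checking at the level of extracted claim triplets rather than whole responses. A hedged sketch of that general idea follows; the extractor and checker below are toy stand-ins, not RefChecker's real interface.

```python
# Sketch of triplet-level hallucination checking, RefChecker-style.
# Each (subject, predicate, object) triplet extracted from a response
# is labeled against a reference text.
Triplet = tuple[str, str, str]

def extract_triplets(response: str) -> list[Triplet]:
    # Stand-in: in practice an LLM extractor turns the response into
    # knowledge triplets such as the one returned here.
    return [("Marie Curie", "won", "two Nobel Prizes")]

def label_triplet(triplet: Triplet, reference: str) -> str:
    # Stand-in: in practice an LLM or NLI checker returns one of
    # Entailment / Neutral / Contradiction per triplet.
    subj, _pred, obj = triplet
    if all(part.lower() in reference.lower() for part in (subj, obj)):
        return "Entailment"
    return "Neutral"

reference = "Marie Curie won two Nobel Prizes, in physics and in chemistry."
for t in extract_triplets("Marie Curie won two Nobel Prizes."):
    print(t, "->", label_triplet(t, reference))
```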
[CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
Created 2023-10-22; 136 commits to main branch, last one 2 months ago.
[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Created 2023-06-15; 366 commits to main branch, last one 10 months ago.
[ACL 2024] User-friendly evaluation framework: Eval Suite & Benchmarks: UHGEval, HaluEval, HalluQA, etc.
Created 2023-11-06; 167 commits to main branch, last one 2 months ago.
Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasoning elevation🍓 and hallucination alleviation🍄.
Created 2024-06-01; 36 commits to master branch, last one about a month ago.
😎 Curated list of awesome LMM hallucination papers, methods & resources.
This repository has been archived
Created 2023-10-11; 57 commits to main branch, last one 10 months ago.
Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space"
Created 2024-02-27; 18 commits to main branch, last one 10 months ago.
[NeurIPS 2024] Knowledge Circuits in Pretrained Transformers
Created 2024-01-16; 46 commits to main branch, last one about a month ago.
[IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection
Created 2023-04-05; 22 commits to main branch, last one 9 months ago.
This repository has no description...
Created 2023-12-14; 32 commits to main branch, last one 4 months ago.
Up-to-date curated list of state-of-the-art large vision-language model hallucination research: papers & resources.
Topics: llm, mlm, lvlm, mllm, hallucination, hallucination-survey, large-language-models, hallucination-research, vision-language-models, hallucination-benchmark, hallucination-detection, hallucination-evaluation, hallucination-mitigation, multimodal-language-model, large-vision-language-models, multimodal-large-language-models
Created 2024-03-15; 52 commits to master branch, last one 5 days ago.
This is the official repo for Debiasing Large Visual Language Models, including a post-hoc debiasing method and a Visual Debias Decoding strategy.
Created 2024-01-23; 7 commits to main branch, last one 10 months ago.
Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations"
Created 2023-12-23; 10 commits to main branch, last one 11 months ago.
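The paper's core idea, as the title suggests, is to deliberately induce a hallucination-prone model and then penalize its predictions at decode time. A minimal contrastive-decoding sketch of that pattern is below; the weighting formula and names are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of "induce-then-contrast"-style decoding: demote tokens
# favored by a deliberately hallucination-prone model.
import numpy as np

def contrastive_logits(base_logits: np.ndarray,
                       hallu_logits: np.ndarray,
                       alpha: float = 1.0) -> np.ndarray:
    # Amplify what the base model believes relative to the induced
    # hallucinator; tokens the weak model over-favors are suppressed.
    return (1.0 + alpha) * base_logits - alpha * hallu_logits

# Toy example over a 4-token vocabulary:
base = np.array([1.8, 2.0, 0.5, 0.1])   # base model's next-token logits
hallu = np.array([0.2, 2.5, 0.4, 0.1])  # induced-hallucination model's logits
print(base.argmax())                     # base alone picks token 1
print(contrastive_logits(base, hallu).argmax())  # contrast flips it to token 0
```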
Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without a custom rubric, reference answer, absolute or relative grading, and much more. It contains a list of all the availab...
Created 2024-05-11; 30 commits to main branch, last one 6 months ago.
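The judge pattern PHUDGE describes (score an answer against a rubric, optionally with a reference answer, in absolute or relative mode) reduces to prompt construction plus one model call. A hedged sketch of absolute-mode prompt building follows; `build_judge_prompt` and its layout are assumptions, not PHUDGE's interface.

```python
# Sketch of rubric-based LLM-as-judge prompting (absolute scoring).
def build_judge_prompt(question: str, answer: str, rubric: str,
                       reference: str | None = None) -> str:
    parts = [
        "You are an impartial judge. Score the answer from 1 to 5.",
        f"Rubric: {rubric}",
        f"Question: {question}",
        f"Answer to evaluate: {answer}",
    ]
    if reference:  # reference answer is optional, as in the repo's pitch
        parts.append(f"Reference answer: {reference}")
    parts.append("Return only the integer score.")
    return "\n".join(parts)

print(build_judge_prompt(
    question="What causes tides?",
    answer="Mostly the Moon's gravity, with a smaller solar contribution.",
    rubric="Factual accuracy and completeness.",
    reference="Tides are caused mainly by the Moon's gravitational pull.",
))
```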
"Enhancing LLM Factual Accuracy with RAG to Counter Hallucinations: A Case Study on Domain-Specific Queries in Private Knowledge-Bases" by Jiarui Li and Ye Yuan and Zehua Zhang
Created 2024-02-17; 205 commits to main branch, last one 10 months ago.
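The RAG recipe this case study applies is: embed the knowledge base, retrieve top-k passages for the query, and ground the prompt in them. A minimal sketch is below; the hash-seeded `embed` is a stand-in (so retrieval here is arbitrary), and a real pipeline would use a sentence-embedding model and an actual LLM call.

```python
# Minimal RAG sketch: retrieval grounds the LLM in a private knowledge
# base, which is what counters hallucination. Stand-ins throughout.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in embedder: hash-seeded random unit vectors. Swap in a
    # real sentence-embedding model for meaningful retrieval.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    # Rank passages by cosine similarity to the query embedding.
    q = embed(query)
    return sorted(passages, key=lambda p: float(embed(p) @ q), reverse=True)[:k]

def answer(query: str, passages: list[str]) -> str:
    context = "\n".join(retrieve(query, passages))
    # Stand-in for the LLM call: the grounded prompt restricts the
    # model to the retrieved context.
    return f"Answer using only this context:\n{context}\n\nQ: {query}\nA:"

kb = ["Policy X covers remote work.", "Policy Y covers travel reimbursement."]
print(answer("What does Policy X cover?", kb))
```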
OLAPH: Improving Factuality in Biomedical Long-form Question Answering
Created 2024-05-12; 84 commits to main branch, last one 4 months ago.
Official Implementation of 3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs
Created 2024-06-12; 11 commits to main branch, last one 7 months ago.
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
Created 2024-09-29; 17 commits to main branch, last one about a month ago.
[ACL 2024] An Easy-to-use Hallucination Detection Framework for LLMs.
Created 2024-04-20; 40 commits to main branch, last one 4 months ago.