19 results found · Primary language: Python (13), HTML (1), Jupyter Notebook (1)
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
Created
2023-10-31
263 commits to main branch, last one 3 days ago
List of papers on hallucination detection in LLMs.
Created
2023-09-15
147 commits to main branch, last one a day ago
✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models. The first work to correct hallucinations in MLLMs.
Created
2023-09-26
106 commits to main branch, last one 6 months ago
Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models"
Created
2023-08-21
28 commits to main branch, last one 8 months ago
A curated list of trustworthy deep learning papers, updated daily.
Topics: privacy, backdoor, fairness, green-ai, security, causality, ownership, poisoning, robustness, uncertainty, ai-alignment, watermarking, deep-learning, hallucinations, gradient-leakage, machine-unlearning, interpretable-deep-learning, membership-inference-attack, adversarial-machine-learning, out-of-distribution-generalization
Created
2020-07-19
646 commits to master branch, last one a day ago
Alignment toolkit to safeguard LLMs.
Created
2023-04-13
119 commits to main branch, last one 8 days ago
[ACL 2024] User-friendly evaluation framework: Eval Suite & Benchmarks: UHGEval, HaluEval, HalluQA, etc.
Created
2023-11-06
167 commits to main branch, last one about a month ago
Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space"
Created
2024-02-27
18 commits to main branch, last one 9 months ago
Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps"
Created
2024-07-08
7 commits to main branch, last one 4 months ago
Attack to induce hallucinations in LLMs.
Created
2023-09-29
22 commits to master branch, last one about a year ago
Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate.
Created
2023-11-15
7 commits to main branch, last one 3 months ago
mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigation
Created
2024-01-18
14 commits to main branch, last one 10 months ago
[ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding"
Created
2023-11-03
139 commits to OH_zoo branch, last one 17 days ago
Hallucinations (Confabulations) Document-Based Benchmark for RAG
Created
2024-10-10
36 commits to master branch, last one about a month ago
An Easy-to-use Hallucination Detection Framework for LLMs.
Created
2023-12-31
56 commits to main branch, last one 8 months ago
Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models"
Created
2023-09-11
9 commits to main branch, last one about a year ago
Framework for testing vulnerabilities of large language models (LLMs).
Created
2024-09-05
4 commits to release branch, last one 5 days ago
Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency
Created
2023-10-24
28 commits to main branch, last one 6 months ago
The implementation for the EMNLP 2023 paper "Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators"
Created
2023-10-12
12 commits to main branch, last one 11 months ago