7 results found

The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM observability all in one place.
Created 2023-04-26
12,067 commits to main branch, last one 2 days ago
Evaluate your LLM's responses with Prometheus and GPT-4 💯
Created 2024-04-18
209 commits to main branch, last one 26 days ago
🤠 Agent-as-a-Judge and DevAI dataset
Created 2024-10-16
20 commits to main branch, last one 5 months ago
[ICLR 2025] xFinder: Large Language Models as Automated Evaluators for Reliable Evaluation
Created 2024-05-19
41 commits to main branch, last one about a month ago
CodeUltraFeedback: aligning large language models to coding preferences
Created 2024-01-25
51 commits to main branch, last one 9 months ago
Repository for the survey of bias and fairness in IR with LLMs.
Created 2024-03-18
55 commits to main branch, last one 8 days ago
Official implementation for "MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?"
Created 2024-06-11
32 commits to main branch, last one about a month ago