7 results found

1. The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM observability all in one place.
   Created 2023-04-26; 11,084 commits to main branch, last one 3 days ago

2. Evaluate your LLM's responses with Prometheus and GPT4 💯
   Created 2024-04-18; 205 commits to main branch, last one 23 days ago

3. 🤠 Agent-as-a-Judge and the DevAI dataset
   Created 2024-10-16; 20 commits to main branch, last one 3 months ago

4. [ICLR 2025] xFinder: Large Language Models as Automated Evaluators for Reliable Evaluation
   Created 2024-05-19; 40 commits to main branch, last one 7 days ago

5. CodeUltraFeedback: aligning large language models to coding preferences
   Created 2024-01-25; 51 commits to main branch, last one 7 months ago

6. Repository for the survey of Bias and Fairness in IR with LLMs.
   Created 2024-03-18; 49 commits to main branch, last one 3 months ago

7. Official implementation of "MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?"
   Created 2024-06-11; 29 commits to main branch, last one 2 months ago