About

Evalverse is an open-source unified evaluation framework developed by Upstage AI that enables running multiple LLM benchmark suites through a single interface. The platform integrates popular benchmarks including lm-evaluation-harness, BigCode evaluation, and MT-Bench, allowing researchers to evaluate models across diverse tasks without configuring each benchmark separately. Evalverse also includes a Slack bot for convenient remote evaluation management and result tracking.
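As a rough sketch of what that single interface looks like, the project's README documents a Python entry point along the lines below; the class name, the run() parameters, and the "h6_en" benchmark key follow the README at the time of writing and may differ across versions.

    import evalverse as ev

    # One Evaluator object fronts every integrated benchmark suite;
    # "h6_en" refers to the six Open LLM Leaderboard tasks, which run
    # through lm-evaluation-harness under the hood.
    evaluator = ev.Evaluator()
    evaluator.run(
        model="upstage/SOLAR-10.7B-Instruct-v1.0",  # HF model id or local path
        benchmark="h6_en",
    )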

LLM Benchmarks

Evalverse is an open-source unified LLM evaluation framework integrating multiple benchmark suites in one interface.

Tool Details

Pricing: Free
Free Plan: Yes
Open Source: Yes
Rating: 4.4 (2 reviews)

AI Reviews

🤖 4.1 / 5
Evalverse by Upstage AI is an open-source, unified evaluation framework designed to streamline LLM benchmarking across multiple evaluation libraries. Its standout feature is the ability to orchestrate evaluations from different benchmark suites (including lm-evaluation-harness, BigCode Bench, and others) through a single, cohesive interface. The Slack integration for requesting and receiving evaluation results is a clever touch for team workflows. Being completely free and open-source makes it highly accessible for researchers and developers. The framework supports subcommand-based CLI usage and provides a structured approach to managing evaluation pipelines. However, documentation could be more comprehensive for newcomers, and the project's community is still relatively small compared to standalone benchmarking tools like lm-evaluation-harness. The dependency on multiple underlying evaluation libraries can also introduce complexity in setup and version management. That said, for teams running frequent, multi-benchmark evaluations, Evalverse offers genuine workflow improvements by consolidating what would otherwise be fragmented tooling into a unified pipeline.
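To make the multi-benchmark pipeline concrete: under the same assumed Evaluator interface sketched above, consolidating several separately configured tools reduces to a loop over benchmark keys. The keys shown follow the README's naming convention and should be checked against the installed release.

    import evalverse as ev

    evaluator = ev.Evaluator()
    model = "upstage/SOLAR-10.7B-Instruct-v1.0"

    # Each key dispatches to a different underlying library
    # (e.g. lm-evaluation-harness for h6_en, FastChat for mt_bench),
    # but the calling convention stays the same.
    for benchmark in ["h6_en", "mt_bench", "ifeval"]:
        evaluator.run(model=model, benchmark=benchmark)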

Category Ratings

LLM Benchmarks
4.1
Feb 15, 2026
AI-Generated Review: Generated via Anthropic API. This is an automated evaluation, not a consumer review.
🤖 4.6 / 5
Evalverse acts as a crucial unifying force in the fragmented landscape of LLM evaluation. Developed by Upstage AI, this open-source library aggregates distinct evaluation frameworks (such as the standard lm-evaluation-harness and prometheus-eval) into a single, streamlined workflow. This consolidation allows developers to run diverse benchmarks without juggling conflicting dependencies or disparate APIs. A standout feature is its support for "LLM-as-a-Judge" via Prometheus, enabling nuanced, reference-free grading. Additionally, the inclusion of built-in reporting tools, like Slack or Discord notifications, makes it highly practical for automated MLOps pipelines. While it requires some technical know-how to configure properly, Evalverse offers immense value for teams looking to standardize their model testing protocols without incurring costs.
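The reporting workflow the review highlights follows a similar shape. The sketch below assumes the Reporter class from the project's README, where update_db() collects finished evaluation runs from disk and run() builds a comparison report for the listed models; the names and signatures are version-dependent assumptions, not a verified API.

    import evalverse as ev

    # Aggregate results written by earlier Evaluator runs, then
    # produce a summary report for selected models and benchmarks.
    reporter = ev.Reporter(db_path="./db", output_path="./results")
    reporter.update_db(save=True)
    reporter.run(
        model_list=["SOLAR-10.7B-Instruct-v1.0"],
        benchmark_list=["h6_en"],
    )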

Category Ratings

LLM Benchmarks
4.6
Feb 15, 2026
AI-Generated Review: Generated via Google API. This is an automated evaluation, not a consumer review.
[Screenshot: Evalverse]

Added: Feb 15, 2026

github.com/UpstageAI/evalverse

Categories: LLM Benchmarks