April 21, 2026

The You.com Web Search Eval Harness: Benchmark Any Web Search Provider Yourself

Eddy Nassif

Senior Applied Scientist


AI systems grounded in web search are only as reliable as the search layer underneath them. When that layer retrieves inaccurate, stale, or incomplete information, the model compounds the error.

We have seen this play out repeatedly: developers and enterprise teams spend weeks configuring eval harnesses, only to discover their setup is inconsistent with how the original benchmarks were designed to be scored. Others accept vendor-published numbers at face value and make architecture decisions on claims they cannot reproduce. Either path leads to the same place: AI use cases that underperform in production, and teams that cannot pinpoint why.

The You.com Web Search Eval Harness is our attempt to remove that obstacle. It is an open-source evaluation framework that runs the four benchmarks that have emerged as the industry standard for measuring search accuracy in AI workflows: SimpleQA, FRAMES, BrowseComp, and DeepSearchQA. It works out of the box, produces results you can compare across providers, and is designed to be extended. 

Our goal is to give any developer or enterprise team a reliable starting point for vendor evaluation, so they can spend time on the AI use cases that matter rather than on eval infrastructure.

The Included Benchmarks

Not all search queries are alike, and no single benchmark covers the full spectrum from direct factual retrieval to deep, multi-source research. The You.com Web Search Eval Harness includes four benchmarks that, together, cover all of the bases:

  • SimpleQA (OpenAI) measures factual accuracy on direct questions with unambiguous answers. It is designed to be hard to game: questions are calibrated so that a model without access to current web information cannot reliably answer them. Results reflect whether the search layer is retrieving the right facts, not just relevant documents.
  • FRAMES (Google) tests accuracy on questions that require synthesizing structured information across a broader set of documents, covering the kind of multi-hop reasoning tasks common in enterprise research workflows.
  • BrowseComp (OpenAI) evaluates a system's ability to answer questions that require navigating across multiple sources, where no single page contains the complete answer. It measures the full retrieval and synthesis pipeline under conditions that resemble how research agents operate in practice.
  • DeepSearchQA (Google DeepMind) evaluates agents on complex, open-ended research tasks where correctness is measured by completeness and precision across an exhaustive answer set, not just the first correct result. It is the closest proxy for how search performs under production research workloads.

If there is a benchmark your team relies on that isn't included, open an issue on GitHub. We're all ears.

How the You.com Web Search Eval Harness Works

The Web Search Eval Harness handles dataset loading, query execution, answer synthesis, and scoring in a consistent pipeline across benchmarks. Scoring follows the methodology defined by the original benchmark authors. Where an original scoring model was deprecated, we selected the closest available replacement to preserve grading consistency.

The framework uses OpenAI's gpt-5.4-mini for grading and gpt-5.4-nano for synthesis, regardless of which search endpoints you are testing. Consistent grading requires a fixed, third-party model that is not tied to any vendor being evaluated. The eval harness is open source, so you can inspect or adjust these choices.
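
As a rough mental model, the per-query flow looks something like the sketch below. This is illustrative pseudostructure, not the harness's actual code; every function and type name in it is hypothetical. Only the fixed grading and synthesis model choices come from the description above.

  from dataclasses import dataclass
  from typing import Callable

  GRADER_MODEL = "gpt-5.4-mini"     # fixed third-party grader, per the post
  SYNTHESIS_MODEL = "gpt-5.4-nano"  # fixed synthesis model

  @dataclass
  class Item:
      question: str
      reference: str

  def run_benchmark(
      search: Callable[[str], list[str]],                # the provider under test
      synthesize: Callable[[str, list[str], str], str],  # answer synthesis step
      grade: Callable[[Item, str, str], bool],           # benchmark-defined grading
      dataset: list[Item],
  ) -> float:
      # Every provider runs through the same stages, so scores stay comparable.
      correct = 0
      for item in dataset:                                           # loaded dataset
          docs = search(item.question)                               # query execution
          answer = synthesize(item.question, docs, SYNTHESIS_MODEL)  # answer synthesis
          correct += grade(item, answer, GRADER_MODEL)               # scoring per benchmark methodology
      return correct / len(dataset)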

Clone the repo, add your API keys, and you're running evaluations in minutes.

Code Example

  python src/evals/eval_runner.py --samplers you_search_with_livecrawl --datasets simpleqa
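
To run all default providers and datasets, drop the flags:

  python src/evals/eval_runner.py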

Results are written to CSV in batches during execution, so you can monitor longer runs as they go. If a run is interrupted, it resumes without reprocessing completed queries. Full setup instructions and configuration options are in the README.

The Results

When a run completes, analyzed_results.csv contains accuracy and latency for every provider and benchmark combination you tested.
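
If you want to slice the output yourself, a few lines of pandas are enough. The column names below are assumptions; check the header of the CSV from your own run.

  import pandas as pd

  # Column names are assumptions; inspect your own analyzed_results.csv header.
  df = pd.read_csv("analyzed_results.csv")

  # One accuracy cell per provider x benchmark combination.
  print(df.pivot_table(index="provider", columns="benchmark", values="accuracy").round(3))

  # Latency distribution per provider.
  print(df.groupby("provider")["latency"].describe())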

Every vendor, including You.com, continuously updates their search endpoints, so results shift over time. For the latest numbers, see the results table on GitHub. The framework exists so you don't have to take anyone's snapshot on faith. When you want to know where things stand, run it yourself.

Getting Started

The repository is at github.com/youdotcom-oss/web-search-api-evals. The README covers setup, configuration, and how to run the evaluations.

You will need a You.com API key and an OpenAI API key to run the default configuration. The OpenAI key is used for the grading and synthesis models. 
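
A quick preflight check before a long run can save a failed batch. The environment variable names below are assumptions, not the harness's documented configuration; the README lists the exact ones it reads.

  import os

  # Assumed variable names; defer to the README for the ones the harness actually reads.
  for var in ("YDC_API_KEY", "OPENAI_API_KEY"):
      if not os.environ.get(var):
          raise SystemExit(f"Set {var} before running the eval harness")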

If you want to understand more about how we think about search accuracy, evaluation methodology, and what good performance looks like in production AI workflows, we have written about it here.

We want to hear from you. If you hit a configuration issue, have questions about your eval setup, want to request a benchmark, or just want to talk through how to evaluate search providers for your use case, start a conversation in GitHub Discussions.

For enterprise or private inquiries, reach out directly at [email protected]. We read it.
