2 Commits

Author SHA1 Message Date
corby
9f14b6e340 Universal Verifier (MMRubricAgent) + WebTailBench, autogen-free clients
Three changes in one PR:

1. Remove webeval's dependency on ``autogen-core`` / ``autogen-ext``.
   All chat completion clients, message types, and the graceful-retry
   layer now live under ``webeval/src/webeval/oai_clients/`` as
   self-contained wrappers around openai / azure-identity. Installation
   no longer needs the autogen submodule: just ``pip install -e .[vllm]``,
   then ``cd webeval; pip install -e .``.

2. Incorporate the initial (now-stale) WebTailBench benchmark into the
   codebase under ``webeval/src/webeval/benchmarks/webtailbench/``, with
   a driver script at ``webeval/scripts/webtailbench.py``. The loader
   auto-downloads ``WebTailBench-v1-rubrics.tsv`` from
   ``huggingface.co/datasets/microsoft/WebTailBench`` and threads each
   task's published ``precomputed_rubric`` through to the verifier so
   rubrics are never regenerated.
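   A rough sketch of what that loading step amounts to. Only
   ``hf_hub_download``, the dataset id, and the TSV filename come from the
   description above; the function names and TSV column names
   (``task_id``, ``precomputed_rubric``) are assumptions for
   illustration, not the actual webeval API:

   ```python
   import csv
   import io
   from typing import Iterator


   def parse_rubric_tsv(text: str) -> Iterator[dict]:
       """Parse the rubric TSV; column names are illustrative assumptions."""
       for row in csv.DictReader(io.StringIO(text), delimiter="\t"):
           # Carry the published rubric through unchanged so the
           # verifier never has to regenerate it.
           yield {"task_id": row["task_id"], "rubric": row["precomputed_rubric"]}


   def load_webtailbench_tasks() -> list[dict]:
       # hf_hub_download is the real huggingface_hub API for fetching a
       # single file from a Hugging Face dataset repo.
       from huggingface_hub import hf_hub_download

       path = hf_hub_download(
           repo_id="microsoft/WebTailBench",
           filename="WebTailBench-v1-rubrics.tsv",
           repo_type="dataset",
       )
       with open(path, encoding="utf-8") as f:
           return list(parse_rubric_tsv(f.read()))
   ```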

3. Release the Universal Verifier (``MMRubricAgent``) as the official
   judge for WebTailBench: a multimodal, rubric-grounded, two-model
   ensemble (``gpt-5.2`` + ``o4-mini``) with per-criterion scoring,
   outcome verification, ambiguity / invalid-task classification, and
   first-point-of-failure analysis. ``webeval/scripts/verify_trajectories.py``
   is a stand-alone parallel runner that re-scores any directory of
   webeval-shaped trajectories without touching the solver.
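   As a rough illustration of per-criterion ensemble scoring with
   first-point-of-failure analysis, here is a minimal sketch. The
   aggregation (plain averaging across judges, a 0.5 pass threshold) and
   all names are assumptions for illustration, not the actual
   ``MMRubricAgent`` logic:

   ```python
   from statistics import mean


   def ensemble_verify(
       rubric: list[str], judge_scores: dict[str, dict[str, float]]
   ) -> dict:
       """Combine per-criterion scores from several judge models.

       judge_scores maps judge model name -> {criterion: score in [0, 1]}.
       """
       # Average each criterion's score across all judges in the ensemble.
       per_criterion = {
           c: mean(judge_scores[m][c] for m in judge_scores) for c in rubric
       }
       # First point of failure: the earliest rubric criterion (in rubric
       # order) whose ensembled score falls below the pass threshold.
       failed = [c for c in rubric if per_criterion[c] < 0.5]
       return {
           "per_criterion": per_criterion,
           "score": mean(per_criterion.values()),
           "first_failure": failed[0] if failed else None,
       }
   ```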

Documentation: the repo-root README gains an ``Updates`` section and a
Reproducibility CLI block; ``webeval/README.md`` documents the
Trajectory / FinalAnswer schema, the ``<no_answer>`` semantics, and the
per-benchmark score-file shape.
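
Since the commit message does not show the schema itself, here is a
hypothetical sketch of what a Trajectory / FinalAnswer pair with
``<no_answer>`` semantics could look like; every field name below is an
assumption, not the documented webeval schema:

```python
from __future__ import annotations

from dataclasses import dataclass, field

# Sentinel string described in webeval/README.md; an agent emits it
# when it declines to produce an answer.
NO_ANSWER = "<no_answer>"


@dataclass
class FinalAnswer:
    text: str

    def is_abstention(self) -> bool:
        # True when the agent explicitly gave no answer.
        return self.text.strip() == NO_ANSWER


@dataclass
class Trajectory:
    task_id: str
    steps: list = field(default_factory=list)  # field names are assumptions
    final_answer: FinalAnswer | None = None
```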

Tests: 18 passing, 1 skipped (opt-in HF download).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 17:15:46 -07:00
ataymano@microsoft.com
ff0dbac1d1 Initial commit 2025-11-24 10:32:01 -05:00