DDR-Bench - Deep Data Research

Hunt Instead of Wait: Evaluating Deep Data Research on Large Language Models

We distinguish investigatory intelligence (autonomously setting goals and exploring) from executional intelligence (completing assigned tasks), arguing that true agency requires the former. To evaluate this, we introduce Deep Data Research (DDR), an open-ended task where LLMs autonomously extract insights from databases, and DDR-Bench, a large-scale, checklist-based benchmark enabling verifiable evaluation. Results show that while frontier models display emerging agency, long-horizon exploration remains challenging, with effective investigatory intelligence depending on intrinsic agentic strategies beyond mere scaffolding or scaling.

Framework Overview

Overview of DDR-Bench.


Task Formulation

A case of Claude Sonnet 4.5's trajectory and evaluation checklist in the MIMIC scenario of DDR-Bench. Verified facts and supporting insights are underlined. The agent performs multiple ReAct turns to explore the database without predefined targets or queries, autonomously mining insights from its exploration.
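
As a concrete illustration, the sketch below shows one way such an open-ended exploration loop could be wired up. It is a minimal sketch only: the `llm.chat` client, the read-only `run_sql` executor, the `<sql>` action format, the prompt wording, and the 40-turn budget are all assumptions for illustration, not DDR-Bench's actual harness.

```python
import re

# Assumed system prompt: no predefined question, agent decides when to stop.
SYSTEM = (
    "You are a data analyst. Explore the database by writing SQL wrapped in "
    "<sql></sql> tags, with no predefined question. When you are done, output "
    "'FINAL REPORT:' followed by the insights you mined, each grounded in the "
    "queries you ran."
)

def extract_sql(reply: str) -> str:
    """Pull the SQL between <sql> ... </sql> tags (an assumed action format)."""
    match = re.search(r"<sql>(.*?)</sql>", reply, re.DOTALL)
    return match.group(1).strip() if match else reply.strip()

def deep_data_research(llm, run_sql, schema: str, max_turns: int = 40) -> str:
    """Open-ended ReAct loop: think, query, observe, until the agent self-terminates."""
    history = [{"role": "system", "content": SYSTEM},
               {"role": "user", "content": f"Database schema:\n{schema}"}]
    for _ in range(max_turns):
        reply = llm.chat(history)                  # thought + SQL action, or the final report
        history.append({"role": "assistant", "content": reply})
        if "FINAL REPORT:" in reply:               # the agent decides it has explored enough
            return reply.split("FINAL REPORT:", 1)[1].strip()
        observation = run_sql(extract_sql(reply))  # execute against the structured data
        history.append({"role": "user", "content": f"Observation:\n{observation}"})
    return "No report produced within the turn budget."
```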


Evaluation Pipeline

Left: Compared with previous tasks, DDR maximises exploration openness and agency, focusing on direct evaluation of insight quality. Right: Overview of DDR-Bench. A checklist derived from the freeform parts of the database is used to evaluate the insights the agent generates while exploring its structured parts.
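
The checklist scoring itself can be pictured as below. This is a hedged sketch, not the benchmark's exact implementation: the `judge` callable, its prompt wording, and the equal-weight aggregation over items are assumptions.

```python
def checklist_accuracy(judge, report: str, checklist: list[str]) -> float:
    """Fraction of checklist items that the agent's report supports.

    `judge` is a hypothetical LLM-judge callable returning a short yes/no string.
    """
    if not checklist:
        return 0.0
    hits = 0
    for item in checklist:
        verdict = judge(
            f"Checklist fact: {item}\n"
            f"Agent report: {report}\n"
            "Does the report state or directly support this fact? Answer yes or no."
        )
        hits += verdict.strip().lower().startswith("yes")
    return hits / len(checklist)
```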

Agent Trajectory

Observe the autonomous decision-making process of the agent across different scenarios.

Exploring clinical patterns and patient outcomes in a large-scale electronic health record (EHR) database.


Benchmarking

Overall average accuracy across all scenarios and evaluation metrics.
Purple = Proprietary, Green = Open-source

Claude 4.5 Sonnet achieves the highest overall average accuracy at 47.73%, significantly outperforming other models. Among open-source models, DeepSeek-V3.2 leads with 38.80%, followed closely by GLM-4.6 (37.52%) and Kimi K2 (36.42%). The results demonstrate a clear performance gap between frontier proprietary models and open-source alternatives, though top open-source models remain competitive with mid-tier proprietary offerings.

Experiments

Main benchmark results and in-depth analysis of agent capabilities.

Scaling Analysis

Explore how model performance scales with interaction turns, token usage, and inference cost.

Scenarios: MIMIC, 10-K, GLOBEM

LLMs extract more accurate insights by delaying commitment: they concentrate reasoning into a small number of highly valuable late-stage interactions, and these targeted interactions are built on longer early exploration.

Novelty vs Accuracy

Novelty (Bradley-Terry) vs Accuracy ranking
● = Novelty, ◇ = Accuracy.
Purple = Proprietary, Green = Open-source

Scenarios: MIMIC, 10-K, GLOBEM

The ranking induced by the usefulness of models' novel insights closely aligns with the ranking based on checklist accuracy; differences between the two rankings are small, especially among the top-performing models.
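
For reference, the Bradley-Terry strengths behind the novelty ranking can be fit with the standard MM update. The sketch below assumes a pairwise win-count matrix (model i's insights preferred over model j's) produced by some pairwise judging protocol, which is itself an assumption; only the fitting step is shown.

```python
import numpy as np

def bradley_terry(wins: np.ndarray, iters: int = 200) -> np.ndarray:
    """Fit Bradley-Terry strengths from a pairwise win-count matrix.

    wins[i, j] = times model i's insight was preferred over model j's (diagonal = 0).
    Uses the standard MM update; strengths are normalized to sum to 1.
    """
    n = wins.shape[0]
    games = wins + wins.T                       # total comparisons per pair
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            others = np.arange(n) != i
            denom = (games[i, others] / (p[i] + p[others])).sum()
            if denom > 0:
                p[i] = wins[i].sum() / denom
        p /= p.sum()                            # fix the arbitrary scale each sweep
    return p
```

Ranking models by the resulting strengths and comparing against the checklist-accuracy ranking gives the alignment described above.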

Turn Distribution

Analyze the distribution of interaction turns across different models and datasets.

Scenarios: MIMIC, 10-K, GLOBEM

Stronger models tend to explore for more rounds without external prompting. Knowledge-intensive databases such as 10-K and MIMIC induce more interaction rounds than signal-based datasets such as GLOBEM, and the resulting distributions are also more uniform.

Exploration Pattern

Scatter plot of Access Entropy vs Coverage by model; opacity represents accuracy. Higher entropy = more uniform access; higher coverage = more fields explored.

Models: GPT-5.2, Claude-4.5-Sonnet, Gemini-3-Flash, GLM-4.6, Qwen3-Next-80B-A3B, DeepSeek-V3.2

Advanced LLMs tend to operate in a balanced exploration regime that combines adequate coverage with focused access. Such a regime is consistently observed across different scenarios.
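
The two axes can be made precise in a straightforward way. The sketch below assumes field-level access counts extracted from the agent's queries and normalizes entropy by the number of distinct fields accessed; both are illustrative choices rather than the benchmark's definition.

```python
import math
from collections import Counter

def exploration_metrics(accessed_fields: list[str], all_fields: set[str]):
    """Return (normalized access entropy, coverage) for one trajectory.

    accessed_fields: every field/column touched by the agent's queries, with repeats.
    all_fields: the full set of fields in the database schema.
    """
    if not accessed_fields:
        return 0.0, 0.0
    counts = Counter(accessed_fields)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    # Normalize by the maximum entropy achievable over the distinct fields accessed.
    access_entropy = entropy / math.log(len(counts)) if len(counts) > 1 else 0.0
    coverage = len(counts) / len(all_fields)
    return access_entropy, coverage
```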

Error Analysis

Breakdown of error types encountered during agent interactions, grouped by main categories.

Our analysis reveals that 58% of errors stem from insufficient exploration, in both breadth and depth. This imbalance often leads to suboptimal results regardless of a model's overall capability. The remaining errors, around 40%, are attributed to other factors. More powerful models commonly over-reason, making assumptions not fully supported by the data, or misinterpret the insights, for example mistaking a downward trend for an upward one. Less capable models tend to make more fundamental errors, such as repeatedly debugging or struggling with missing data, which can disrupt the overall coherence of the analysis.

Self-Termination

Analyze the willingness of models to terminate their own analysis.

Scenarios: MIMIC, GLOBEM, 10-K

Clear differences emerge across model generations. Qwen3 and Qwen3-Next exhibit a consistently increasing termination probability, indicating growing confidence that a complete report can be produced as more information accumulates, whereas the Qwen2.5 series shows pronounced fluctuations and remains uncertain about whether exploration can be terminated at the current step. Moreover, Qwen3-Next maintains higher confidence with lower variance throughout, suggesting a steadier sense that its exploration is progressing towards a more comprehensive and deeper report.
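
One way to measure this willingness, sketched below under assumed details: at each turn of a recorded trajectory the model is probed several times on whether it could already write a complete report, and the yes-rate at each turn gives the termination-probability curve. The probe wording, the sample count `k`, and the `llm.chat` interface are all assumptions for illustration.

```python
def termination_curve(llm, trajectory_prefixes: list[str], k: int = 8) -> list[float]:
    """Per-turn probability that the model judges it can already write a full report."""
    probe = ("Given your exploration so far, can you already write a complete, "
             "well-supported report? Answer yes or no.")
    curve = []
    for prefix in trajectory_prefixes:           # prefix = conversation up to turn t
        yes = 0
        for _ in range(k):
            answer = llm.chat([{"role": "user", "content": prefix + "\n\n" + probe}])
            yes += answer.strip().lower().startswith("yes")
        curve.append(yes / k)
    return curve
```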