Framework Overview
Overview of DDR-Bench.
Task Formulation
A case study of Claude Sonnet 4.5's trajectory and evaluation checklist in the MIMIC scenario of DDR-Bench. Verified facts and supporting insights are underlined. The agent performs multiple ReAct turns to explore the database without predefined targets or queries, autonomously mining insights from the exploration.
Evaluation Pipeline
Left: Compared with previous tasks, DDR maximises exploration openness and agency, focusing on direct evaluation of insight quality. Right: Overview of DDR-Bench. The checklist derived from the freeform parts of the database is used to evaluate the agent-generated insights from exploration of the structured parts of the database.
Agent Trajectory
Observe the autonomous decision-making process of the agent across different scenarios.
Exploring clinical patterns and patient outcomes in a large-scale electronic health record (EHR) database.
Benchmarking
Overall average accuracy across all scenarios and evaluation metrics.
Purple = Proprietary
Green = Open-source
Claude Sonnet 4.5 achieves the highest overall average accuracy at 47.73%, significantly outperforming other models. Among open-source models, DeepSeek-V3.2 leads with 38.80%, followed closely by GLM-4.6 (37.52%) and Kimi K2 (36.42%). The results demonstrate a clear performance gap between frontier proprietary models and open-source alternatives, though top open-source models remain competitive with mid-tier proprietary offerings.
Experiments
Main benchmark results and in-depth analysis of agent capabilities.
Overall Performance
Systematic evaluation of mainstream LLMs across MIMIC, 10-K, and GLOBEM datasets reveals persistent limitations in frontier models.
Training-time Factors Analysis
Training-time factors study within the Qwen family. From left to right, the three columns examine inference-time scaling performance across all scenarios for models with different parameter scales, context optimisation methods, and model generations with different training strategies.
Reasoning Budget
Increasing the reasoning budget reduces the number of interaction rounds, revealing a trade-off between reasoning depth and exploration efficiency.
Memory Mechanism
Long-short-term memory mechanisms can produce unpredictable behavior, often increasing tool usage without consistently improving final accuracy.
Proactive vs Reactive
Models perform significantly better with explicit queries (Reactive), highlighting the difficulty of true proactive goal formulation.
Hallucination Analysis
Hallucination rates (%) across models in DDR-Bench, measured as the proportion of insights containing factual-sounding but unfaithful information that is not derivable from the provided inputs. Rates are uniformly low.
Hallucination-Accuracy Correlation
Hallucination rates show almost no correlation with final accuracy, indicating robustness against metric inflation via memorization.
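The rank-agreement check behind this claim can be sketched as a Spearman correlation between per-model hallucination rates and accuracies. The numbers below are illustrative placeholders, not DDR-Bench results, and the pure-Python `spearman` helper is our own, not the benchmark's code.

```python
# Sketch: rank correlation between hallucination rate and final accuracy.
# All numbers below are illustrative, not actual DDR-Bench figures.

def rank(values):
    """Assign 1-based ranks (1 = smallest); ties get the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

halluc = [2.1, 3.4, 1.8, 2.9, 3.1]       # hypothetical hallucination rates (%)
acc = [47.7, 38.8, 37.5, 36.4, 30.2]     # hypothetical accuracies (%)
rho = spearman(halluc, acc)
print(f"Spearman rho = {rho:.2f}")
```

A rho near zero, as the figure reports, would mean models cannot inflate accuracy by hallucinating memorised facts.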
Trustworthiness
Verification of the LLM-as-a-Checker pipeline demonstrates high alignment with human expert judgments and stability across multiple runs.
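One standard way to quantify checker-human alignment on binary pass/fail checklist judgments is Cohen's kappa, which corrects raw agreement for chance. This is a minimal sketch with made-up labels; the actual verification protocol used by DDR-Bench may differ.

```python
# Sketch: chance-corrected agreement between the LLM checker and a human
# expert on binary checklist verdicts. Labels below are illustrative.

def cohens_kappa(a, b):
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

checker = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # hypothetical checker verdicts
human = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]    # hypothetical human verdicts
print(f"kappa = {cohens_kappa(checker, human):.2f}")  # 9/10 raw agreement
```

Values above roughly 0.8 are conventionally read as strong agreement; running the checker several times and comparing kappa across runs would also probe the stability claim.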
Scaling Analysis
Explore how model performance scales with interaction turns, token usage, and inference cost.
MIMIC
10-K
GLOBEM
LLMs extract more accurate insights by delaying commitment, concentrating reasoning into a small number of highly valuable late-stage interactions built upon longer early exploration.
Novelty vs Accuracy
Novelty (Bradley-Terry) vs Accuracy ranking
● = Novelty, ◇ = Accuracy.
Purple = Proprietary
Green = Open-source
MIMIC
10-K
GLOBEM
The ranking induced by novel insight usefulness closely aligns with the ranking based on checklist accuracy. Differences between the two rankings are small, especially among the top-performing models.
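The novelty ranking above comes from a Bradley-Terry model over pairwise comparisons. A minimal sketch of fitting Bradley-Terry strengths with the standard MM (Zermelo) iteration is shown below; the win counts are illustrative, not the benchmark's actual pairwise judgments.

```python
# Sketch: Bradley-Terry strengths from pairwise novelty-preference counts,
# fitted with the classic MM (Zermelo) iteration. Counts are illustrative.

def bradley_terry(wins, iters=200):
    """wins[i][j] = number of comparisons in which item i beat item j.
    MM update: p_i <- W_i / sum_j (n_ij / (p_i + p_j)), then renormalise."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of item i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom if denom > 0 else p[i])
        s = sum(new_p)
        p = [x / s for x in new_p]  # keep strengths on the simplex
    return p

wins = [[0, 7, 8],   # item 0 beat item 1 seven times, item 2 eight times
        [3, 0, 6],
        [2, 4, 0]]
strengths = bradley_terry(wins)
ranking = sorted(range(3), key=lambda i: -strengths[i])
print(ranking)  # item 0, which wins most of its comparisons, ranks first
```

Comparing this strength-induced ranking against the checklist-accuracy ranking (e.g. via rank correlation) is how the alignment reported above can be measured.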
Turn Distribution
Analyze the distribution of interaction turns across different models and datasets.
MIMIC
10-K
GLOBEM
Stronger models tend to explore for more rounds without external prompting. Knowledge-intensive databases such as 10-K and MIMIC induce more interaction rounds than signal-based datasets such as GLOBEM, and the resulting distributions are also more uniform.
Exploration Pattern
Scatter plot showing Access Entropy vs Coverage by model. Opacity represents accuracy. Higher entropy = more uniform access; Higher coverage = more fields explored.
GPT-5.2
Claude-4.5-Sonnet
Gemini-3-Flash
GLM-4.6
Qwen3-Next-80B-A3B
DeepSeek-V3.2
Advanced LLMs tend to operate in a balanced exploration regime that combines adequate coverage with focused access. Such a regime is consistently observed across different scenarios.
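Under one plausible reading of the plot's two axes, access entropy is the Shannon entropy of the per-field access distribution (higher = more uniform access) and coverage is the fraction of fields touched at least once. The field names and access log below are illustrative, not taken from the benchmark.

```python
# Sketch of the two exploration statistics, under our assumed definitions:
# access entropy = Shannon entropy of field-access frequencies,
# coverage = fraction of available fields accessed at least once.
from collections import Counter
from math import log2

def access_entropy(accesses):
    """Entropy (bits) of the empirical distribution over accessed fields."""
    counts = Counter(accesses)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def coverage(accesses, all_fields):
    """Fraction of fields in the schema that were accessed at least once."""
    return len(set(accesses)) / len(all_fields)

# Hypothetical schema and access log for a MIMIC-like database.
fields = ["diagnoses", "labevents", "prescriptions", "admissions", "icustays"]
log = ["labevents", "labevents", "diagnoses", "admissions", "labevents"]

print(f"entropy  = {access_entropy(log):.2f} bits")
print(f"coverage = {coverage(log, fields):.2f}")
```

The "balanced regime" described above would then correspond to trajectories with both high coverage and moderately high, but not maximal, entropy: broad reach plus focused revisits.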
Error Analysis
Breakdown of error types encountered during agent interactions, grouped by main categories.
Our findings revealed that 58% of errors stemmed from insufficient exploration, in both breadth and depth. This imbalance in exploration often leads to suboptimal results, regardless of the model's overall capability. The remaining roughly 40% of errors were attributed to other factors. For more powerful models, over-reasoning was common, where the model made assumptions not fully supported by the data. In other cases, models misinterpreted the insights, such as mistaking a downward trend for an upward one. Less capable models, on the other hand, tended to make more fundamental errors, such as repeatedly debugging or struggling with missing data, which could disrupt the overall coherence of the analysis.
Self-Termination
Analyze the willingness of models to terminate their own analysis.
MIMIC
GLOBEM
10-K
Clear differences emerge across model generations. Qwen3 and Qwen3-Next exhibit a consistently increasing termination probability, indicating growing confidence that a complete report can be produced as more information is accumulated, whereas the Qwen2.5 series shows pronounced fluctuations and remains uncertain about whether exploration can be terminated at the current step. Moreover, Qwen3-Next maintains higher confidence with lower variance throughout, suggesting steadier conviction that exploration is progressing towards a more comprehensive and deeper report.