Super Research: Answering Highly Complex Questions with Large Language Models through Super Deep and Super Wide Research

1Zhejiang University, 2Ant Group

Performance Landscape

Comprehensive evaluation of research systems.

Evaluation of representative research systems across three architectural paradigms: Deep Research System, Native Search-Integrated Agent, and Search-Augmented Agent. Bold and underline denote the best and second-best performance, respectively; $\uparrow$ ($\downarrow$) indicates that higher (lower) values are preferred.

Abstract

While Large Language Models (LLMs) have demonstrated proficiency in Deep Research or Wide Search, their capacity to solve highly complex questions—those requiring long-horizon planning, massive evidence gathering, and synthesis across heterogeneous sources—remains largely unexplored. We introduce Super Research, a framework for complex autonomous research that integrates (i) structured decomposition into a research plan, (ii) super wide retrieval for diverse perspectives, and (iii) super deep investigation to resolve uncertainties through iterative queries. To evaluate this capability, we curated a benchmark of 300 expert-written questions across diverse domains, each requiring up to 100+ retrieval steps and 1,000+ web pages to reconcile conflicting evidence. Super Research produces verifiable reports with fine-grained citations and intermediate artifacts (e.g., outlines and tables) to ensure traceable reasoning. Furthermore, we present a graph-anchored auditing protocol that evaluates Super Research along five dimensions: Coverage, Logical Consistency, Report Utility, Objectivity, and Citation Health. While super-complex questions may be infrequent in standard applications, Super Research serves as a critical ceiling evaluation and stress test for LLM capabilities. A model's proficiency on Super Research acts as a powerful proxy for its general research competence; success here suggests the robustness necessary to navigate nearly any subordinate research task.


Methodology & Framework

Super Research employs a structured graph-based reasoning approach to handle high-stakes analytics.

Super Research Pipeline
Overview of the SuperResearch Benchmark framework. (a) Construction Pipeline: The process starts with the joint definition of 300+ "super hard" open-ended tasks, which undergo rigorous expert vetting. Autonomous agents then execute a long-horizon research process involving 100+ retrieval steps and the synthesis of 1,000+ web pages. The resulting "Gold Standard" consists of a structured Research Graph, canonical reports, and a question-answer (QA) exam. (b) Evaluation Flow: Research reports are audited via Research Graph Projection. The system maps claims from the generated report onto the ground-truth Research Graph to verify Nodes Recall (categorized into atomic facts and insights) and the integrity of Logical Connections, ensuring high-level conclusions are grounded in verifiable evidence. (c) Metrics Suite: A comprehensive five-dimensional suite quantifies model performance.
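The plan–retrieve–investigate loop described above can be sketched as a short control flow. This is a minimal illustration, not the benchmark's actual agent: `search` and `llm` are hypothetical stubs standing in for real search and LLM calls, and the step budget mirrors the 100+ retrieval steps mentioned in the pipeline.

```python
# Sketch of the Super Research loop: (i) decompose the question into a
# plan, (ii) wide retrieval per sub-question, (iii) deep investigation
# that spawns follow-up queries until uncertainties are resolved.
# All function names here are illustrative assumptions.

def super_research(question, search, llm, max_steps=100):
    plan = llm(f"Decompose into sub-questions: {question}")      # (i)
    evidence, open_items, steps = [], list(plan), 0
    while open_items and steps < max_steps:
        sub = open_items.pop(0)
        evidence.extend(search(sub))                             # (ii) wide
        follow_ups = llm(f"Unresolved uncertainties in: {sub}")  # (iii) deep
        open_items.extend(follow_ups)
        steps += 1
    return llm(f"Write cited report from {len(evidence)} pages")

# Toy stubs so the sketch runs end-to-end.
report = super_research(
    "Compare X and Y",
    search=lambda q: [f"page about {q}"],
    llm=lambda p: ["sub-1", "sub-2"] if p.startswith("Decompose") else
                  ([] if p.startswith("Unresolved") else "report"),
)
print(report)
```

In the real system each `llm` call is a full agent turn and the loop interleaves planning with retrieval rather than exhausting a fixed queue, but the shape of the control flow is the same.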
Graph Reasoning Structure

Research Graph Construction Pipeline

The process transforms unstructured sub-reports into a structured knowledge graph in three pivotal stages:

  • (a) Fact Extraction: Decomposing unstructured text into atomic fact nodes anchored to specific URLs.
  • (b) Insight Abstraction: A collaborative human-AI process that derives higher-order reasoning nodes from fact clusters to build a bottom-up logic topology.
  • (c) Global Synthesis: Merging disparate evidence clusters into inter-connected global conclusions that serve as the ground truth.
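The three stages above can be sketched as a small graph-building routine. The node schema and the clustering rule below are illustrative assumptions (the real insight-abstraction stage is a collaborative human-AI process, not a domain heuristic).

```python
from dataclasses import dataclass, field

# Minimal sketch of the three-stage Research Graph construction.
# Field names ("kind", "urls", "children") are hypothetical, not the
# benchmark's actual schema.

@dataclass
class Node:
    node_id: str
    text: str
    kind: str                                     # "fact" | "insight" | "conclusion"
    urls: list = field(default_factory=list)      # (a) facts anchor to URLs
    children: list = field(default_factory=list)  # (b)/(c) supporting nodes

def build_graph(claims):
    # (a) Fact Extraction: one atomic fact node per (claim, url) pair.
    facts = [Node(f"f{i}", claim, "fact", urls=[url])
             for i, (claim, url) in enumerate(claims)]
    # (b) Insight Abstraction: group facts and derive a higher-order node.
    # Naive stand-in: cluster by source domain.
    clusters = {}
    for f in facts:
        clusters.setdefault(f.urls[0].split("/")[2], []).append(f)
    insights = [Node(f"i{j}", f"insight over {len(c)} facts", "insight", children=c)
                for j, c in enumerate(clusters.values())]
    # (c) Global Synthesis: merge insight clusters under one conclusion node.
    return Node("g0", "global conclusion", "conclusion", children=insights)

graph = build_graph([
    ("claim A", "https://site1.example/a"),
    ("claim B", "https://site1.example/b"),
    ("claim C", "https://site2.example/c"),
])
print(len(graph.children))  # two insight clusters (site1, site2)
```

The bottom-up topology matters for evaluation: every conclusion is reachable from URL-anchored fact nodes, so high-level claims can be traced to verifiable evidence.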

Data Analysis & Distribution

Detailed analysis of the benchmark data distribution, core metrics, and cross-domain capabilities.

Data Distribution
Structural and Functional Characterization of the SuperResearch Benchmark. (a) Quantitative Distribution: A Rose Chart illustrating the distribution of 300 expert-written tasks across 10 specialized domains. (b) Core Benchmark Metrics & Scale: Quantitative statistics characterizing the "ceiling-level" challenge across four key dimensions: Complexity Metrics (measuring reasoning depth and retrieval breadth), Report Statistics (tracking content volume and structure), Graph Composition (quantifying hierarchical knowledge density), and Evaluation Questions. (c) Example Tasks: Representative inquiries exhibiting multi-objective trade-offs and conflicting evidence.
Domain Benchmarking

Cross-Domain Capabilities of LLMs

Performance comparison across 10 specialized domains highlights the varying strengths of different model tiers:

  • Comprehensive Evaluation: The radar chart illustrates the overall capability score for each model tier across diverse disciplines.
  • Domain Dominance: Advanced systems like Gemini Deep Research show particular dominance in business-analytical and technical-heavy sectors.
  • Weakness Identification: Pinpoints specific areas where native search or search-augmented baselines struggle with deep evidence synthesis compared to leading agent frameworks.

Benchmark Comparison

A comparative analysis of Super Research against existing agentic evaluation paradigms.

| Dimension | Metric | GAIA (Level 3) | WideSearch | DeepResearch Bench | Super Research (Ours) |
|---|---|---|---|---|---|
| Task Scope | Goal | Precise Solving | Aggregation | Report Gen. | Strategic Planning & Discovery |
| Task Scope | Paradigm | General Assistant | Info-Seeking | Research Agent | Super Deep Investigation |
| Complexity | Depth (Steps) | 10-40 | ~1.2 | 10-20 | ~100 (Max 140+) |
| Complexity | Width (Pages) | < 20 | ~44 | ~110 | ~600 (Max 1,200+) |
| Evaluation | Method | Exact Match | F1 Score | Ref-Based Judge | Graph-Anchored Auditing |

Comparison of Super Research with Existing Agentic Benchmarks. Super Research represents a "Ceiling-Level" challenge compared to existing paradigms. While Wide Search focuses on horizontal data acquisition and DeepResearch Bench prioritizes vertical synthesis, Super Research integrates Super Wide Retrieval and Super Deep Investigation. Moreover, to address the limitations of shallow fact-recall metrics, we propose a Graph-Anchored Auditing protocol that can comprehensively evaluate reasoning depth, bias, and uncertainty.
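The core of Graph-Anchored Auditing is Research Graph Projection: claims from a generated report are mapped onto ground-truth nodes, and recall is computed separately for atomic facts and insights, with a logical-connection check on top. The sketch below assumes claim-to-node matching has already happened (in practice an LLM alignment step) and only shows the scoring arithmetic; the graph encoding and the "grounded" rule are illustrative assumptions.

```python
# Sketch of Graph-Anchored Auditing scoring. Ground truth is a dict of
# node id -> (kind, supporting node ids); "recalled" is the set of node
# ids the report's claims matched.

def audit(graph, recalled):
    facts    = {n for n, (k, _) in graph.items() if k == "fact"}
    insights = {n for n, (k, _) in graph.items() if k == "insight"}
    fact_recall    = len(facts & recalled) / len(facts)
    insight_recall = len(insights & recalled) / len(insights)
    # Logical-connection check: a recalled insight counts as grounded
    # only if every supporting fact it depends on was also recalled.
    grounded = [n for n in insights & recalled if set(graph[n][1]) <= recalled]
    consistency = len(grounded) / max(len(insights & recalled), 1)
    return fact_recall, insight_recall, consistency

graph = {
    "f1": ("fact", []), "f2": ("fact", []), "f3": ("fact", []),
    "i1": ("insight", ["f1", "f2"]),
    "i2": ("insight", ["f3"]),
}
print(audit(graph, {"f1", "f2", "i1", "i2"}))
```

Here the report states insight i2 without the fact f3 that supports it, so it scores full insight recall but only 0.5 consistency, which is exactly the failure mode (ungrounded conclusions) that flat fact-recall metrics miss.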

Graph-Anchored Auditing Validation

Robust validation demonstrating the effectiveness and stability of our Graph-Anchored Auditing protocol.

Evaluator Responsiveness Comparison
| Metric | Evaluator | Norm STD ($\sigma_{norm} \downarrow$) |
|---|---|---|
| Coverage ($\mathcal{R}$) | Ours (Graph) | 4.42% |
| Coverage ($\mathcal{R}$) | LLM Judge | 3.88% |
| Consistency ($\mathcal{C}$) | Ours (Graph) | 0.83% |
| Consistency ($\mathcal{C}$) | LLM Judge | 4.69% |
| Utility ($\mathcal{U}$) | Ours (Q&A) | 2.05% |
| Utility ($\mathcal{U}$) | LLM Judge | 4.37% |
| Objectivity ($\mathcal{O}$) | Ours (Q&A) | 1.07% |
| Objectivity ($\mathcal{O}$) | LLM Judge | 7.48% |
| Overall | Ours (Method) | 1.67% |
| Overall | LLM Judge | 3.02% |

Evaluation Sensitivity Analysis. Compared to the LLM Judge, our Graph Metric shows superior responsiveness to quality fluctuations and significantly lower variance. $\sigma_{norm}$ indicates the fluctuation as a percentage of the total score range (lower is better).
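The $\sigma_{norm}$ statistic above is straightforward to reproduce: score the same report several times with the same evaluator and express the standard deviation as a percentage of the total score range. The scores below are made-up numbers for illustration, not benchmark results.

```python
import statistics

# sigma_norm: population std of repeated evaluator scores, as a
# percentage of the score range (here assumed to be 0-100).
def sigma_norm(scores, score_min=0.0, score_max=100.0):
    return 100.0 * statistics.pstdev(scores) / (score_max - score_min)

graph_metric = [71.2, 71.9, 71.5, 71.6]  # hypothetical repeated runs
llm_judge    = [68.0, 74.5, 70.2, 77.1]

print(round(sigma_norm(graph_metric), 2))
print(round(sigma_norm(llm_judge), 2))
```

A stable evaluator yields a small $\sigma_{norm}$, meaning observed score differences between systems reflect real quality gaps rather than judge noise.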

Super Research Interface

We provide a comprehensive open-source web interface that empowers users to interactively construct complex research queries, monitor real-time agent execution (e.g., token usage and domains explored), verify fine-grained text citations, and visually audit the underlying reasoning graphs.

Editor Workbench

Figure 1: Editor Workbench

Formulate complex research plans and initialize agents.

Inspector Mode

Figure 2: Inspector Mode

Real-time tracking of token usage, source domains, and cost.

Detail View

Figure 3: Detail View

Verify fine-grained citations and intermediate reasoning artifacts.

Graph View

Figure 4: Graph View

Visualize the structural topology of extracted facts and insights.

Interactive Visualization

Explore the Structured Reasoning process interactively.
Live Demo: the visualization below loads the research process from annotation.json.

BibTeX

@article{superresearch2026,
  title={Super Research: Answering Highly Complex Questions with Large Language Models through Super Deep and Super Wide Research},
  author={Yubo Dong and Nianhao You and Yuxuan Hou and Zixun Sun and Yue Zhang and Liang Zhang and Siyuan Zhao and Yi Lin and Hehe Fan},
  journal={ArXiv},
  year={2026}
}