Deconstructing the "Deep" vs. "Wide" Paradigms: An Architectural Analysis of Google's Gemini and Manus Research Agents
Executive Summary: The "Deep" vs. "Wide" Philosophies of Agentic Research
This report provides a comparative analysis of two frontier systems for artificial intelligence (AI) research: Google's Gemini Deep Research and Manus's Wide Research. The investigation addresses the request to understand and replicate these "top 2" processes. The analysis reveals that these systems are not merely competitors but represent two fundamentally divergent—and complementary—architectural philosophies that define the current landscape of agentic AI.
- The "Deep" Paradigm (Google Gemini): This approach is defined by a vertical, iterative, and coherent research process. It is executed by a single, monolithic agent 1 built upon a powerful foundation model (Gemini 2.5).2 This architecture is optimized for synthesis, accuracy, and discrepancy detection on a singular, complex topic. Its power is derived from its "Deep Think" reasoning engine.2
- The "Wide" Paradigm (Manus): This approach is defined by a horizontal, parallel, and distributed research process. It is executed by a multi-agent swarm 1, which functions as a sophisticated orchestration layer on top of existing third-party models (like Anthropic's Claude).6 This architecture is optimized for scale, speed, and consistency across a massive number of discrete items. Its power is derived from its "orchestration layer" and its solution to context pollution.4
Replicating these processes requires mastering two distinct sets of capabilities. Replicating the "Deep" paradigm necessitates advanced reasoning engineering and a rigorous, iterative workflow. Replicating the "Wide" paradigm requires building a robust multi-agent orchestration and task-decomposition system. This report will provide a technical blueprint for deconstructing and replicating both.
Part I: Deconstructing the "Deep" Paradigm: The Google Gemini Deep Research Process
The Gemini Deep Research process exemplifies a "vertical" architecture, functioning as a single, powerful agent optimized for depth, coherence, and synthesis on complex, singular topics.
A. The User-Facing Workflow: A "Measure Twice, Cut Once" Approach
The user-facing workflow for Gemini Deep Research is a deliberate, multi-step process designed to align the agent's computationally expensive task with the user's precise needs before execution.
- Initiation and Source Selection: The process begins when the user selects the "Deep Research" feature from the "Tools" menu.8 A critical, distinguishing feature is the immediate prompt for source selection. The user can authorize the agent to search not only the public web ("Search") but also to integrate personal data from "Gmail, Drive, and/or Chat".8
- The Critical "Tweak the Plan" Step: After receiving the prompt, Gemini does not immediately execute. Instead, it automatically constructs a "multi-step research plan".8 The user is then presented with this plan and given the explicit option to "tweak the plan to fit your needs".11 This "tweak" step is a deliberate failsafe. Google's own technical documentation identifies "user wait time" and "long-running inference" 14 as significant engineering challenges. A typical Deep Research query takes "a few minutes" 13 or requires the user to "plan to do something else".10 If a user waits that long only to receive a report based on a misinterpreted prompt, trust is eroded. This pre-execution alignment step functions as a "measure twice, cut once" safeguard, ensuring the high compute and time cost of the run is correctly aligned before it begins.
- Autonomous Execution and Synthesis: Once the plan is approved, the AI agent autonomously searches the selected sources 8 and "pulls in reliable and relevant data".11 The system is designed to be comprehensive, reviewing "over a hundred sources" to provide a full view of the topic.13 It then synthesizes these findings into a clear, detailed, multi-page report.11
- Personalized Context Integration: The integration of personal data 8 transforms the agent from a general researcher into a personalized research assistant. A review of the feature 10 provides a cogent example: the agent was tasked with analyzing a private chat discussion. In response, it not only analyzed the chat log itself but also identified articles shared within that chat, autonomously navigated to and analyzed the websites referenced in those documents, and then synthesized insights from all sources (chat, shared articles, and referenced websites) into its final report. This implies a sophisticated, on-the-fly knowledge graph-building process. The agent is not merely performing a federated search of (Web) + (Drive). It is (1) Analyzing personal context, (2) Extracting entities and links from that context, (3) Using those entities to form new, grounded search queries for the web, and (4) Synthesizing the results from all sources.
- Delivery: The final synthesized report is delivered within the Gemini interface and is exportable to Google Docs 8 or can be converted into AI-generated "Audio Overviews" or podcasts.8
B. The Architectural Core: The Gemini 2.5 "Deep Think" Engine
The power of the Deep Research feature is derived from the advanced reasoning capabilities of the underlying Gemini 2.5 model, specifically a mode called "Deep Think."
- "Deep Think" as an Enhanced Reasoning Mode: "Deep Think" is not the default behavior of Gemini 2.5. It is a specific, enhanced reasoning mode 3 that users must select for their "most complex tasks".17 This mode is specifically designed for "highly-complex use cases like math and coding" 18 and has been benchmarked on difficult tasks such as Olympiad-level math (USAMO 2025) and competitive coding (LiveCodeBench).2
- The "Parallel Thinking" Internal Monologue: Technical descriptions state that "Deep Think" uses "cutting edge research techniques in parallel thinking and reinforcement learning".3 This "parallel thinking" 20 is not the same as the parallel execution of multiple agents (as seen in Manus); Gemini Deep Research is a single-agent system.1 "Parallel thinking" must therefore be interpreted as an internal reasoning technique. This is confirmed by technical documents, which state that "Deep Think" is a process where the model can "creatively produce multiple hypotheses" 2 and "consider multiple hypotheses" 18, then "carefully critique them" 2 before arriving at the final answer. This describes an implementation of a "Tree of Thoughts" (ToT) or similar search-based reasoning approach.21 Where a standard model might follow a single chain of thought, the "Deep Think" engine appears to explore multiple reasoning paths internally, evaluate their viability, prune failed or suboptimal paths, and then pursue only the most promising one to generate a response. This allows it to solve complex problems requiring creativity and strategic planning.3
- The Cost of "Thinking": This advanced reasoning is computationally expensive. The user must opt-in to the mode 17 and wait "generally in a few minutes" 17 for the response. This trade-off—deeper reasoning for more time and compute—is precisely what necessitates the "tweak the plan" step 11 in the user-facing workflow.
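The generate-critique-prune cycle inferred above can be sketched in miniature. This is an illustrative assumption about the mechanism, not Google's implementation: `generate_hypotheses` and `critique` are hypothetical stand-ins for model calls, and the scoring is a placeholder.

```python
# Sketch of a "parallel thinking" loop: sample several candidate reasoning
# paths, critique each, prune the weak ones, and expand only the survivors.

def generate_hypotheses(question, n=3):
    # Stand-in for n independent model samples at high temperature.
    return [f"hypothesis {i} for: {question}" for i in range(n)]

def critique(hypothesis):
    # Stand-in for a model self-critique that returns a viability score.
    return len(hypothesis) % 5  # placeholder scoring

def deep_think(question, rounds=2, beam_width=1):
    candidates = generate_hypotheses(question)
    for _ in range(rounds):
        scored = sorted(candidates, key=critique, reverse=True)
        candidates = scored[:beam_width]                      # prune weak paths
        candidates = [c + " | refined" for c in candidates]   # expand survivors
    return candidates[0]

best = deep_think("compare 2024 and 2025 market data")
```

The key design point is that pruning happens internally, before any answer is emitted, which is what distinguishes this from running multiple visible agents.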
C. The Agentic Loop: Multi-Step Planning and Iterative Reasoning
The "Gemini Deep Research" feature is not just the Gemini 2.5 model; it is an agentic system built around the model. This system follows a formal, multi-stage loop to manage the research task.
- Stage 1: Planning (Problem Decomposition): The system first "formulates a detailed research plan, breaking the problem into a series of smaller, manageable sub-tasks".14 This plan is "personalized" 14 and, as noted, user-editable.11
- Stage 2: Searching (Intelligent Execution): The agent "autonomously searches" 14 and "deeply browses" the selected web and personal sources. The execution of this plan is not naively sequential. The system "intelligently determines which sub-tasks can be tackled simultaneously and which need to be done sequentially".14 This description implies the agent functions as a Directed Acyclic Graph (DAG) executor. It builds a dependency graph of the sub-tasks from its plan. Tasks with no dependencies (e.g., "Find 2024 market data," "Find 2025 market data") can be run in parallel. A task that depends on them (e.g., "Compare 2024 and 2025 data") must run sequentially after they complete. This is a far more efficient optimization than a simple linear checklist.
- Stage 3: Reasoning (Iterative Grounding): This is the core of the "deep" process. The agent "reasons over information gathered iteratively and thinks before making its next move".14 This process is made transparent to the user via a "thought-panel that lets you track its reasoning in real time".15
- Stage 4: Reporting (Synthesis): Once the model's iterative process determines that "enough information has been gathered," it "synthesizes the findings into a comprehensive report".14
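The scheduling behavior described in Stage 2 can be sketched as a small wave-based DAG executor. The task names and dependency graph below are hypothetical, taken from the market-data illustration above; `run` stands in for an actual research sub-task.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of DAG-style scheduling: tasks with no unmet dependencies run in
# parallel as a "wave"; dependents wait until all their inputs are done.
tasks = {
    "market_2024": [],                              # no dependencies
    "market_2025": [],                              # no dependencies
    "compare":     ["market_2024", "market_2025"],  # needs both above
}

def run(name):
    return f"result({name})"  # stand-in for performing a research sub-task

def execute_dag(tasks):
    done = {}
    while len(done) < len(tasks):
        # every not-yet-done task whose dependencies are all satisfied
        ready = [t for t, deps in tasks.items()
                 if t not in done and all(d in done for d in deps)]
        with ThreadPoolExecutor() as pool:          # run the wave in parallel
            for t, res in zip(ready, pool.map(run, ready)):
                done[t] = res
    return done

results = execute_dag(tasks)
```

Here the two market lookups execute in the first wave, and "compare" only runs in the second, mirroring the simultaneous-versus-sequential determination described above.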
D. Core Technical Challenges (Identified by Google)
Google's own documentation 14 outlines the three significant engineering hurdles overcome in building this system, which in turn reveal its architecture:
- Multi-step planning: The agent must "ground itself on all information gathered so far, then identify missing information and discrepancies it wants to explore." This confirms the iterative, stateful nature of the process.
- Long-running inference: The task involves "many model calls over several minutes."
- Context management: The agent must "process hundreds of pages of content" during a single research session.
These three challenges are facets of a single, larger problem: creating a stateful, long-horizon agent. "Long-running inference" 14 implies that the agent's state (its plan, its memory of findings) must be preserved between the "many model calls".14 "Context management" 14 is the problem of fitting this massive, growing state into the model's view. "Multi-step planning" 14 is the logic the model uses to update its state. This confirms that Gemini Deep Research is a complete agentic system built around the core LLM, consisting of a task planner, a state/memory manager, and an execution loop.
Part II: Deconstructing the "Wide" Paradigm: The Manus Multi-Agent Process
Manus represents a fundamentally different "horizontal" architecture. It is a sophisticated orchestration layer that leverages third-party models to execute research tasks at a massive parallel scale.
A. The Architectural Core: An Orchestration Layer, Not a Model
- Manus as an Integrator: Unlike Google, which builds its own foundation model, Manus is an "integration of existing powerful models".6 It is not a new LLM.6
- The Models Behind the Curtain: The system is "built on Anthropic's Claude 3.7 Sonnet" 7 or "Claude 3.5 Sonnet" 5, and reportedly "customized versions of Alibaba's Qwen".5 Some critics have noted this, with one claiming Manus is "just a wrapper around Browser Use with Claude 3.7 Sonnet".22
- The "Secret Sauce" is Orchestration: The true innovation of Manus is not model capability but agentic architecture. As one analysis states, "Butterfly Effect has created something greater than the sum of its parts — not by developing a fundamentally new AI model, but by orchestrating existing technologies in a novel way".6 Another calls it "a testament to the power of combining existing AI models with innovative tooling".7 This is a critical takeaway: one can build a frontier-level research agent without training a new foundation model. The value is in the orchestration system built around existing, commercially available models.
B. The "Wide Research" Workflow: True Parallel Processing
The "Wide Research" feature from Manus 4 is explicitly contrasted with "Deep Research," which it describes as "sequential, in-depth exploration".5 "Wide Research" is designed for parallel scale.
- The Parallel Swarm: It works by deploying "over 100 parallel AI agents" 5 to "break complex tasks into parallel sub-tasks and process them all at once".23
- The Workflow: The process is a classic example of multi-agent orchestration 4:
- Orchestrator (Main Agent): The "Main agent" 4 breaks the user's request (e.g., "Analyzed 100 sneaker models with detailed comparisons" 4) into hundreds of independent sub-tasks.
- Distribution (Main Agent): The Main Agent distributes these tasks to a swarm of "Sub-agents".4
- Parallel Execution (Sub-Agents): "Each sub-agent runs independently with full capabilities: its own VM, tools, and internet access".4
- Synthesis (Main Agent): The "Main agent" 4 gathers all the results and "synthesizes the final report".4
- Solving Context Pollution: This "Wide Research" architecture 4 is a structural solution to the "context window overload" problem that plagues monolithic agents. One description 4 explicitly details the failure mode of traditional AI: "Traditional AI has a fixed 'memory.'... By #50, you're getting generic filler. Less room = less quality." Manus's solution is not a bigger context window, but a smarter architecture. By giving "Fresh context for every item" 4, the orchestrator (Main Agent) never sends the full, polluted context to the sub-agents. It sends only the specific sub-task (e.g., "Analyze Sneaker Model X"). This means Sub-Agent #1 (for Sneaker #1) and Sub-Agent #100 (for Sneaker #100) have identical, clean, un-polluted context windows. This guarantees "consistent, thorough analysis at any scale".4
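As a rough illustration of this fresh-context pattern (not Manus's actual code), the orchestrator below hands each sub-agent only its own templated sub-task, so item #100 starts from exactly the same clean context as item #1; `sub_agent` is a hypothetical stand-in for a fresh model session.

```python
# Sketch of "fresh context for every item": each sub-agent call begins with an
# empty history containing only its one templated sub-task.

def sub_agent(sub_task):
    # Stand-in for a brand-new model session; only `sub_task` is in context.
    return {"task": sub_task, "analysis": f"analysis of {sub_task}"}

def wide_research(items, template):
    reports = []
    for item in items:
        prompt = template.format(item=item)   # identical-shape, clean context
        reports.append(sub_agent(prompt))     # no shared history between calls
    # The Main Agent synthesizes the independent mini-reports afterwards.
    return {"items": len(reports), "reports": reports}

result = wide_research(["Sneaker A", "Sneaker B"],
                       "Analyze {item}: price, features, review score.")
```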
C. The Agentic Engine: The "29 Tools" and the Executor/Planner Loop
The power of each Manus sub-agent comes from its access to a wide array of external tools and a robust execution loop.
- The "Arsenal" of Tools: The agent's power is derived from its "arsenal of 29 specialized tools".6 These are not just model capabilities but external software the agent can call, including "web browsing, API interaction, and script execution".6
- The Agent Loop as an Operating System: Technical details 7 provide a complete blueprint for the Manus agent. It functions as a mini-Operating System, with the Claude model acting as the "kernel" that decides which action to take next. This system is far more than a simple script; it is a robust, stateful "agent loop" that consists of:
- A Two-Agent Core: A "planner agent" 6 for high-level strategy and an "executor agent" 6 for user chat and tool execution.
- A Formal Loop: (1) Analyze Events, (2) Select Tool, (3) Wait for Execution, (4) Iterate, (5) Submit Results, (6) Standby.7
- Persistent Memory: A "To-Do Rules" system that requires the agent to create and update a todo.md file.7 This file acts as a persistent, external "checklist" or "memory," ensuring the agent can track long-horizon tasks.
- Modules: The agent is equipped with a "Planner Module," "Knowledge Module," and "Datasource Module".7
- Strict "Rules": It operates under explicit "File Rules," "Browser Rules," "Coding Rules," and "Shell Rules" 7 that govern how the agent uses its 29 tools.
- A Sandbox: A "Linux sandbox environment with internet connection" 7 is provided for safe "script execution".6
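The formal loop and the todo.md memory described above can be combined into a minimal sketch. This is a reconstruction under stated assumptions, not Manus's source: the checkbox format and the file handling are illustrative choices.

```python
import os
import tempfile

# Sketch of the agent loop driven by an external todo.md checklist: read the
# file, pick the next unchecked step, act, persist the updated state, iterate.

def run_agent_loop(todo_path):
    while True:
        with open(todo_path) as f:
            lines = f.read().splitlines()
        pending = [l for l in lines if l.startswith("- [ ]")]
        if not pending:
            return lines                       # Submit Results / Standby
        step = pending[0]                      # Analyze Events + Select Tool
        # ... tool execution for `step` would happen here ...
        lines[lines.index(step)] = step.replace("- [ ]", "- [x]")
        with open(todo_path, "w") as f:        # persist state before iterating
            f.write("\n".join(lines))

path = os.path.join(tempfile.mkdtemp(), "todo.md")
with open(path, "w") as f:
    f.write("- [ ] search the web\n- [ ] summarize findings")
final = run_agent_loop(path)
```

Because state lives in the file rather than the context window, the loop survives across many model calls, which is the point of the external checklist.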
D. The Underlying Reasoning: Anthropic's "Hybrid Reasoning" and Constitutional AI
The "brain" of the Manus agent is a third-party model, primarily from Anthropic.6 Therefore, its reasoning capability is defined by Anthropic's own research.
- Constitutional AI (The "Why"): The underlying Claude models are governed by "Constitutional AI" (CAI).24 This is Anthropic's core alignment technique. It involves a supervised phase where the model "self-critiques" and revises responses based on a "constitution" (a list of principles) 25, followed by a reinforcement learning (RL) phase (RLAIF) where an AI preference model (trained on the constitution) provides the reward signal.25 This is how the model learns to be "harmless but non-evasive".26
- "Hybrid Reasoning" (The "How"): The newer Claude models (Opus 4, Sonnet 4) that Manus leverages 27 feature "hybrid reasoning".28 This "hybrid reasoning" is a model-level feature that achieves what Google's "Deep Think" achieves at a system-level. The Claude model can itself decide when a query is simple and needs a "quick, 'instinctive' response," or when it is complex and needs to "engage in longer, chain-of-thought reasoning".28 This is a more elegant and efficient solution, as the model can dynamically allocate compute ("thinking") based on the task's complexity, rather than requiring the user to pre-select a "deep think" mode 17 as Gemini does. This "extended thinking with tool use" 27 is what makes the Claude models so effective for the "sustained performance on complex, long-running tasks" 27 that the Manus agent 6 requires.
Part III: Comparative Analysis: Deep vs. Wide, Model vs. Orchestrator
This direct comparison highlights the fundamental architectural and philosophical trade-offs between the two systems.
A. Philosophical Divergence: Vertical (Depth) vs. Horizontal (Scale)
The two systems are optimized for entirely different tasks.
- Gemini "Deep" (Vertical): This system is optimized for coherence. The single-agent 1 iteratively builds a deep, unified understanding of one topic. By processing all gathered information within its massive context 12, it can perform the unique and critical task of "identify[ing] missing information and discrepancies" 14 between disparate sources. Its goal is a single, deeply synthesized, and coherent report.
- Manus "Wide" (Horizontal): This system is optimized for scale. The multi-agent swarm 4 is designed to "analyze 100 sneaker models" 4 without quality degradation. Its goal is to produce 100 consistent mini-reports that are then assembled by a main agent.4
B. The Great Architectural Trade-Off: Coherence vs. Scale
The two architectures reveal a fundamental trade-off in agentic research.
- Gemini's Risk (Context Overload): Gemini's "Deep" approach 14 is powerful but risks failure at extreme scale. A single agent, even with a 2-million-token context window 12, can eventually be overwhelmed, leading to the "filler" problem 4 that Manus was built to solve.
- Manus's Risk (Synthesis Bottleneck): Manus's "Wide" approach 4 solves the scale problem but introduces a new one: the final synthesis. The "Main Agent" 4 must synthesize 100 independent reports without the sub-agents having "talked to each other".4 This final synthesis step is a massive coherence bottleneck. This architectural trade-off directly explains user complaints that Manus is "PAINFULLY. Slow".32 The "contemplate your existence while watching the cursor blink" wait 32 is not the parallel research phase; it is likely the Main Agent serially synthesizing the 100 parallel reports it has just received.
C. Table: Comparative Architectural Analysis: Gemini Deep Research vs. Manus Wide Research
This table provides a distilled comparison of the two systems, serving as a guide for deciding which process to replicate.
| Metric | Google Gemini Deep Research ("Deep") | Manus Wide Research ("Wide") |
| --- | --- | --- |
| Core Philosophy | Vertical (Depth): Monolithic agent for deep, coherent synthesis on one complex topic.1 | Horizontal (Scale): Multi-agent swarm for consistent, parallel analysis of many items.1 |
| Agent Architecture | Single, Monolithic Agent.1 A "Deep Think" reasoning engine 3 within an agentic scaffold.14 | Multi-Agent Orchestration Layer.1 A "Main Agent" (orchestrator) managing 100+ "Sub-Agents" (workers).4 |
| Base Model(s) | Google Gemini 2.5 Pro.2 A single, highly capable, proprietary model. | Third-party models: Anthropic's Claude 3.5/3.7 Sonnet, Alibaba's Qwen.5 Not a new base model. |
| Reasoning Mechanism | "Deep Think" 2: A high-compute, opt-in mode 17 for internal parallel hypothesis generation and critique ("Tree of Thoughts"). | "Hybrid Reasoning" 28: A model-native ability (from Claude) to dynamically choose "fast" or "extended" thinking. |
| Key "Secret Sauce" | The "Deep Think" reasoning engine 2 and its massive 1M-2M token context window.12 | The Orchestration Layer 6, the "29 Tools" 7, and the "Fresh Context" parallel swarm.4 |
| Solves For... | Coherence & Discrepancy: Can "identify missing information and discrepancies" 14 between sources. | Scale & Consistency: Solves "context pollution," ensuring item #100 gets the same quality analysis as item #1.4 |
| Known Limitation | Context Overload: At massive scale, a single agent may fail or "hallucinate".4 | Synthesis Bottleneck: Final synthesis by the "Main Agent" is "PAINFULLY. Slow".32 Sub-agents cannot cross-reference.4 |
Part IV: A Framework for Replication: Translating AI Processes to Human-Centric Workflows
This section provides an actionable guide to "replicate the process", both for human researchers and for developers building custom AI agents.
A. Replicating the "Deep Research" (Gemini) Model: The Iterative Synthesis Loop
This workflow is a direct, human-replicable model of the scientific method, as formalized by the Gemini agent.14 It is ideal for a single researcher or a small, tightly-coupled team tackling a complex problem.
- Phase 1: Explicit Planning: Before beginning research, explicitly write down a multi-step research plan.14 This plan should decompose the complex question into manageable sub-tasks.12 This mirrors Gemini's "multi-point research plan" 14 and serves as the human-led version of the "tweak the plan" 11 step.
- Phase 2: Iterative Research & Reasoning: Execute the first step of the plan. As you gather information, maintain a "thought" log, mimicking Gemini's "thought-panel".15 For example: "Finding 1: Source A claims X. My 'thought' 14 is that this seems to contradict my prior knowledge. My 'next move' 14 is to find a source that verifies or refutes this." Then, ground the next research step "on all information gathered so far".14 "Finding 2: Source B directly contradicts Source A, claiming Y." Your reasoning log now reflects a new state: "I have identified a discrepancy".14 The plan must now be updated with a new sub-task: "Resolve the X vs. Y discrepancy."
- Phase 3: Synthesis: Continue this iterative loop of planning, researching, grounding, and identifying new discrepancies 14 until all plan steps are complete and all identified discrepancies are resolved. Only then should the final synthesis and writing of the report begin.14
The power of the Gemini "Deep Research" process 14 is its formalization of expert human research. Replicating it means adopting this explicit, stateful, and iterative rigor 33 and refusing to skip the critical reasoning and discrepancy-resolution steps.
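The plan, research, ground, re-plan cycle described in Phases 1 through 3 can be expressed as a small program: research a step, check new claims against prior findings, and append a resolution sub-task whenever a discrepancy surfaces. The sources and the discrepancy check below are hard-coded stand-ins for illustration, not a real retrieval system.

```python
# Sketch of the iterative deep-research loop: the plan is mutable state that
# grows when grounding against prior findings exposes a discrepancy.

def research(step):
    # Stand-in: returns (topic, claim) pairs a real search would surface.
    fake_sources = {
        "market size": [("2024 size", "$10B"), ("2024 size", "$12B")],
        "resolve: 2024 size": [("2024 size", "$12B")],
    }
    return fake_sources.get(step, [])

def deep_research(plan):
    findings, log = {}, []
    while plan:
        step = plan.pop(0)
        for topic, claim in research(step):
            if topic in findings and findings[topic] != claim:
                log.append(f"discrepancy on {topic}")
                plan.append(f"resolve: {topic}")   # update the plan in place
            findings[topic] = claim
        log.append(f"completed: {step}")
    return findings, log

findings, log = deep_research(["market size"])
```

The loop only terminates once the plan, including every appended resolution step, is exhausted, which mirrors the rule that synthesis begins only after all discrepancies are resolved.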
B. Replicating the "Wide Research" (Manus) Model: The Parallel Orchestration Workflow
This workflow applies "parallel thinking" 21 and is ideal for a research lead (the Orchestrator) managing a large team (the Sub-Agents) or for an individual "power user" batch-processing tasks with AI.
- Phase 1: Orchestration (Human as "Main Agent"): Define the large-scale task (e.g., "Analyze 100 sneaker models" 4). Deconstruct this into 100 identical, independent, and templated sub-tasks. An example sub-task template: "For [sneaker model], find: (1) Price, (2) Key Features, (3) Average Review Score. Return as a JSON object."
- Phase 2: Distribution (Human as "Main Agent"): Assign each sub-task to a "Sub-Agent".4 A Sub-Agent can be a human team member or, critically, an AI. To replicate the Manus architecture, this would involve opening a new, fresh chat (i.e., "fresh context" 4) for each of the 100 sneakers.
- Phase 3: Parallel Execution (Sub-Agents): Each Sub-Agent (human or AI chat) executes only their one templated task. They are forbidden from "talking to each other".4 This is the key to preventing "context pollution" 4 and groupthink. Each agent returns only their small, structured JSON output.
- Phase 4: Synthesis (Human as "Main Agent"): The human Orchestrator collects all 100 JSON objects and performs the final, high-level synthesis.4 By placing the human at the synthesis step, this workflow avoids the "painfully slow" AI synthesis bottleneck 32 that Manus faces.
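The four phases above can be sketched as a minimal pipeline, with `ask_fresh_ai_session` as a hypothetical stand-in for opening a brand-new chat per item.

```python
import json
from concurrent.futures import ThreadPoolExecutor

# Sketch of the orchestration workflow: template the sub-tasks (Phase 1),
# fan them out to fresh sessions in parallel (Phases 2-3), and collect the
# structured outputs for final human synthesis (Phase 4).

TEMPLATE = ("For {item}, find: (1) Price, (2) Key Features, "
            "(3) Average Review Score. Return as a JSON object.")

def ask_fresh_ai_session(prompt):
    # Stand-in: a real implementation would call a model with no prior history.
    item = prompt.split(",")[0].removeprefix("For ")
    return json.dumps({"item": item, "price": None,
                       "features": [], "review_score": None})

def orchestrate(items):
    prompts = [TEMPLATE.format(item=i) for i in items]        # Phase 1
    with ThreadPoolExecutor() as pool:                        # Phases 2-3
        raw = list(pool.map(ask_fresh_ai_session, prompts))
    return [json.loads(r) for r in raw]                       # Phase 4 input

rows = orchestrate(["Sneaker A", "Sneaker B"])
```

Returning structured JSON rather than prose is what makes the human synthesis step tractable at 100 items.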
C. The Replicable Toolkit: Frameworks and Methodologies
For those seeking to build these systems, academic papers on "multi-agent research assistants" 12 point to specific, replicable frameworks.
These systems are described as "graphs of interconnected LLM-powered 'experts' (e.g. a data collector, an analyst, a writer) with built-in memory and quality-control loops".12 The framework explicitly named in this context is LangGraph.12
- To Replicate Gemini (Deep): Use a framework like LangGraph to build a sequential graph with a persistent state object (the "memory") and loops. This architecture allows the agent to "ground itself on all information gathered so far" 14 and iteratively update its plan, perfectly mimicking the Gemini process.
- To Replicate Manus (Wide): Use the same framework to build a "main graph" (the Orchestrator) that dynamically spawns 100 "sub-graphs" (the Sub-Agents). Each sub-graph runs in parallel, and the main graph then joins the results at a final node for synthesis.4
- To Replicate Manus's Agent (Executor): The core Manus agent 7 can be replicated by implementing its "agent loop" (Analyze, Select Tool, Wait, Iterate) and connecting it to a "tool belt" (the 29 tools) using a model's (like Claude's) native "tool use" capability.27
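A framework-agnostic sketch of that executor loop (Analyze, Select Tool, Wait, Iterate, Submit): a model stand-in picks a tool and its arguments, and the loop dispatches from a small tool belt and feeds the observation back. The tool names and the `fake_model` policy are illustrative assumptions, not the actual 29 tools.

```python
# Sketch of a tool-use executor loop: the model chooses the next action, the
# loop executes it, and the observation is appended to the running history.

TOOLS = {
    "web_search": lambda q: f"results for {q}",
    "finish":     lambda answer: answer,
}

def fake_model(history):
    # Stand-in for a model with native tool use: search once, then finish.
    if not any(h["tool"] == "web_search" for h in history):
        return {"tool": "web_search", "args": "agentic research"}
    return {"tool": "finish", "args": "report based on " + history[-1]["obs"]}

def executor_loop():
    history = []
    while True:
        call = fake_model(history)                   # Analyze + Select Tool
        obs = TOOLS[call["tool"]](call["args"])      # Wait for Execution
        history.append({"tool": call["tool"], "obs": obs})
        if call["tool"] == "finish":                 # Submit Results
            return obs

answer = executor_loop()
```

With a real model, `fake_model` would be a single API call using the provider's native tool-use capability, and the tool belt would grow to the full arsenal.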
Part V: Strategic Synthesis and Future Outlook: The "Wide-then-Deep" Hybrid
The "Deep" vs. "Wide" 1 debate presents a false dichotomy. The most powerful research system would be a hybrid that combines the strengths of both architectures. This "Wide-then-Deep" process is the ultimate replicable workflow.
- Phase 1 (Wide): Deploy a "Wide Research" multi-agent swarm (like Manus) 4 to parallel-process a massive dataset. Task: "Scan 1,000 research papers and extract their key methodologies and findings into a structured JSON format." This leverages the "Wide" architecture for what it does best: scalable, consistent data processing at speed.
- Phase 2 (Deep): Feed the synthesized outputs from all 1,000 sub-agents (e.g., a single large document containing all 1,000 JSON objects) into the single, massive context window (e.g., Gemini 2.5 Pro's 2M-token context) 12 of a "Deep Research" monolithic agent.2
- Phase 3 (Synthesis): Task this "Deep" agent with the one thing the "Wide" agent cannot do: "Analyze the 1,000 methodologies and identify discrepancies, gaps in the literature, and novel connections 14 between them."
This "Wide-then-Deep" architecture leverages the "Wide" swarm for scalable data processing and the "Deep" agent for coherent, deep synthesis. This hybrid model, not one or the other, represents the true frontier of agentic research and is the most powerful and comprehensive replicable process.
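Under the assumption of two stand-in model calls, the three phases above reduce to a map-then-reduce pipeline: a wide map pass produces structured records, which are concatenated into one long context for a single deep synthesis pass.

```python
import json

# Sketch of the "Wide-then-Deep" hybrid. Both "model calls" are stand-ins:
# wide_extract plays a fresh sub-agent, deep_synthesize plays one big-context
# monolithic agent.

def wide_extract(paper):
    # Phase 1: one fresh sub-agent per paper, returning structured JSON.
    return {"paper": paper, "methodology": f"methodology of {paper}"}

def deep_synthesize(records):
    # Phases 2-3: a single call over every record at once, which is where
    # cross-item discrepancy detection becomes possible.
    corpus = json.dumps(records)   # one document fed to a long-context model
    return f"cross-paper analysis over {len(records)} records ({len(corpus)} chars)"

papers = [f"paper-{i}" for i in range(5)]
report = deep_synthesize([wide_extract(p) for p in papers])
```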
Works cited
- In-Depth Analysis of the Latest Deep Research Technology: Cutting-Edge Architecture, Core… | by Gradient Explosion | Sep, 2025 | Medium, accessed November 7, 2025, https://medium.com/@modelscope2022/in-depth-analysis-of-the-latest-deep-research-technology-cutting-edge-architecture-core-b052796fe8fb
- Gemini 2.5: Pushing the Frontier with Advanced Reasoning, Multimodality, Long Context, and Next Generation Agentic Capabilities., accessed November 7, 2025, https://storage.googleapis.com/deepmind-media/gemini/gemini_v2_5_report.pdf
- Gemini - Google DeepMind, accessed November 7, 2025, https://deepmind.google/models/gemini/
- Wide Research: Beyond Context Window - Manus, accessed November 7, 2025, https://manus.im/features/wide-research
- How Manus Just Reinvented the Way You Should Do Research | by Ashley | Towards AGI, accessed November 7, 2025, https://medium.com/towards-agi/how-manus-just-reinvented-the-way-you-should-do-research-11f591274180
- Manus AI: The World's First Truly Autonomous AI Agent? | by Cogni Down Under | Medium, accessed November 7, 2025, https://medium.com/@cognidownunder/manus-ai-the-worlds-first-truly-autonomous-ai-agent-16ebb065bb0a
- MANUS AI: Redefining AI Agents with Existing Models and Brilliant ..., accessed November 7, 2025, https://rediminds.com/future-edge/manus-ai-redefining-ai-agents-with-existing-models-and-brilliant-tooling/
- Google updates Gemini's Deep Research mode to scan Gmail ..., accessed November 7, 2025, https://timesofindia.indiatimes.com/technology/tech-news/google-updates-geminis-deep-research-mode-to-scan-gmail-drive-and-chat-for-ai-research-reports/articleshow/125130778.cms
- Use Deep Research in Gemini Apps - Android - Google Help, accessed November 7, 2025, https://support.google.com/gemini/answer/15719111?hl=en
- I let Gemini Deep Research dig through my Gmail and Drive - here's ..., accessed November 7, 2025, https://www.zdnet.com/article/i-let-gemini-deep-research-dig-through-my-gmail-and-drive-heres-what-it-uncovered/
- The smarter way to research with Google Gemini Deep Research, accessed November 7, 2025, https://www.revolgy.com/insights/blog/smarter-way-to-research-with-google-gemini-deep-research#:~:text=You%20can%20just%20type%20in,report%20%E2%80%94%20all%20in%20one%20place.
- AI Agents for Economic Research: August 2025 Update to “Generative AI for Economic Research: Use Cases and Implications for Economists,” - American Economic Association, accessed November 7, 2025, https://www.aeaweb.org/articles/materials/23826
- The smarter way to research with Google Gemini Deep Research, accessed November 7, 2025, https://www.revolgy.com/insights/blog/smarter-way-to-research-with-google-gemini-deep-research
- Gemini Deep Research — your personal research assistant, accessed November 7, 2025, https://gemini.google/overview/deep-research/
- How Does Gemini's Deep Research Work? | Undetectable AI - StealthGPT, accessed November 7, 2025, https://www.stealthgpt.ai/blog/how-does-gemini-s-deep-research-work
- Gemini 2.5: Our most intelligent models are getting even better, accessed November 7, 2025, https://blog.google/technology/google-deepmind/google-gemini-updates-io-2025/
- Gemini Apps' release updates & improvements, accessed November 7, 2025, https://gemini.google/release-notes/
- Expanding Gemini 2.5 Flash and Pro capabilities | Google Cloud Blog, accessed November 7, 2025, https://cloud.google.com/blog/products/ai-machine-learning/expanding-gemini-2-5-flash-and-pro-capabilities
- Claude Opus 4.1 vs Gemini 2.5 Deep Think: The Ultimate 2025 AI Model Comparison, accessed November 7, 2025, https://binaryverseai.com/claude-opus-4-1-vs-gemini-2-5-deep-think/
- So when do you think Pro users will get access to Deep Think? : r/Bard - Reddit, accessed November 7, 2025, https://www.reddit.com/r/Bard/comments/1m67fpa/so_when_do_you_think_pro_users_will_get_access_to/
- Applications of Large Language Model Reasoning in Feature Generation - arXiv, accessed November 7, 2025, https://arxiv.org/html/2503.11989v1
- Gemini 2.5 Pro GENERAL AI AGENT: Stop Using MANUS, USE THIS FULLY FREE Alternative Instead! - YouTube, accessed November 7, 2025, https://www.youtube.com/watch?v=oJ_6N_xf6Kc
- Hands On AI - Manus, accessed November 7, 2025, https://manus.im/home
- Constitutional AI: Harmlessness from AI Feedback - Anthropic, accessed November 7, 2025, https://www-cdn.anthropic.com/7512771452629584566b6303311496c262da1006/Anthropic_ConstitutionalAI_v2.pdf
- [2212.08073] Constitutional AI: Harmlessness from AI Feedback - arXiv, accessed November 7, 2025, https://arxiv.org/abs/2212.08073
- Constitutional AI: Harmlessness from AI Feedback - Anthropic, accessed November 7, 2025, https://www.anthropic.com/research/constitutional-ai-harmlessness-from-ai-feedback
- Introducing Claude 4 - Anthropic, accessed November 7, 2025, https://www.anthropic.com/news/claude-4
- OpenAI and Anthropic try to fend off competition with new models, ideas for the U.S. AI Action Plan, and important new misalignment research - CSET, accessed November 7, 2025, https://cset.georgetown.edu/newsletter/march-20-2025/
- Claude Opus 4.1 - Anthropic, accessed November 7, 2025, https://www.anthropic.com/claude/opus
- O3 vs Claude Opus 4 vs Gemini 2.5 Pro: A Detailed Comparison - CometAPI - All AI Models in One API, accessed November 7, 2025, https://www.cometapi.com/o3-vs-claude-opus-4-vs-gemini-2-5-pro/
- Choosing AI Models in 2025: OpenAI vs Anthropic vs Google – A Practical Guide - Fanktank, accessed November 7, 2025, https://www.fanktank.ch/en/blog/choosing-ai-models-openai-anthropic-google-2025
- Manus AI Review 2025: Comparison with ChatGPT, Gemini, & Grok - McNeece, accessed November 7, 2025, https://www.mcneece.com/2025/03/manus-ai-review-2025-comparison-with-chatgpt-gemini-grok/
- How to Use Generative AI in Educational Research, accessed November 7, 2025, https://www.cambridge.org/core/elements/how-to-use-generative-ai-in-educational-research/916142E735B678F86A59240BFE651F5C
- Agentic Workflows for Economic Research: Design and Implementation - arXiv, accessed November 7, 2025, https://arxiv.org/html/2504.09736v1
- 7 Powerful Examples of Creative Problem Solving (2025) - Remote, accessed November 7, 2025, https://www.remotesparks.com/example-of-creative-problem-solving/
- PhD Thesis - University of the Highlands and Islands, accessed November 7, 2025, https://pure.uhi.ac.uk/files/14293529/Graham_Wilson_PhD.pdf
- NBER WORKING PAPER SERIES AI AGENTS FOR ECONOMIC RESEARCH Anton Korinek Working Paper 34202 http://www.nber.org/papers/w34202 NA, accessed November 7, 2025, https://www.nber.org/system/files/working_papers/w34202/w34202.pdf
- LiveResearchBench: A Live Benchmark for User-Centric Deep Research in the Wild - arXiv, accessed November 7, 2025, https://arxiv.org/html/2510.14240v1