DeepResearch MCP


📚 Overview

DeepResearch MCP is a powerful research assistant built on the Model Context Protocol (MCP). It conducts intelligent, iterative research on any topic through web searches, analysis, and comprehensive report generation.

🌟 Key Features

  • Intelligent Topic Exploration - Automatically identifies knowledge gaps and generates focused search queries
  • Comprehensive Content Extraction - Enhanced web scraping with improved content organization
  • Structured Knowledge Processing - Preserves important information while managing token usage
  • Scholarly Report Generation - Creates detailed, well-structured reports with executive summaries, analyses, and visualizations
  • Complete Bibliography - Properly cites all sources with numbered references
  • Adaptive Content Management - Automatically manages content to stay within token limits
  • Error Resilience - Recovers from errors and generates partial reports when full processing isn't possible

🛠️ Architecture


┌────────────────────┐     ┌─────────────────┐     ┌────────────────┐
│                    │     │                 │     │                │
│  MCP Server Layer  ├────►│ Research Service├────►│ Search Service │
│  (Tools & Prompts) │     │ (Session Mgmt)  │     │  (Firecrawl)   │
│                    │     │                 │     │                │
└────────────────────┘     └─────────┬───────┘     └────────────────┘
                                     │
                                     ▼
                           ┌─────────────────┐
                           │                 │
                           │  OpenAI Service │
                           │ (Analysis/Rpt)  │
                           │                 │
                           └─────────────────┘

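The server layer is built on the MCP TypeScript SDK. As a rough sketch of how a tool such as initialize-research could be registered and delegated to the research service (the handler body and the researchService stub below are illustrative placeholders, not the project's actual implementation):

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Placeholder for the Research Service layer (session management)
const researchService = {
  async createSession(query: string, depth: number): Promise<string> {
    return `session-${Date.now()}`; // real implementation would persist session state
  },
};

const server = new McpServer({ name: "deep-research", version: "1.0.0" });

// The server layer validates input and delegates to the research service
server.tool(
  "initialize-research",
  { query: z.string(), depth: z.number() },
  async ({ query, depth }) => {
    const sessionId = await researchService.createSession(query, depth);
    return { content: [{ type: "text", text: JSON.stringify({ sessionId }) }] };
  }
);

// Expose the server over stdio for MCP client connections
await server.connect(new StdioServerTransport());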

💻 Installation

Prerequisites

  • Node.js 18 or higher
  • OpenAI API key
  • Firecrawl API key

Setup Steps

  1. Clone the repository

    git clone <repository-url>
    cd deep-research-mcp
    
  2. Install dependencies

    npm install
    
  3. Configure environment variables

    cp .env.example .env
    

    Edit the .env file and add your API keys:

    OPENAI_API_KEY=sk-your-openai-api-key
    FIRECRAWL_API_KEY=your-firecrawl-api-key
    
  4. Build the project

    npm run build
    

🚀 Usage

Running the MCP Server

Start the server on stdio for MCP client connections:

npm start

Using the Example Client

Run research on a specific topic with a specified depth:

npm run client "Your research topic" 3

Parameters:

  • First argument: Research topic or query
  • Second argument: Research depth (number of iterations, default: 2)
  • Third argument (optional): "complete" to use the complete-research tool (one-step process)

Example:

npm run client "the impact of climate change on coral reefs" 3 complete

Example Output

The DeepResearch MCP will produce a comprehensive report that includes:

  • Executive Summary - Concise overview of the research findings
  • Introduction - Context and importance of the research topic
  • Methodology - Description of the research approach
  • Comprehensive Analysis - Detailed examination of the topic
  • Comparative Analysis - Visual comparison of key aspects
  • Discussion - Interpretation of findings and implications
  • Limitations - Constraints and gaps in the research
  • Conclusion - Final insights and recommendations
  • Bibliography - Complete list of sources with URLs

🔧 MCP Integration

Available MCP Resources

| Resource Path | Description |
| --- | --- |
| research://state/{sessionId} | Access the current state of a research session |
| research://findings/{sessionId} | Access the collected findings for a session |
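
These resources can be read with the standard MCP client API. A minimal sketch, assuming a connected client and a sessionId obtained from initialize-research (see the Sample Client Code section below):

// Read the current state of an existing research session
const state = await client.readResource({
  uri: `research://state/${sessionId}`,
});
console.log(state.contents[0].text);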

Available MCP Tools

| Tool Name | Description | Parameters |
| --- | --- | --- |
| initialize-research | Start a new research session | query: string, depth: number |
| execute-research-step | Execute the next research step | sessionId: string |
| generate-report | Create a final report | sessionId: string, timeout: number (optional) |
| complete-research | Execute the entire research process | query: string, depth: number, timeout: number (optional) |
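
For the one-step workflow, complete-research can be invoked directly from a client. A short sketch using the same client setup as in the Sample Client Code section (the query, depth, and timeout values here are illustrative):

// One-shot research: initialize, iterate, and report in a single tool call
const result = await client.callTool({
  name: "complete-research",
  arguments: {
    query: "The impact of climate change on coral reefs",
    depth: 2,
    timeout: 300000, // 5 minutes (optional)
  },
});
console.log(result.content[0].text);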

🖥️ Claude Desktop Integration

DeepResearch MCP can be integrated with Claude Desktop to provide direct research capabilities to Claude.

Configuration Steps

  1. Copy the sample configuration

    cp claude_desktop_config_sample.json ~/path/to/claude/desktop/config/directory/claude_desktop_config.json
    
  2. Edit the configuration file

    Update the path to point to your installation of deep-research-mcp and add your API keys:

    {
      "mcpServers": {
        "deep-research": {
          "command": "node",
          "args": [
            "/absolute/path/to/your/deep-research-mcp/dist/index.js"
          ],
          "env": {
            "FIRECRAWL_API_KEY": "your-firecrawler-api-key",
            "OPENAI_API_KEY": "your-openai-api-key"
          }
        }
      }
    }
    
  3. Restart Claude Desktop

    After saving the configuration, restart Claude Desktop for the changes to take effect.

  4. Using with Claude Desktop

    Now you can ask Claude to perform research using commands like:

    Can you research the impact of climate change on coral reefs and provide a detailed report?
    

📋 Sample Client Code

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Connect to the server
  const transport = new StdioClientTransport({
    command: "node",
    args: ["dist/index.js"]
  });

  const client = new Client({ name: "deep-research-client", version: "1.0.0" });
  await client.connect(transport);

  // Initialize research
  const initResult = await client.callTool({
    name: "initialize-research",
    arguments: {
      query: "The impact of artificial intelligence on healthcare",
      depth: 3
    }
  });
  
  // Parse the response to get sessionId
  const { sessionId } = JSON.parse(initResult.content[0].text);
  
  // Execute steps until complete
  let currentDepth = 0;
  while (currentDepth < 3) {
    const stepResult = await client.callTool({
      name: "execute-research-step",
      arguments: { sessionId }
    });
    
    const stepInfo = JSON.parse(stepResult.content[0].text);
    currentDepth = stepInfo.currentDepth;
    
    console.log(`Completed step ${stepInfo.currentDepth}/${stepInfo.maxDepth}`);
  }
  
  // Generate final report with timeout
  const report = await client.callTool({
    name: "generate-report",
    arguments: { 
      sessionId,
      timeout: 180000 // 3 minutes timeout
    }
  });
  
  console.log("Final Report:");
  console.log(report.content[0].text);
}

main().catch(console.error);

🔍 Troubleshooting

Common Issues

  • Token Limit Exceeded: For very large research topics, you may encounter OpenAI token limit errors. Try:

    • Reducing the research depth
    • Using more specific queries
    • Breaking complex topics into smaller sub-topics
  • Timeout Errors: For complex research, the process may time out. Solutions:

    • Increase the timeout parameters in tool calls
    • Use the complete-research tool with a longer timeout
    • Process research in smaller chunks
  • API Rate Limits: If you encounter rate limit errors from OpenAI or Firecrawl:

    • Implement a delay between research steps
    • Use an API key with higher rate limits
    • Retry with exponential backoff (see the sketch below)
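
A minimal retry helper with exponential backoff, written against the client calls shown elsewhere in this README (the delay values and attempt limit are illustrative, and client/sessionId are assumed to come from the Sample Client Code section):

// Retry an async operation with exponential backoff and a little jitter
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err;
      const delayMs = 1000 * 2 ** (attempt - 1) + Math.random() * 250;
      console.warn(`Attempt ${attempt} failed, retrying in ${Math.round(delayMs)}ms`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Example: wrap a research step so transient rate-limit errors are retried
const stepResult = await withRetry(() =>
  client.callTool({ name: "execute-research-step", arguments: { sessionId } })
);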

📝 License

ISC

🙏 Acknowledgements
