mcp-deepcontext
MCP server enabling symbol-aware semantic search in Claude Code, allowing precise location of functions, types, and implementations via a symbol graph and embeddings.
motivation
Most code search tools treat code as text, ignoring the semantic structure that makes code meaningful. When you're working in a large codebase, you don't just want to find where a string appears. You want to find where a function is called, where a type is implemented, or what modules depend on a particular symbol.
This MCP server bridges that gap by exposing semantic code analysis capabilities to Claude. Instead of grepping for text, Claude can ask questions like "where is this interface implemented?" or "what are all the callers of this function?" and get accurate, symbol-aware results. This makes it possible to have actually useful conversations about unfamiliar codebases.
architecture
```mermaid
graph TD
  A[Claude Desktop] -->|MCP Protocol| B[mcp-deepcontext Server]
  B -->|Parse & Index| C[Symbol Database]
  B -->|AST Analysis| D[TypeScript Compiler API]
  B -->|Semantic Search| E[Vector Embeddings]
  C -->|Symbol Locations| B
  D -->|Type Information| B
  E -->|Similarity Scores| B
  F[Your Codebase] -->|Watch & Reindex| B
```
getting started
install
```sh
npm install -g mcp-deepcontext
```
quickstart
Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "deepcontext": {
      "command": "mcp-deepcontext",
      "args": ["--workspace", "/path/to/your/project"]
    }
  }
}
```
Then restart Claude Desktop. The server will index your codebase on startup.
how it works
The server uses the TypeScript compiler API to build a complete symbol graph of your codebase. This includes type definitions, function signatures, class hierarchies, and import relationships. When Claude queries for information, the server can respond with precise locations and context.
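The symbol-graph half can be sketched with the compiler API directly. The following is a minimal illustration, not the server's actual indexer: it parses a source string and records top-level function and interface declarations with their 1-based line numbers. The real server would build a full `ts.Program` with a type checker so references resolve across files.

```typescript
import * as ts from "typescript";

interface SymbolEntry {
  name: string;
  kind: "function" | "interface";
  line: number;
}

// Parse one source string and collect top-level function/interface symbols.
// The final `true` argument asks the parser to set parent pointers, which
// node.getStart() needs.
function extractSymbols(fileName: string, source: string): SymbolEntry[] {
  const sf = ts.createSourceFile(fileName, source, ts.ScriptTarget.Latest, true);
  const symbols: SymbolEntry[] = [];
  ts.forEachChild(sf, (node) => {
    if (ts.isFunctionDeclaration(node) && node.name) {
      const { line } = sf.getLineAndCharacterOfPosition(node.getStart());
      symbols.push({ name: node.name.text, kind: "function", line: line + 1 });
    } else if (ts.isInterfaceDeclaration(node)) {
      const { line } = sf.getLineAndCharacterOfPosition(node.getStart());
      symbols.push({ name: node.name.text, kind: "interface", line: line + 1 });
    }
  });
  return symbols;
}

const demo = `interface User { id: number }
function getUser(id: number): User { return { id }; }`;

// Lists User (interface, line 1) and getUser (function, line 2).
console.log(extractSymbols("demo.ts", demo));
```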
For semantic search, the server generates embeddings for code symbols and their surrounding context. This allows for fuzzy matching on intent rather than exact text. The symbol database is kept in sync with file changes through a file watcher that triggers incremental re-indexing.
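The retrieval step reduces to ranking symbols by cosine similarity between a query embedding and the precomputed symbol embeddings. Here is a small sketch of that ranking, with hand-made 3-dimensional vectors standing in for real model output:

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

interface IndexedSymbol { name: string; embedding: number[] }

// Return the names of the topK symbols most similar to the query embedding.
function rank(query: number[], index: IndexedSymbol[], topK = 2): string[] {
  return [...index]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, topK)
    .map((s) => s.name);
}

const index: IndexedSymbol[] = [
  { name: "parseConfig", embedding: [0.9, 0.1, 0.0] },
  { name: "renderChart", embedding: [0.0, 0.2, 0.9] },
  { name: "loadSettings", embedding: [0.8, 0.3, 0.1] },
];

// A query vector near the "config" cluster ranks both config-related
// symbols above the unrelated one.
console.log(rank([1, 0.2, 0], index)); // → [ 'parseConfig', 'loadSettings' ]
```

In the real server the vectors come from the configured embedding model and live alongside the symbol database, so a single query can combine fuzzy intent matching with exact graph lookups.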
The MCP protocol exposes this through tools like `search_symbols`, `find_references`, `get_definition`, and `find_implementations`. Claude can chain these together to answer complex questions about code structure and relationships.
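On the wire, invoking one of these tools uses the MCP `tools/call` JSON-RPC method. The framing below follows the MCP specification; the `symbol` argument name is an assumption about this server's tool schema, not documented behavior:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "find_references",
    "arguments": { "symbol": "parseConfig" }
  }
}
```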
configuration
The server accepts these command-line arguments:
- `--workspace <path>`: Root directory of the project to index (required)
- `--languages <list>`: Comma-separated list of languages to index (default: `typescript,javascript`)
- `--exclude <patterns>`: Glob patterns to exclude (default: `node_modules,dist,build,.git`)
- `--max-file-size <bytes>`: Skip files larger than this (default: 1MB)
- `--embedding-model <name>`: Model to use for embeddings (default: `text-embedding-3-small`)
- `--no-embeddings`: Disable semantic search; symbol-graph tools remain available
Environment variables:
- `OPENAI_API_KEY`: Required for embedding generation
- `DEEPCONTEXT_LOG_LEVEL`: Set to `debug` for verbose logging
faq
Q: What languages are supported?
Currently TypeScript and JavaScript with full type awareness. Python and Go support is planned.
Q: Does this work with large codebases?
Yes, the indexing is incremental and the symbol database uses an efficient graph structure. Tested on codebases up to 500k lines without issues.
Q: How much does embedding generation cost?
For a typical 100k line codebase, initial indexing generates about 10k embeddings, costing roughly $0.02 with text-embedding-3-small. Incremental updates are much cheaper.
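That figure is easy to sanity-check, assuming roughly 100 tokens per embedded symbol context and OpenAI's published rate of $0.02 per 1M tokens for text-embedding-3-small (both numbers may drift):

```typescript
// Back-of-envelope indexing cost for a ~100k-line codebase.
const embeddings = 10_000;           // symbols embedded on initial indexing
const tokensPerEmbedding = 100;      // assumed average context size
const pricePerMillionTokens = 0.02;  // USD, assumed current rate

const cost =
  (embeddings * tokensPerEmbedding / 1_000_000) * pricePerMillionTokens;

console.log(cost.toFixed(3)); // → "0.020"
```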
Q: Can I use this without embeddings?
Yes, pass `--no-embeddings` to disable semantic search. You'll still get all the symbol-aware tools like find references and go-to-definition.
Q: Does this send my code to external services?
Only the extracted symbol names and their immediate context are sent to OpenAI for embedding generation. You can disable this with `--no-embeddings` for completely local operation.
license
MIT