
Arc Memory MCP Server
The Arc Memory MCP Server is a bridge that exposes the structured, verifiable context and query capabilities of the local Arc Memory Temporal Knowledge Graph (TKG) to MCP-compatible clients (like AI agents in VS Code Agent Mode, Claude Desktop, Cursor, Windsurf, and other code generation agents).
Overview
Unlike typical RAG systems that rely solely on vector databases for semantic similarity, the Arc Memory MCP Server provides access to explicit, structured, temporal, and relational provenance data from the knowledge graph. It's about understanding the history and relationships (commits, PRs, issues, ADRs, file modifications), not just semantic content.
The Arc Memory MCP Server is a critical component of the Arc Memory Ecosystem, designed to be the memory layer for AI-assisted development. It serves as the Knowledge Graph (KG) access point in hybrid RAG systems within the developer workflow.
Architecture
The Arc Memory MCP Server sits at the center of the Arc Memory Ecosystem, connecting the Temporal Knowledge Graph to various AI assistants and development tools:
┌─────────────────────────────────────────────────────────────────┐
│                      Arc Memory Ecosystem                       │
│                                                                 │
│  ┌──────────────┐                         ┌─────────────────┐   │
│  │ Data Sources │                         │  AI Assistants  │   │
│  │              │                         │                 │   │
│  │   Git        │     ┌─────────────┐     │  Claude Desktop │   │
│  │   GitHub     │     │ Arc Memory  │     │  VS Code        │   │
│  │   ADRs       │────▶│ SDK         │     │   Agent Mode    │   │
│  │   Other      │────▶│ (Knowledge  │     │  Cursor         │   │
│  │   Sources    │     │  Graph)     │     │  Windsurf       │   │
│  └──────────────┘     └──────┬──────┘     │  Other MCP      │   │
│                              │            │  Clients        │   │
│                              ▼            └────────┬────────┘   │
│                ┌───────────────────────────┐       │            │
│                │  Arc Memory MCP Server    │◀──────┘            │
│                │                           │                    │
│                │  arc_trace_history        │                    │
│                │  arc_get_entity_details   │                    │
│                │  arc_find_related_entities│                    │
│                │  arc_blame_line           │                    │
│                └───────────────────────────┘                    │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
The diagram shows how:
- Data Sources (Git, GitHub, ADRs, etc.) are processed by the Arc Memory SDK to build the Temporal Knowledge Graph
- The Arc Memory MCP Server exposes this knowledge graph through standardized MCP tools
- AI Assistants (Claude Desktop, VS Code Agent Mode, Cursor, Windsurf, etc.) connect to the server to access the knowledge graph
- This enables AI assistants to provide context-aware assistance grounded in the project's actual history and decisions
Features
The server implements the following MCP tools using the latest MCP SDK (1.6.0) with enhanced error handling and context management:
- arc_trace_history: Traces the decision history for a specific line in a file
- arc_get_entity_details: Retrieves detailed information about a specific entity
- arc_find_related_entities: Finds entities directly connected to a given entity
- arc_blame_line: Gets the specific commit SHA, author, and date for a line
Requirements
- Python 3.10 or higher
- mcp Python SDK (>=1.6.0)
- arc-memory Python package (>=0.2.2)
Installation
We recommend using uv as the package manager for faster, more reliable Python package management.
- Install uv (if not already installed):
# macOS
brew install uv
# Linux/WSL
curl -LsSf https://astral.sh/uv/install.sh | sh
- Install the required packages:
# Using uv (recommended)
uv pip install mcp arc-memory
# Or using pip
pip install mcp arc-memory
- Clone this repository:
git clone https://github.com/Arc-Computer/arc-mcp-server.git
cd arc-mcp-server
- Install the server:
# Using uv (recommended)
uv pip install -e .
# Or using pip
pip install -e .
Usage
Prerequisites
Before using the server, make sure you have:
- Installed the arc-memory SDK (version 0.2.2 or higher):
  pip install "arc-memory>=0.2.2"
- Authenticated with GitHub using arc auth gh (if you have GitHub OAuth credentials)
- Built the knowledge graph using arc build
This will build a knowledge graph from your local Git repository, including commits, files, and the relationships between them. The database will be stored at ~/.arc/graph.db.
Running the Server
Run the server using:
python src/arc_mcp_server.py
The server uses the stdio transport mechanism, which means it's designed to be launched by an MCP client (like Claude Desktop or VS Code Agent Mode).
Testing
To test the server, you can use the provided test script:
python tests/test.py
This will start the server and run a series of tests to verify that all tools are working correctly.
Testing with Mock Data
If you don't have a local Arc Memory database set up yet, you can test the server with mock data:
- Edit tests/test.py to use the mock server:
  # Comment out the real server path
  # server_path = Path(__file__).parent.parent / "src" / "arc_mcp_server.py"
  # Uncomment the mock server path
  server_path = Path(__file__).parent / "mock_server.py"
- Run the test script:
  python tests/test.py
This will use the mock server, which returns predefined data instead of querying the actual database.
Testing with Real Data
The server has been successfully tested with a real Arc Memory database built from this repository. The database includes:
- Git commit history
- File relationships
- Architecture Decision Records (ADRs)
- GitHub commits and PRs
When building your own knowledge graph with arc build, the system will automatically detect and include ADR files matching the pattern **/adr/**/*.md.
Integration with Claude Desktop
To use the server with Claude Desktop:
- Open your Claude Desktop configuration file:
  - macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  - Windows: %APPDATA%\Claude\claude_desktop_config.json
- Add the server configuration using uv (recommended):
{
"mcpServers": {
"arc-memory": {
"command": "uv",
"args": [
"run",
"python",
"/absolute/path/to/src/arc_mcp_server.py"
]
}
}
}
Alternatively, you can use the fastmcp CLI to install the server directly:
# Install fastmcp if not already installed
uv pip install fastmcp
# Install the server in Claude Desktop
fastmcp install /absolute/path/to/src/arc_mcp_server.py --name "Arc Memory"
- Restart Claude Desktop
Integration with VS Code Agent Mode
To use the server with VS Code Agent Mode:
-
Install the VS Code Agent Mode extension
-
Configure the MCP server in your VS Code settings:
"anthropic.agent-mode.mcp.servers": {
"arc-memory": {
"command": "uv",
"args": [
"run",
"python",
"/absolute/path/to/src/arc_mcp_server.py"
]
}
}
Integration with Cursor
To use the server with Cursor:
- Open Cursor settings and navigate to the AI settings
- Configure the MCP server in your Cursor settings (similar to VS Code configuration)
- Restart Cursor
Integration with Windsurf
To use the server with Windsurf, follow the Windsurf documentation for configuring MCP servers.
Tool Documentation
arc_trace_history
Traces the decision history (provenance) for a specific line number within a file path.
Parameters:
- file_path: Path to the file, relative to the repository root
- line_number: 1-based line number within the file
- max_hops (optional): Maximum number of hops in the graph traversal (default: 2)
- max_results (optional): Maximum number of results to return (default: 3)
Returns: JSON string representing a list of entity summaries.
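For illustration, the arguments and result of an arc_trace_history call might look like the following. The argument names match the parameters documented above, but the result payload shape is a hypothetical example, not the server's exact schema:

```python
import json

# Arguments for arc_trace_history, matching the documented parameters.
arguments = {
    "file_path": "src/arc_mcp_server.py",
    "line_number": 42,
    "max_hops": 2,
    "max_results": 3,
}

# A hypothetical JSON result: a list of entity summaries. Field names
# and values here are illustrative only.
raw_result = json.dumps([
    {"id": "commit:abc123", "type": "commit", "title": "Add stdio transport"},
    {"id": "pr:42", "type": "pr", "title": "Implement MCP server skeleton"},
])

summaries = json.loads(raw_result)
for entity in summaries:
    print(f"{entity['id']}: {entity['title']}")
```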
arc_get_entity_details
Retrieves detailed information about a specific entity from the Arc Memory TKG.
Parameters:
- entity_id: The unique ID of the entity (e.g., 'commit:abc123', 'pr:42', 'file:src/main.py')
Returns: JSON string representing the detailed entity object.
arc_find_related_entities
Finds entities directly connected to a given entity ID in the Arc Memory TKG.
Parameters:
- entity_id: The unique ID of the starting entity
- relationship_type (optional): Filter by relationship type ('MODIFIES', 'MENTIONS', 'MERGES', 'DECIDES')
- direction (optional): Relationship direction ('outgoing', 'incoming', 'both'). Default: 'both'
- max_results (optional): Maximum number of related entities to return (default: 10)
Returns: JSON string representing a list of related entity summaries.
arc_blame_line
Gets the specific commit SHA, author, and date for the last modification of a given file and line number.
Parameters:
- file_path: Path to the file, relative to the repository root
- line_number: 1-based line number within the file
Returns: JSON string representing the commit SHA, author, and date.
Error Handling
All tools return structured JSON errors in case of failure, with an error field containing the error message.
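Client code can rely on this convention to separate failures from data. A minimal sketch, where only the error-field convention comes from the server docs and the successful payload shape is a hypothetical example:

```python
import json


def parse_tool_result(raw: str):
    """Parse a tool's JSON string result, raising on the documented
    error convention (a dict carrying an "error" field)."""
    data = json.loads(raw)
    if isinstance(data, dict) and "error" in data:
        raise RuntimeError(f"Arc Memory tool failed: {data['error']}")
    return data


# Successful result (hypothetical payload shape).
ok = parse_tool_result(
    '{"sha": "abc123", "author": "Jane Doe", "date": "2024-01-15"}'
)

# Failed result raises with the server-provided message.
try:
    parse_tool_result('{"error": "File not found: src/missing.py"}')
except RuntimeError as exc:
    print(exc)
```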
Use Cases
The Arc Memory MCP Server enables a variety of powerful use cases for AI-assisted development:
1. Code Understanding with Historical Context
When an AI assistant is asked to explain a piece of code, it can use the Arc Memory MCP Server to:
- Trace the history of the code using arc_trace_history
- Understand when and why the code was written
- Reference the PR discussions and issues that led to the code's creation
- Provide explanations grounded in the actual development history
Example prompt:
"Why was this authentication logic implemented this way? It seems complex."
2. Intelligent Code Reviews
AI assistants can provide more insightful code reviews by:
- Using arc_blame_line to identify who wrote specific parts of the code
- Referencing related PRs and issues using arc_find_related_entities
- Understanding the historical context and design decisions
- Suggesting improvements that align with the project's established patterns
Example prompt:
"Review this PR and highlight any inconsistencies with our established patterns."
3. Decision Archaeology
When developers need to understand past decisions, the AI can:
- Trace the history of a file or specific line
- Find related ADRs (Architecture Decision Records)
- Connect issues, PRs, and commits to provide a complete picture
- Explain the reasoning behind specific design choices
Example prompt:
"Why did we choose this database schema? What alternatives were considered?"
4. Contextual Code Generation
AI code generation becomes more aligned with project standards when:
- The AI can reference similar patterns in the codebase
- It understands the project's history and evolution
- It can ground suggestions in actual project decisions
- It can cite specific examples from the project's history
Example prompt:
"Generate a new API endpoint following our established patterns for error handling."
5. Knowledge Transfer for New Team Members
New developers can get up to speed faster when:
- They can ask about the history and reasoning behind code
- The AI can provide contextual explanations based on actual project history
- They can understand design decisions without having to track down team members
- They can learn project patterns with historical context
Example prompt:
"I'm new to the team. Can you explain the authentication flow and why it was designed this way?"
Current Status
The Arc Memory MCP Server has been successfully implemented and tested with both mock data and real Arc Memory databases. All four tools are functioning correctly and can be integrated with various MCP clients.
License