MCP Memory Server with Qdrant Vector Database

A Model Context Protocol (MCP) server that provides intelligent memory management capabilities using Qdrant vector database for semantic search and storage. Built specifically for Cursor IDE integration.

Features

🧠 Multiple Memory Types

  • Global Memory: Shared across all agents for common knowledge
  • Learned Memory: Lessons learned and mistakes to avoid
  • Agent-Specific Memory: Individual agent contexts and specialized knowledge

🔍 Semantic Search

  • Vector-based similarity search using sentence transformers
  • Duplicate detection to prevent redundant content
  • Configurable similarity thresholds

📝 Markdown Processing

  • Intelligent content cleaning and optimization
  • YAML front matter extraction
  • Section-based content organization

🔧 MCP Integration

  • Standard MCP protocol compliance for Cursor
  • stdin/stdout communication
  • Comprehensive tool set for memory operations

Architecture

┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│                 │    │                  │    │                 │
│   Cursor IDE    │◄──►│  MCP Server      │◄──►│  Qdrant Vector  │
│                 │    │  (stdin/stdout)  │    │  Database       │
│                 │    │                  │    │                 │
└─────────────────┘    └──────────────────┘    └─────────────────┘
                                │
                                ▼
                       ┌──────────────────┐
                       │ Sentence         │
                       │ Transformers     │
                       │ (Embeddings)     │
                       └──────────────────┘
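
Because the transport is stdin/stdout, Cursor and the server exchange MCP messages as JSON-RPC 2.0 objects. For orientation, a tools/call request for the query_memory tool described below looks roughly like this (the exact framing is handled by the MCP client, so you normally never write these by hand):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query_memory",
    "arguments": { "query": "authentication best practices", "memory_type": "all" }
  }
}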

Installation

Prerequisites

  1. Python 3.10+ with pip
  2. Qdrant Database (can run locally with Docker)
  3. Cursor IDE for MCP integration

Setup Qdrant Database

Using Docker (recommended):

docker run -p 6333:6333 -v $(pwd)/qdrant_storage:/qdrant/storage qdrant/qdrant

Or install Qdrant locally by following the official installation guide.
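
Once Qdrant is running, you can confirm it is reachable from Python with a short check using the qdrant-client package (a quick sanity sketch, not part of this project):

from qdrant_client import QdrantClient

# Should print an (initially empty) list of collections if Qdrant is reachable
client = QdrantClient(host="localhost", port=6333)
print(client.get_collections())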

Install Dependencies

Using Poetry (recommended):

# Install dependencies using Poetry
poetry install

Or using pip:

# Install Python dependencies
pip install -r requirements.txt

Configuration

  1. Copy the example environment file:
cp .env.example .env
  2. Edit .env with your settings:
# Qdrant Configuration
QDRANT_HOST=localhost
QDRANT_PORT=6333
QDRANT_API_KEY=

# Embedding Model Configuration  
EMBEDDING_MODEL=all-MiniLM-L6-v2
EMBEDDING_DIMENSION=384

# Memory Configuration
SIMILARITY_THRESHOLD=0.8
MAX_RESULTS=10

# Agent Configuration
DEFAULT_AGENT_ID=default

# Server Configuration
LOG_LEVEL=INFO
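
These values are presumably read by src/config.py. A purely illustrative sketch of how such a loader might parse them (the variable names mirror the .env above, but the real module may differ):

import os

# Illustrative only -- the project's src/config.py may be structured differently
QDRANT_HOST = os.getenv("QDRANT_HOST", "localhost")
QDRANT_PORT = int(os.getenv("QDRANT_PORT", "6333"))
QDRANT_API_KEY = os.getenv("QDRANT_API_KEY") or None
EMBEDDING_MODEL = os.getenv("EMBEDDING_MODEL", "all-MiniLM-L6-v2")
EMBEDDING_DIMENSION = int(os.getenv("EMBEDDING_DIMENSION", "384"))
SIMILARITY_THRESHOLD = float(os.getenv("SIMILARITY_THRESHOLD", "0.8"))
MAX_RESULTS = int(os.getenv("MAX_RESULTS", "10"))
DEFAULT_AGENT_ID = os.getenv("DEFAULT_AGENT_ID", "default")
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")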

Usage

Starting the Server

python server.py

The server will:

  1. Connect to Qdrant database
  2. Initialize vector collections
  3. Load the embedding model
  4. Start listening for MCP commands via stdin/stdout
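
The following is a minimal sketch of that startup sequence, assuming the qdrant-client and sentence-transformers packages referenced elsewhere in this README; the actual server.py may structure this differently:

# Illustrative startup sketch -- not the project's actual server.py
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams
from sentence_transformers import SentenceTransformer

client = QdrantClient(host="localhost", port=6333)

# Ensure the shared collections exist (384 dimensions matches all-MiniLM-L6-v2)
for name in ("global_memory", "learned_memory"):
    if not client.collection_exists(name):
        client.create_collection(
            collection_name=name,
            vectors_config=VectorParams(size=384, distance=Distance.COSINE),
        )

# Load the embedding model once at startup; the first run downloads it
model = SentenceTransformer("all-MiniLM-L6-v2")
print("Ready:", [c.name for c in client.get_collections().collections])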

Cursor IDE Integration

Add the server to your Cursor MCP configuration:

{
  "mcpServers": {
    "memory-server": {
      "command": "/media/hannesn/storage/Code/MCP/.venv/bin/python",
      "args": ["/media/hannesn/storage/Code/MCP/server.py"],
      "cwd": "/media/hannesn/storage/Code/MCP",
      "env": {
        "PYTHONPATH": "/media/hannesn/storage/Code/MCP",
        "QDRANT_HOST": "localhost",
        "QDRANT_PORT": "6333",
        "EMBEDDING_MODEL": "all-MiniLM-L6-v2",
        "SIMILARITY_THRESHOLD": "0.8",
        "MAX_RESULTS": "10",
        "LOG_LEVEL": "INFO"
      }
    }
  }
}

Alternatively, you can run the server using Poetry:

poetry run python server.py
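
If you want Cursor to launch the server through Poetry as well, a configuration along these lines should also work (the paths mirror the example above and must be adjusted to your checkout):

{
  "mcpServers": {
    "memory-server": {
      "command": "poetry",
      "args": ["run", "python", "server.py"],
      "cwd": "/media/hannesn/storage/Code/MCP"
    }
  }
}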

MCP Tools

1. set_agent_context

Initialize agent context from a markdown file.

Parameters:

  • agent_id (string): Unique identifier for the agent
  • context_file_path (string): Path to markdown file with agent context
  • description (string, optional): Description of the context

Example:

{
  "tool": "set_agent_context",
  "arguments": {
    "agent_id": "frontend_dev",
    "context_file_path": "./contexts/frontend_agent.md",
    "description": "Frontend development agent context"
  }
}

2. add_to_global_memory

Add content to global memory shared across all agents.

Parameters:

  • file_path (string): Path to markdown file
  • description (string, optional): Content description

Example:

{
  "tool": "add_to_global_memory", 
  "arguments": {
    "file_path": "./docs/coding_standards.md",
    "description": "Company coding standards"
  }
}

3. add_to_learned_memory

Store lessons learned to avoid repeated mistakes.

Parameters:

  • file_path (string): Path to markdown file with lessons
  • lesson_type (string): Type of lesson (e.g., "deployment", "security")
  • description (string, optional): Lesson description

Example:

{
  "tool": "add_to_learned_memory",
  "arguments": {
    "file_path": "./lessons/deployment_issues.md", 
    "lesson_type": "deployment",
    "description": "Critical deployment lessons"
  }
}

4. add_to_agent_memory

Add content to agent-specific memory.

Parameters:

  • agent_id (string): Target agent identifier
  • file_path (string): Path to markdown file
  • description (string, optional): Content description

Example:

{
  "tool": "add_to_agent_memory",
  "arguments": {
    "agent_id": "backend_dev",
    "file_path": "./docs/api_patterns.md",
    "description": "Backend API design patterns"
  }
}

5. query_memory

Search memory collections for relevant content.

Parameters:

  • query (string): Search query
  • memory_type (string): "global", "learned", "agent", or "all"
  • agent_id (string, optional): Agent ID for agent-specific queries
  • max_results (integer, optional): Maximum results (default: 10)

Example:

{
  "tool": "query_memory",
  "arguments": {
    "query": "authentication best practices",
    "memory_type": "all",
    "max_results": 5
  }
}

6. compare_against_learned_memory

Check proposed actions against past lessons learned.

Parameters:

  • action_description (string): Description of proposed action
  • agent_id (string, optional): Agent making the request

Example:

{
  "tool": "compare_against_learned_memory",
  "arguments": {
    "action_description": "Deploy database migration on Friday afternoon",
    "agent_id": "devops_agent"
  }
}

Memory Types Explained

Global Memory

  • Purpose: Store knowledge shared across all agents
  • Content: Coding standards, documentation, best practices
  • Access: All agents can query this memory
  • Use Case: Company-wide policies, architectural decisions

Learned Memory

  • Purpose: Store lessons learned from past mistakes
  • Content: Incident reports, post-mortems, anti-patterns
  • Access: Most agents (excluding "human-like" tester agents)
  • Use Case: Avoid repeating past mistakes, improve decisions

Agent-Specific Memory

  • Purpose: Store knowledge specific to individual agents
  • Content: Role definitions, specialized knowledge, context
  • Access: Only the specific agent
  • Use Case: Agent initialization, specialized expertise

Testing

Run Basic Functionality Tests

python tests/test_basic_functionality.py

This will test:

  • Qdrant connection and collection setup
  • Memory operations (add, query, duplicate detection)
  • Markdown processing and content cleaning
  • Vector embedding and similarity search
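
To see the similarity check in isolation, the snippet below reproduces the core idea with sentence-transformers: two texts are embedded and their cosine similarity is compared against SIMILARITY_THRESHOLD. This is a standalone sketch, not the project's actual duplicate-detection code:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
a = model.encode("Always run database migrations in a maintenance window.")
b = model.encode("Database migrations should be executed during maintenance windows.")

score = util.cos_sim(a, b).item()  # cosine similarity in [-1, 1]
print(f"similarity={score:.2f}", "-> duplicate" if score >= 0.8 else "-> keep both")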

Manual Testing with Sample Data

  1. Start the server:
python server.py
  2. Use the provided sample markdown files in sample_data/:
    • frontend_agent_context.md: Frontend agent context
    • backend_agent_context.md: Backend agent context
    • deployment_lessons.md: Learned lessons
    • global_standards.md: Global development standards

Troubleshooting

Common Issues

Qdrant Connection Failed

❌ Failed to initialize Qdrant: ConnectionError
  • Ensure Qdrant is running on configured host/port
  • Check firewall settings
  • Verify API key if using Qdrant Cloud

Embedding Model Download Issues

❌ Failed to load embedding model
  • Ensure internet connection for first download
  • Check available disk space (models can be large)
  • Try alternative model in configuration

Memory Full / Performance Issues

  • Switch to a smaller embedding model and reduce EMBEDDING_DIMENSION to match
  • Increase SIMILARITY_THRESHOLD to reduce results
  • Consider pruning old content from collections

Debugging

Enable debug logging:

export LOG_LEVEL=DEBUG
python server.py

Check Qdrant collections:

curl http://localhost:6333/collections

Configuration Reference

Environment Variables

Variable              Default           Description
QDRANT_HOST           localhost         Qdrant server host
QDRANT_PORT           6333              Qdrant server port
QDRANT_API_KEY        (empty)           API key for Qdrant Cloud
EMBEDDING_MODEL       all-MiniLM-L6-v2  Sentence transformer model
EMBEDDING_DIMENSION   384               Vector dimension size
SIMILARITY_THRESHOLD  0.8               Duplicate detection threshold
MAX_RESULTS           10                Default max query results
DEFAULT_AGENT_ID      default           Default agent identifier
LOG_LEVEL             INFO              Logging verbosity

Collection Names

  • Global Memory: global_memory
  • Learned Memory: learned_memory
  • Agent Memory: agent_specific_memory_{agent_id}
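
For example, an agent registered as backend_dev stores its context in agent_specific_memory_backend_dev. The collections can be inspected directly with qdrant-client if needed (illustrative sketch; the agent ID is an assumed example):

from qdrant_client import QdrantClient

client = QdrantClient(host="localhost", port=6333)
agent_id = "backend_dev"  # assumed example agent
collection = f"agent_specific_memory_{agent_id}"

# Number of stored vectors in this agent's collection
print(collection, client.count(collection_name=collection).count)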

Development

Project Structure

mcp-memory-server/
├── server.py                 # Main MCP server
├── src/
│   ├── __init__.py
│   ├── config.py             # Configuration management
│   ├── memory_manager.py     # Qdrant operations
│   └── markdown_processor.py # Markdown handling
├── tests/
│   └── test_basic_functionality.py
├── sample_data/              # Example markdown files
├── docs/                     # Additional documentation
├── requirements.txt          # Python dependencies
├── pyproject.toml            # Poetry configuration
└── README.md                 # This file

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Run tests: python -m pytest tests/
  5. Submit a pull request

Adding New Tools

  1. Add tool function to MCPMemoryServer._register_tools()
  2. Update _list_tools() method with tool schema
  3. Add tests for the new functionality
  4. Update this README
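
As a reference for step 2, an MCP tool schema is a JSON object with a name, a description, and a JSON Schema describing its input. A hypothetical prune_memory tool might be described like this (the tool name and parameters are made up for illustration):

{
  "name": "prune_memory",
  "description": "Remove entries older than a cutoff from a memory collection",
  "inputSchema": {
    "type": "object",
    "properties": {
      "memory_type": { "type": "string", "enum": ["global", "learned", "agent"] },
      "older_than_days": { "type": "integer" }
    },
    "required": ["memory_type", "older_than_days"]
  }
}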

License

MIT License - see LICENSE file for details.

Support

For issues and questions:

  1. Check the troubleshooting section above
  2. Review Qdrant documentation for database issues
  3. Check MCP protocol documentation for integration issues
  4. Open an issue with detailed logs and configuration
