# Simple Memory Extension MCP Server
An MCP server to extend the context window / memory of agents. Useful when coding large features or vibe coding and you need to store and recall progress, key moments, changes, or anything else worth remembering. Simply ask the agent to store and recall memories whenever you need, or let the agent fully manage its memory however it sees fit (through Cursor rules, for example).
## Usage

### Starting the Server

```bash
npm install
npm start
```
### Available Tools

#### Context Item Management

- `store_context_item` - Store a value with key in namespace
- `retrieve_context_item_by_key` - Get value by key
- `delete_context_item` - Delete key-value pair
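These are ordinary MCP tool calls. A minimal sketch of invoking them with the official TypeScript SDK, assuming the argument names `namespace`, `key`, and `value` (check the server's tool schemas for the exact names):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the memory server over stdio and connect a client to it.
const transport = new StdioClientTransport({ command: "npm", args: ["start"] });
const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Store a value under a key in a namespace (argument names are assumptions).
await client.callTool({
  name: "store_context_item",
  arguments: {
    namespace: "my-feature",
    key: "progress",
    value: "Implemented the auth flow; token refresh still TODO.",
  },
});

// Read it back by key.
const item = await client.callTool({
  name: "retrieve_context_item_by_key",
  arguments: { namespace: "my-feature", key: "progress" },
});
console.log(item.content);
```

In practice an agent (e.g. Cursor) makes these calls for you; the sketch just shows the shape of the requests.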
#### Namespace Management

- `create_namespace` - Create new namespace
- `delete_namespace` - Delete namespace and all contents
- `list_namespaces` - List all namespaces
- `list_context_item_keys` - List keys in a namespace
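Continuing the sketch above, the namespace tools follow the same pattern (argument names are again assumptions, not taken from the server's schemas):

```typescript
// Create a namespace, list what it holds, then tear it down.
await client.callTool({ name: "create_namespace", arguments: { namespace: "my-feature" } });

const keys = await client.callTool({
  name: "list_context_item_keys",
  arguments: { namespace: "my-feature" },
});
console.log(keys.content);

// Deleting a namespace removes all of its key-value pairs.
await client.callTool({ name: "delete_namespace", arguments: { namespace: "my-feature" } });
```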
#### Semantic Search

- `retrieve_context_items_by_semantic_search` - Find items by meaning
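A hedged example of a search call; the parameter names (`query`, `limit`) are assumptions, so consult the tool's schema before relying on them:

```typescript
// Find stored items whose meaning matches the query, not just their keys.
const matches = await client.callTool({
  name: "retrieve_context_items_by_semantic_search",
  arguments: {
    namespace: "my-feature",
    query: "what was the plan for token refresh?",
    limit: 5, // hypothetical cap on the number of results
  },
});
console.log(matches.content);
```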
#### Semantic Search Implementation
- Query converted to vector using E5 model
- Text automatically split into chunks for better matching
- Cosine similarity calculated between query and stored chunks
- Results filtered by threshold and sorted by similarity
- Top matches returned with full item values
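As a rough illustration of the scoring and filtering steps above, here is a sketch in TypeScript; it is not the server's actual code, and the threshold value is an assumed placeholder:

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

interface Chunk {
  itemKey: string; // key of the context item this chunk came from
  embedding: number[]; // E5 embedding of the chunk text
}

// Score every stored chunk against the query vector, drop anything below
// the threshold, and return the best-matching item keys first.
function rankChunks(queryEmbedding: number[], chunks: Chunk[], threshold = 0.75): string[] {
  return chunks
    .map((c) => ({ key: c.itemKey, score: cosineSimilarity(queryEmbedding, c.embedding) }))
    .filter((s) => s.score >= threshold)
    .sort((a, b) => b.score - a.score)
    .map((s) => s.key);
}
```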
## Development

```bash
# Dev server
npm run dev

# Format code
npm run format
```
### .env

```bash
# Path to SQLite database file
DB_PATH=./data/context.db

PORT=3000

# Use HTTP SSE or Stdio
USE_HTTP_SSE=true

# Logging Configuration: debug, info, warn, error
LOG_LEVEL=info
```
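For reference, a sketch of how these variables might be consumed at startup; the names match the `.env` above, but the actual bootstrap code may differ:

```typescript
import "dotenv/config";

// Read configuration with the same defaults as the sample .env.
const config = {
  dbPath: process.env.DB_PATH ?? "./data/context.db",
  port: Number(process.env.PORT ?? 3000),
  useHttpSse: process.env.USE_HTTP_SSE === "true",
  logLevel: process.env.LOG_LEVEL ?? "info",
};

if (config.useHttpSse) {
  // Serve MCP over HTTP with Server-Sent Events on config.port.
} else {
  // Speak MCP over stdin/stdout (the usual default for editor integrations).
}
```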
## Semantic Search
This project includes semantic search capabilities using the E5 embedding model from Hugging Face. This allows you to find context items based on their meaning rather than just exact key matches.
### Setup

The semantic search feature requires Python dependencies, but these should be installed automatically when you run `npm run start`.
### Embedding Model

We use the intfloat/multilingual-e5-large-instruct model from Hugging Face to generate embeddings.
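One practical detail of the instruct variant: search queries are supposed to be wrapped with a task instruction before embedding, while stored passages are embedded without any prefix (per the model card). A sketch of that formatting, with an illustrative task description:

```typescript
// Build the query string the E5 instruct model expects.
function formatQueryForE5(task: string, query: string): string {
  return `Instruct: ${task}\nQuery: ${query}`;
}

const embedInput = formatQueryForE5(
  "Given a search query, retrieve relevant context items",
  "what was the plan for token refresh?",
);
// embedInput is what gets embedded for the query side; stored chunks
// are embedded as plain text with no prefix.
```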
## Notes
Developed mostly while vibe coding, so don't expect much :D. But it works, and I found it helpful so w/e. Feel free to contribute or suggest improvements.
## Recommended Servers

- **playwright-mcp**: A Model Context Protocol server that enables LLMs to interact with web pages through structured accessibility snapshots, without requiring vision models or screenshots.
- **Magic Component Platform (MCP)**: An AI-powered tool that generates modern UI components from natural language descriptions, integrating with popular IDEs to streamline UI development workflows.
- **MCP Package Docs Server**: Helps LLMs efficiently access and fetch structured documentation for packages in Go, Python, and NPM, enhancing software development with multi-language support.
- **Claude Code MCP**: An implementation of Claude Code as a Model Context Protocol server that exposes Claude's software engineering capabilities (code generation, editing, reviewing, and file operations) through the standardized MCP interface.
- **@kazuph/mcp-taskmanager**: A Model Context Protocol server for task management that lets Claude Desktop (or any MCP client) manage and execute tasks in a queue-based system.
- **Linear MCP Server**: Enables programmatic interaction with Linear's API for managing issues, teams, and projects through the Model Context Protocol.
- **mermaid-mcp-server**: A Model Context Protocol (MCP) server that converts Mermaid diagrams to PNG images.
- **Jira-Context-MCP**: An MCP server that provides Jira ticket information to AI coding agents like Cursor.
- **Linear MCP Server**: A Model Context Protocol server that integrates with Linear's issue tracking system, allowing LLMs to create, update, search, and comment on Linear issues through natural language interactions.
- **Sequential Thinking MCP Server**: Facilitates structured problem-solving by breaking complex issues into sequential steps, supporting revisions and multiple solution paths through full MCP integration.