Context Engine MCP Server
A local-first, agent-agnostic Model Context Protocol (MCP) server built on the Auggie SDK as its core context engine. It provides semantic search, persistent memory, implementation planning, and automated code review tools for development workflows.
New here? Check out INDEX.md for a complete documentation guide!
Quick Start: QUICKSTART.md → GETTING_STARTED.md → API_REFERENCE.md
Architecture: TECHNICAL_ARCHITECTURE.md for a deep technical dive
Architecture
This implementation follows a clean 5-layer architecture as outlined in plan.md:
```
┌─────────────────────────────┐
│  Coding Agents (Clients)    │  Layer 4: Claude, Cursor, etc.
│  Codex | Claude | Cursor    │
└─────────────▲───────────────┘
              │ MCP (tools)
┌─────────────┴───────────────┐
│  MCP Interface Layer        │  Layer 3: server.ts, tools/
│  (standardized tool API)    │
└─────────────▲───────────────┘
              │ internal API
┌─────────────┴───────────────┐
│  Context Service Layer      │  Layer 2: serviceClient.ts
│  (query orchestration)      │
└─────────────▲───────────────┘
              │ domain calls
┌─────────────┴───────────────┐
│  Core Context Engine        │  Layer 1: Auggie SDK
│  (indexing, retrieval)      │
└─────────────▲───────────────┘
              │ storage
┌─────────────┴───────────────┐
│  Storage / Index Backend    │  Layer 5: Auggie's internal
│  (vectors, metadata)        │
└─────────────────────────────┘
```
Layer Responsibilities
- Layer 1 (Core Engine): Auggie SDK handles file ingestion, chunking, embedding, and semantic retrieval
- Layer 2 (Service): Orchestrates context, formats snippets, deduplicates, enforces limits
- Layer 3 (MCP Interface): Exposes tools, validates I/O, maps calls to service layer
- Layer 4 (Agents): Consume context and generate responses
- Layer 5 (Storage): Persists embeddings and metadata
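The call path through these layers can be sketched in a few lines. This is an illustrative stub, not the real Auggie SDK or server API:

```javascript
// Layer 1 stand-in: semantic retrieval over the index (Auggie SDK in the real server)
const engine = {
  retrieve: (query, topK) => [{ file: "src/auth/jwt.ts", score: 0.91 }].slice(0, topK),
};

// Layer 2: orchestrates the engine, formats and limits results
const service = {
  search(query, topK = 5) {
    return engine.retrieve(query, topK).map((r) => ({ path: r.file, relevance: r.score }));
  },
};

// Layer 3: an MCP tool handler that validates input and delegates downward
function semanticSearchTool(args) {
  if (!args || typeof args.query !== "string") throw new Error("query is required");
  return { results: service.search(args.query, args.top_k) };
}
```

Each layer only talks to the one directly below it, which is what keeps the engine LLM- and agent-agnostic.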
Features
MCP Tools (41 tools available)
Core Context Tools (10)
- `index_workspace(force?)` - Index workspace files for semantic search
  - `force` (optional): Force re-indexing even if files haven't changed
- `codebase_retrieval(query, top_k?)` - PRIMARY semantic search with JSON output for programmatic use
  - `query`: Natural language search query
  - `top_k` (optional): Number of results to return (default: 5)
- `semantic_search(query, top_k?, mode?, bypass_cache?, timeout_ms?)` - Semantic code search with markdown-formatted output
  - `query`: Natural language search query
  - `top_k` (optional): Number of results to return (default: 5)
  - `mode` (optional): `"fast"` (default) or `"deep"` for higher recall at higher latency
  - `bypass_cache` (optional): When true, bypass caches for this call
  - `timeout_ms` (optional): Cap time spent in the retrieval pipeline (ms)
- `get_file(path)` - Retrieve complete file contents
  - `path`: Relative path to file from workspace root
- `get_context_for_prompt(query, max_files?, token_budget?, include_related?, min_relevance?, bypass_cache?)` - Get a comprehensive context bundle for prompt enhancement
  - `query`: Context request description
  - `max_files` (optional): Maximum files to include (default: 5)
  - `token_budget` (optional): Token budget for the bundle (default: 8000)
  - `include_related` (optional): Include related/imported files (default: true)
  - `min_relevance` (optional): Minimum relevance score (default: 0.3)
  - `bypass_cache` (optional): When true, bypass caches for this call
- `enhance_prompt(prompt)` - AI-powered prompt enhancement with codebase context
  - `prompt`: Simple prompt to enhance
- `index_status()` - View index health metadata (status, fileCount, lastIndexed, isStale)
- `reindex_workspace()` - Clear and rebuild the entire index from scratch
- `clear_index()` - Remove index state without rebuilding
- `tool_manifest()` - Discovery tool for available capabilities
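For instance, a client wanting structured results might call (the query text is illustrative):

```
codebase_retrieval({
  query: "Where is request authentication handled?",
  top_k: 3
})
```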
Memory System (2)
- `add_memory(category, content, title?)` - Store persistent memories for future sessions
  - `category`: 'preferences', 'decisions', or 'facts'
  - `content`: The memory content to store (max 5000 characters)
  - `title` (optional): Title for the memory
- `list_memories(category?)` - List all stored memories
  - `category` (optional): Filter to a specific category
Planning & Execution (4)
- `create_plan(task, options?)` - Generate structured execution plans with DAG analysis
  - `task`: Task or goal to plan for
  - `generate_diagrams` (optional): Generate Mermaid diagrams (default: true)
- `refine_plan(current_plan, feedback?, clarifications?)` - Refine existing plans based on feedback
- `visualize_plan(plan, diagram_type?)` - Generate visual representations (Mermaid diagrams)
- `execute_plan(plan, ...)` - Execute plan steps with AI-powered code generation
Plan Management (13)
- `save_plan(plan, name?, tags?, overwrite?)` - Save plans to persistent storage
- `load_plan(plan_id | name)` - Load previously saved plans
- `list_plans(status?, tags?, limit?)` - List saved plans with filtering
- `delete_plan(plan_id)` - Delete saved plans from storage
- `request_approval(plan_id, step_numbers?)` - Create approval requests for plans or specific steps
- `respond_approval(request_id, action, comments?)` - Respond to approval requests
- `start_step(plan_id, step_number)` - Mark a step as in-progress
- `complete_step(plan_id, step_number, notes?, files_modified?)` - Mark a step as completed
- `fail_step(plan_id, step_number, error, ...)` - Mark a step as failed
- `view_progress(plan_id)` - View execution progress and statistics
- `view_history(plan_id, limit?, include_plans?)` - View version history of a plan
- `compare_plan_versions(plan_id, from_version, to_version)` - Generate a diff between versions
- `rollback_plan(plan_id, version, reason?)` - Roll back to a previous plan version
Code Review (5)
- `review_changes(diff, file_contexts?, options?)` - AI-powered code review with structured output
- `review_git_diff(target?, base?, include_patterns?, options?)` - Review code changes from git automatically
- `review_diff(diff, changed_files?, options?)` - Enterprise review with risk scoring and static analysis
  - Risk scoring (1-5) based on deterministic preflight
  - Change classification (feature/bugfix/refactor/infra/docs)
  - Optional static analysis (TypeScript, Semgrep)
  - Per-phase timing telemetry
- `check_invariants(diff, changed_files?, invariants_path?)` - Run YAML invariants deterministically (no LLM)
- `run_static_analysis(changed_files?, options?)` - Run local static analyzers (tsc, semgrep)
Reactive Review (7)
- `reactive_review_pr(...)` - Start a session-based, parallelized code review
- `get_review_status(session_id)` - Track progress of a reactive review
- `pause_review(session_id)` - Pause a running review session
- `resume_review(session_id)` - Resume a paused session
- `get_review_telemetry(session_id)` - Detailed metrics (tokens, speed, cache hits)
- `scrub_secrets(content)` - Mask API keys and sensitive data
- `validate_content(content, content_type, ...)` - Multi-tier validation for AI-generated content
Key Characteristics
- ✅ Local-first: No cloud dependencies, no exposed ports, no data leakage
- ✅ Agent-agnostic: Works with any MCP-compatible coding agent
- ✅ LLM-agnostic: No LLM-specific logic in the engine
- ✅ Storage-agnostic: Auggie SDK handles storage abstraction
- ✅ Extensible: Clean separation allows easy feature additions
- ✅ Real-time watching: Automatic incremental indexing on file changes (v1.1.0)
- ✅ Background indexing: Non-blocking indexing via worker threads (v1.1.0)
- ✅ Offline policy: Enforce local-only operation with an environment variable (v1.1.0)
- ✅ Planning mode: AI-powered implementation planning with DAG analysis (v1.4.0)
- ✅ Execution tracking: Step-by-step execution with dependency management (v1.4.0)
- ✅ Version control: Plan versioning with diff and rollback support (v1.4.0)
- ✅ Approval workflows: Built-in approval system for plans and steps (v1.4.0)
- ✅ Defensive programming: Comprehensive null/undefined handling (v1.4.1)
- ✅ Cross-session memory: Persistent memory system for preferences, decisions, and facts (v1.4.1)
- ✅ AI-powered code review: Structured code review with confidence scoring and priority levels (v1.7.0)
- ✅ Git integration: Automatic diff retrieval for staged, unstaged, branch, and commit changes (v1.7.0)
- ✅ Reactive optimization: 180-600x faster reactive reviews via AI Agent Executor, multi-layer caching, batching, and worker pool optimization (v1.8.0)
- ✅ High availability: Circuit breakers, adaptive timeouts, and zombie session detection (v1.8.0)
- ✅ Static analysis integration: Optional TypeScript and Semgrep analyzers for deterministic feedback (v1.9.0)
- ✅ Invariants checking: YAML-based custom rules for deterministic code review (v1.9.0)
- ✅ Per-phase telemetry: Detailed timing breakdowns for review pipeline optimization (v1.9.0)
Reactive Review Optimizations (v1.8.0)
Version 1.8.0 introduces massive performance improvements to the reactive code review system, reducing review times from 30-50 minutes to 3-15 seconds for typical PRs.
Optimization Stack
| Phase | Feature | Performance Gain | Description |
|---|---|---|---|
| Phase 1 | AI Agent Executor | 15-50x | Executes reviews directly via the AI agent instead of external API calls. |
| Phase 2 | Multi-Layer Cache | 2-4x (cached) | 3-layer system: Memory (fastest) -> Commit (git-aware) -> File Hash (content-based). |
| Phase 3 | Continuous Batching | 2-3x | Accumulates and processes multiple files in a single AI request. |
| Phase 4 | Worker Pool Optimization | 1.5-2x | CPU-aware parallel execution with intelligent load balancing. |
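The Phase 2 lookup order can be sketched as follows. This is a hypothetical illustration of the layering described above, not the server's actual cache code:

```javascript
const memoryCache = new Map(); // Layer 1: in-process, fastest
const commitCache = new Map(); // Layer 2: keyed by git commit + file path
const hashCache = new Map();   // Layer 3: keyed by file content hash

function cachedReview(path, commit, contentHash) {
  const key = `${commit}:${path}`;
  if (memoryCache.has(key)) return { layer: "memory", review: memoryCache.get(key) };
  for (const [layer, cache, k] of [
    ["commit", commitCache, key],
    ["file-hash", hashCache, contentHash],
  ]) {
    if (cache.has(k)) {
      const review = cache.get(k);
      memoryCache.set(key, review); // promote to the fastest layer
      return { layer, review };
    }
  }
  return { layer: null, review: null }; // miss: run the AI review and populate
}
```

A hit in a slower layer is promoted, so repeat lookups within a session stay in memory.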
Total Performance Improvement
| Scenario | v1.7.1 | v1.8.0 | Improvement |
|---|---|---|---|
| Cold Run (10 steps) | 30-50 min | ~60-90 sec | 25-45x |
| Cached Run | 30-50 min | ~10-30 sec | 60-180x |
| Batched Run | 30-50 min | ~5-15 sec | 120-360x |
| Full Optimization | 30-50 min | 3-10 sec | 180-600x |
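The headline multipliers are simple wall-clock ratios. For example, the 180-600x range in the full-optimization row follows from the conservative 30-minute baseline:

```javascript
// 30 min baseline against the 3-10 s fully optimized runs
const baselineSec = 30 * 60;        // 1800 s
const optimizedSec = [10, 3];       // worst and best optimized times
const speedups = optimizedSec.map((s) => baselineSec / s); // [180, 600]
```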
Static Analysis & Invariants (v1.9.0)
Version 1.9.0 introduces optional static analysis and deterministic invariants checking for enhanced code review capabilities.
Static Analysis Features
| Analyzer | Description | Enabled |
|---|---|---|
| TypeScript | Type checking via `tsc --noEmit` | Default |
| Semgrep | Pattern-based security/quality checks | Optional (requires installation) |
Usage
Enable Static Analysis in review_diff
```
review_diff({
  diff: "<unified diff>",
  changed_files: ["src/file.ts"],
  options: {
    enable_static_analysis: true,
    static_analyzers: ["tsc", "semgrep"],
    static_analysis_timeout_ms: 60000
  }
})
```
Run Static Analysis Standalone
```
run_static_analysis({
  changed_files: ["src/file.ts"],
  options: {
    analyzers: ["tsc", "semgrep"],
    timeout_ms: 60000,
    max_findings_per_analyzer: 20
  }
})
```
Check Custom Invariants
```
check_invariants({
  diff: "<unified diff>",
  changed_files: ["src/file.ts"],
  invariants_path: ".review-invariants.yml"
})
```
Invariants Configuration
Create .review-invariants.yml in your workspace root:
```yaml
invariants:
  - id: no-console-log
    pattern: "console\\.log"
    message: "Remove console.log statements before committing"
    severity: MEDIUM
  - id: no-todo-comments
    pattern: "TODO|FIXME"
    message: "Resolve TODO/FIXME comments"
    severity: LOW
  - id: require-error-handling
    pattern: "catch\\s*\\(\\s*\\)"
    message: "Empty catch blocks should log or handle errors"
    severity: HIGH
```
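A rule like `no-console-log` amounts to a regex scan over the added lines of a unified diff. The sketch below is a hypothetical illustration of that deterministic check, not the server's actual implementation:

```javascript
// Apply one invariant rule to the "+" lines of a unified diff (no LLM involved)
function checkInvariant(rule, diff) {
  const re = new RegExp(rule.pattern);
  const findings = [];
  diff.split("\n").forEach((line, i) => {
    // Only added lines count; skip the "+++ b/file" header line
    if (line.startsWith("+") && !line.startsWith("+++") && re.test(line)) {
      findings.push({ id: rule.id, severity: rule.severity, diffLine: i + 1 });
    }
  });
  return findings;
}

const rule = { id: "no-console-log", pattern: "console\\.log", severity: "MEDIUM" };
const diff = [
  "+++ b/src/app.ts",
  "+console.log('debug');",
  "-console.log('old');",
].join("\n");
// checkInvariant(rule, diff) flags only the added console.log line
```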
Benefits
- ✅ Deterministic: No LLM required for invariants/static analysis
- ✅ Fast: Local execution, no API calls
- ✅ CI-friendly: Structured JSON output suitable for CI/CD pipelines
- ✅ Customizable: YAML-based rules, configurable analyzers
- ✅ Opt-in: Disabled by default, enable as needed
Per-Phase Telemetry
The review_diff tool now reports detailed timing breakdowns in stats.timings_ms:
```json
{
  "stats": {
    "timings_ms": {
      "preflight": 45,
      "invariants": 12,
      "static_analysis": 3200,
      "context_fetch": 890,
      "secrets_scrub": 5,
      "llm_structural": 1200,
      "llm_detailed": 2400
    }
  }
}
```
This allows you to:
- Identify performance bottlenecks in the review pipeline
- Optimize timeout settings for your workflow
- Monitor static analysis overhead
- Track LLM usage patterns
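For example, a small helper can total the phases and surface the slowest one (here, `static_analysis` dominates the sample payload above):

```javascript
// Summarize a stats.timings_ms payload: total time and the slowest phase
function summarizeTimings(timingsMs) {
  const entries = Object.entries(timingsMs);
  const totalMs = entries.reduce((sum, [, ms]) => sum + ms, 0);
  const [slowestPhase, slowestMs] = entries.reduce((a, b) => (b[1] > a[1] ? b : a));
  return { totalMs, slowestPhase, slowestMs };
}

const sample = {
  preflight: 45, invariants: 12, static_analysis: 3200, context_fetch: 890,
  secrets_scrub: 5, llm_structural: 1200, llm_detailed: 2400,
};
// summarizeTimings(sample) → { totalMs: 7752, slowestPhase: "static_analysis", slowestMs: 3200 }
```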
Planning Workflow (v1.4.0+)
The Context Engine now includes a complete planning and execution system:
1. Create a Plan
```
create_plan({
  task: "Implement user authentication with JWT tokens",
  generate_diagrams: true
})
```
2. Save the Plan
```
save_plan({
  plan: "<plan JSON>",
  name: "JWT Authentication",
  tags: ["auth", "security"]
})
```
3. Execute Step-by-Step
```
// Start a step
start_step({ plan_id: "plan_abc123", step_number: 1 })

// Complete it
complete_step({
  plan_id: "plan_abc123",
  step_number: 1,
  notes: "Created User model"
})

// Check progress
view_progress({ plan_id: "plan_abc123" })
```
4. Track History
```
// View version history
view_history({ plan_id: "plan_abc123" })

// Compare versions
compare_plan_versions({
  plan_id: "plan_abc123",
  from_version: 1,
  to_version: 2
})

// Rollback if needed
rollback_plan({ plan_id: "plan_abc123", version: 1 })
```
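Assuming the plan JSON exposes numbered steps, a client could drive the whole execution loop like this (a sketch, not a verbatim API):

```
for (const step of plan.steps) {
  start_step({ plan_id, step_number: step.number })
  // ... implement the step ...
  complete_step({ plan_id, step_number: step.number })
}
view_progress({ plan_id })
```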
See EXAMPLES.md for complete planning workflow examples.
Memory System (v1.4.1)
The Context Engine includes a cross-session memory system that persists preferences, decisions, and project facts across sessions.
Memory Categories
| Category | Purpose | Examples |
|---|---|---|
| `preferences` | Coding style and tool preferences | "Prefer TypeScript strict mode", "Use Jest for testing" |
| `decisions` | Architecture and design decisions | "Chose JWT over sessions", "Using PostgreSQL" |
| `facts` | Project facts and environment info | "API runs on port 3000", "Uses monorepo structure" |
Adding Memories
```
// Store a preference
add_memory({
  category: "preferences",
  content: "Prefers functional programming patterns over OOP"
})

// Store an architecture decision with a title
add_memory({
  category: "decisions",
  title: "Authentication Strategy",
  content: "Chose JWT with refresh tokens for stateless authentication. Sessions were considered but rejected due to horizontal scaling requirements."
})

// Store a project fact
add_memory({
  category: "facts",
  content: "The API uses PostgreSQL 15 with pgvector extension for embeddings"
})
```
Automatic Memory Retrieval
Memories are automatically included in get_context_for_prompt results when relevant:
```
// Memories are retrieved alongside code context
const context = await get_context_for_prompt({
  query: "How should I implement authentication?"
})
// Returns: code context + relevant memories about auth decisions
```
Memory Files
Memories are stored in .memories/ as markdown files:
- `preferences.md` - Coding style preferences
- `decisions.md` - Architecture decisions
- `facts.md` - Project facts
These files are human-editable and can be version controlled with Git.
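As an illustration only (the exact on-disk layout is an assumption, not documented above), `.memories/decisions.md` might read:

```markdown
# Decisions

## Authentication Strategy
Chose JWT with refresh tokens for stateless authentication.
```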
Prerequisites
- Node.js 18+
- Auggie CLI - install globally: `npm install -g @augmentcode/auggie`
- Authentication - run `auggie login` or set environment variables:

  ```shell
  export AUGMENT_API_TOKEN="your-token"
  export AUGMENT_API_URL="https://api.augmentcode.com"
  ```
Installation
```shell
# Clone or navigate to the repository
cd context-engine

# Install dependencies
npm install

# Build the project
npm run build
```
Usage
Standalone Mode
Using the Management Script (Windows)
For Windows users, a convenient batch file is provided for managing the server:
```shell
# Start the server with indexing and file watching
manage-server.bat start

# Check server status
manage-server.bat status

# Restart the server
manage-server.bat restart

# Stop the server
manage-server.bat stop
```
The management script automatically:
- Uses the current directory as the workspace
- Enables indexing (`--index`)
- Enables file watching (`--watch`)
- Logs output to `.server.log`
- Tracks the process ID in `.server.pid`
Manual Start (All Platforms)
```shell
# Start server with current directory
node dist/index.js

# Start with specific workspace
node dist/index.js --workspace /path/to/project

# Index workspace before starting
node dist/index.js --workspace /path/to/project --index

# Enable file watcher for automatic incremental indexing (v1.1.0)
node dist/index.js --workspace /path/to/project --watch
```
CLI Options
| Option | Alias | Description |
|---|---|---|
| `--workspace <path>` | `-w` | Workspace directory to index (default: current directory) |
| `--index` | `-i` | Index the workspace before starting the server |
| `--watch` | `-W` | Enable filesystem watcher for incremental indexing |
| `--http` | - | Enable HTTP server (in addition to stdio) |
| `--http-only` | - | Enable HTTP server only (for VS Code integration) |
| `--port <port>` | `-p` | HTTP server port (default: 3333) |
| `--help` | `-h` | Show help message |
With Codex CLI
1. Build the project: `npm run build`
2. Add the MCP server to Codex CLI:

   ```shell
   codex mcp add context-engine -- node /absolute/path/to/context-engine/dist/index.js --workspace /path/to/your/project
   ```

   Or edit `~/.codex/config.toml` directly:

   ```toml
   [mcp_servers.context-engine]
   command = "node"
   args = [
     "/absolute/path/to/context-engine/dist/index.js",
     "--workspace",
     "/path/to/your/project"
   ]
   ```

3. Restart Codex CLI
4. Type `/mcp` in the TUI to verify the server is connected
With Other MCP Clients (Antigravity, Claude Desktop, Cursor)
For other MCP clients, add this server to your client's MCP configuration:
```json
{
  "mcpServers": {
    "context-engine": {
      "command": "node",
      "args": [
        "/absolute/path/to/context-engine/dist/index.js",
        "--workspace",
        "/path/to/your/project"
      ]
    }
  }
}
```
See QUICKSTART.md - Step 5B for detailed instructions for each client.
Development
```shell
# Watch mode for development
npm run dev

# Build for production
npm run build

# Run the server
npm start
```
Project Structure
```
context-engine/
├── src/
│   ├── index.ts                 # Entry point with CLI parsing
│   ├── mcp/
│   │   ├── server.ts            # MCP server implementation
│   │   ├── serviceClient.ts     # Context service layer
│   │   ├── tools/
│   │   │   ├── index.ts         # index_workspace tool
│   │   │   ├── search.ts        # semantic_search tool
│   │   │   ├── file.ts          # get_file tool
│   │   │   ├── context.ts       # get_context_for_prompt tool
│   │   │   ├── enhance.ts       # enhance_prompt tool
│   │   │   ├── status.ts        # index_status tool (v1.1.0)
│   │   │   ├── lifecycle.ts     # reindex/clear tools (v1.1.0)
│   │   │   ├── manifest.ts      # tool_manifest tool (v1.1.0)
│   │   │   ├── plan.ts          # Planning tools (v1.4.0)
│   │   │   └── planManagement.ts # Plan persistence/workflow tools (v1.4.0)
│   │   ├── services/            # Business logic services (v1.4.0)
│   │   │   ├── planningService.ts          # Plan generation, DAG analysis
│   │   │   ├── planPersistenceService.ts   # Save/load/list plans
│   │   │   ├── approvalWorkflowService.ts  # Approval request handling
│   │   │   ├── executionTrackingService.ts # Step progress tracking
│   │   │   └── planHistoryService.ts       # Version history, rollback
│   │   ├── types/               # TypeScript type definitions (v1.4.0)
│   │   │   └── planning.ts      # Planning-related types
│   │   └── prompts/             # AI prompt templates (v1.4.0)
│   │       └── planning.ts      # Planning system prompts
│   ├── watcher/                 # File watching (v1.1.0)
│   │   ├── FileWatcher.ts       # Core watcher logic
│   │   ├── types.ts             # Event types
│   │   └── index.ts             # Exports
│   └── worker/                  # Background indexing (v1.1.0)
│       ├── IndexWorker.ts       # Worker thread
│       └── messages.ts          # IPC messages
├── tests/                       # Unit tests (186 tests)
├── plan.md                      # Architecture documentation
├── package.json
├── tsconfig.json
└── README.md
```
Example Usage
Once connected to Codex CLI, you can use natural language:
- "Search for authentication logic in the codebase"
- "Show me the database schema files"
- "Get context about the API endpoints"
- "Find error handling patterns"
The server will automatically use the appropriate tools to provide relevant context.
Environment Variables
| Variable | Description | Default |
|---|---|---|
| `AUGMENT_API_TOKEN` | Auggie API token (or use `auggie login`) | - |
| `AUGMENT_API_URL` | Auggie API URL | `https://api.augmentcode.com` |
| `CONTEXT_ENGINE_OFFLINE_ONLY` | Enforce offline-only policy (v1.1.0) | `false` |
| `REACTIVE_ENABLED` | Enable reactive review features | `false` |
| `REACTIVE_USE_AI_AGENT_EXECUTOR` | Use local AI agent for reviews (Phase 1) | `false` |
| `REACTIVE_ENABLE_MULTILAYER_CACHE` | Enable 3-layer caching (Phase 2) | `false` |
| `REACTIVE_ENABLE_BATCHING` | Enable request batching (Phase 3) | `false` |
| `REACTIVE_OPTIMIZE_WORKERS` | Enable CPU-aware worker optimization (Phase 4) | `false` |
| `REACTIVE_PARALLEL_EXEC` | Enable concurrent worker execution | `false` |
| `CE_INDEX_STATE_STORE` | Persist per-file index hashes to `.augment-index-state.json` | `false` |
| `CE_SKIP_UNCHANGED_INDEXING` | Skip re-indexing unchanged files (requires `CE_INDEX_STATE_STORE=true`) | `false` |
| `CE_HASH_NORMALIZE_EOL` | Normalize CRLF/LF when hashing (recommended with the state store across Windows/Linux) | `false` |
| `CE_METRICS` | Enable in-process metrics collection (Prometheus format) | `false` |
| `CE_HTTP_METRICS` | Expose `GET /metrics` when running with `--http` | `false` |
| `CE_AI_REQUEST_TIMEOUT_MS` | Default timeout for AI calls (`searchAndAsk`) in milliseconds | `120000` |
| `CE_SEARCH_AND_ASK_QUEUE_MAX` | Max queued `searchAndAsk` requests before rejecting (0 = unlimited) | `50` |
| `CE_TSC_INCREMENTAL` | Enable incremental `tsc` runs for static analysis | `true` |
| `CE_TSC_BUILDINFO_DIR` | Directory to store `tsbuildinfo` cache (defaults to OS temp) | (OS temp) |
| `CE_SEMGREP_MAX_FILES` | Max files per semgrep invocation before chunking | `100` |
| `CE_PLAN_AI_REQUEST_TIMEOUT_MS` | Timeout for planning AI calls in milliseconds (`create_plan`, `refine_plan`, step execution) | `300000` |
| `CE_HTTP_PLAN_TIMEOUT_MS` | HTTP `POST /api/v1/plan` request timeout in milliseconds | `360000` |
Metrics (optional)
To expose a Prometheus-style endpoint, start the server in HTTP mode and enable both flags:
```shell
export CE_METRICS=true
export CE_HTTP_METRICS=true
node dist/index.js --workspace /path/to/project --http --port 3333
```

Then fetch:

```shell
curl http://localhost:3333/metrics
```
Notes:
- Metrics are intended to use low-cardinality labels (avoid per-query/per-path labels).
- The in-process registry caps total series to prevent unbounded memory growth.
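The endpoint serves the standard Prometheus text format, which is easy to consume programmatically. A minimal parser sketch (the metric name below is made up for illustration):

```javascript
// Parse Prometheus text-format samples into a { "name{labels}": value } map
function parseMetrics(text) {
  const samples = {};
  for (const line of text.split("\n")) {
    if (!line || line.startsWith("#")) continue; // skip HELP/TYPE comment lines
    const idx = line.lastIndexOf(" ");           // value follows the last space
    samples[line.slice(0, idx)] = Number(line.slice(idx + 1));
  }
  return samples;
}

const body = [
  "# HELP tool_calls_total Total tool invocations",
  "# TYPE tool_calls_total counter",
  'tool_calls_total{tool="semantic_search"} 42',
].join("\n");
```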
Offline-Only Mode (v1.1.0)
To enforce that no data is sent to remote APIs, set:
```shell
export CONTEXT_ENGINE_OFFLINE_ONLY=true
```
When enabled, the server will fail to start if a remote API URL is configured. This is useful for enterprise environments with strict data locality requirements.
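The startup guard this policy implies can be sketched as follows; the server's actual check may differ in detail:

```javascript
// Fail fast when offline-only mode is on but a remote API URL is configured
function assertOfflinePolicy(env) {
  if (env.CONTEXT_ENGINE_OFFLINE_ONLY !== "true") return; // policy disabled
  const url = env.AUGMENT_API_URL;
  if (!url) return; // no remote endpoint configured
  const host = new URL(url).hostname;
  if (host !== "localhost" && host !== "127.0.0.1") {
    throw new Error(`offline-only mode: refusing remote API URL ${url}`);
  }
}
```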
Troubleshooting
Server not showing up in Codex CLI
- Check `~/.codex/config.toml` for syntax errors
- Ensure paths are absolute
- Restart Codex CLI
- Run `codex mcp list` to see configured servers
- Use the `/mcp` command in the TUI to check connection status
Authentication errors
Run `auggie login` or verify that environment variables are set correctly.
No search results
Index your workspace first:
```shell
node dist/index.js --workspace /path/to/project --index
```
File watcher not detecting changes (v1.1.0)
- Ensure you started the server with the `--watch` flag
- Check that the file is not in `.gitignore` or `.contextignore`
- Wait for the debounce period (default: 500ms) after the last change
- Check server logs for watcher status messages
Offline-only mode blocking startup (v1.1.0)
If you see an error about offline-only mode:
- Remove the `CONTEXT_ENGINE_OFFLINE_ONLY` environment variable, or
- Configure a localhost API URL in `AUGMENT_API_URL`
Tool timeout errors during plan generation (v1.4.0)
The create_plan tool can take longer than default MCP client timeouts for complex tasks. If you experience timeout errors, increase the timeout in your MCP client configuration:
For Codex CLI
Edit ~/.codex/config.toml and add or modify the tool_timeout_sec setting under the [mcp_servers.context-engine] section:
```toml
[mcp_servers.context-engine]
command = "node"
args = ["/absolute/path/to/context-engine/dist/index.js", "--workspace", "/path/to/your/project"]
tool_timeout_sec = 600  # 10 minutes for complex planning tasks
```
For Other MCP Clients
Consult your client's documentation for timeout configuration. Common locations:
- Claude Desktop: `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows)
- Cursor: `.cursor/mcp.json` in your workspace
- Antigravity: check client-specific configuration files
Add a timeout setting appropriate for your client's configuration format. A value of 600 seconds (10 minutes) is recommended for complex planning tasks.
Testing
```shell
# Run all tests
npm test

# Quieter ESM run (use if you see pipe/stream errors)
node --experimental-vm-modules node_modules/jest/bin/jest.js --runInBand --silent

# Run tests in watch mode
npm run test:watch

# Run tests with coverage
npm run test:coverage

# Interactive MCP testing
npm run inspector
```
Test Status: 397 tests passing (100% completion) ✅
License
MIT