Cortex MCP Server


Cortex MCP provides AI agents with long-term portfolio memory by transforming project history and developer decisions into a structured, queryable knowledge graph. It enables assistants to maintain context across different repositories by exposing historical patterns, architectural decisions, and technology preferences through the Model Context Protocol.


Portfolio memory for AI agents.
Transforms your real project history into structured context that any AI assistant can query in real time.

License: MIT · Node.js 20+ · TypeScript strict · CI

⚠️ Status: Active MVP — functional and tested, but under active development. Feedback and contributions are welcome.



What It Is

Cortex MCP is a local server that implements the Model Context Protocol (MCP) — the open standard that lets AI assistants safely access external data.

It reads synthesized knowledge about your projects (a lightweight adjacency knowledge graph mapping relations between apps, technologies, and domains; patterns; observations; developer profile) and exposes it as tools, resources, and prompts consumable by any MCP-compatible agent.
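Since the graph is a lightweight adjacency structure, a one-hop query reduces to filtering relations by their source entity. A minimal TypeScript sketch, assuming illustrative entity and relation shapes (the real schema lives in knowledge-graph.yaml and may differ):

```typescript
// Hypothetical shapes for the adjacency knowledge graph.
type EntityKind = "app" | "technology" | "domain";

interface Entity {
  id: string;
  kind: EntityKind;
}

interface Relation {
  from: string; // source entity id
  to: string;   // target entity id
  type: string; // e.g. "uses", "belongs-to" (names are assumptions)
}

// One-hop adjacency lookup: which entities does `id` connect to?
function neighbors(relations: Relation[], id: string): string[] {
  return relations.filter((r) => r.from === id).map((r) => r.to);
}

const relations: Relation[] = [
  { from: "my-api", to: "fastapi", type: "uses" },
  { from: "my-api", to: "fintech", type: "belongs-to" },
];

console.log(neighbors(relations, "my-api")); // -> ["fastapi", "fintech"]
```

Deeper queries (the `query_graph` tool) are just repeated applications of this lookup, which is why no database is needed.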

The Problem

When you open Claude, Copilot, or Cursor in a project, the agent does not know:

  • What other projects you have and how they connect
  • Which stack you prefer and why
  • Which patterns you use repeatedly
  • Which components could be reused
  • Your real experience with each technology

Every session starts from scratch — amnesiac pair programming.

The Solution

Cortex gives AI agents the same long-term memory you have as a developer. It accumulates knowledge across repositories and makes it instantly accessible in every session.

"AI is your mirror — it reveals who you are faster. If you are incompetent, it will produce bad things faster. If you are competent, it will produce good things faster." — Akita

Cortex works like the "CLAUDE.md" of your entire portfolio — not of one project, but of your whole career.

| Problem | Without Cortex | With Cortex |
|---|---|---|
| Agent suggests a stack | Generic, based on popularity | Based on your real history |
| Agent solves a problem | Standard solution, may reinvent the wheel | "You already did this in X, here it is" |
| Architecture decision | Over-engineering (agent never says no) | "Your pattern is to simplify, see Y" |
| Context lost between sessions | Starts from zero every time | Accumulates decisions, patterns, pitfalls |

Complete Privacy

100% local, no cloud, no telemetry. Your data never leaves your machine.


How It Works — The Three Layers

Read this section before using. Most users only use Layer 1 and find the system "shallow". The real value is in Layers 2 and 3.

Cortex does not learn passively. It is a structured knowledge repository — the more you feed it, the more useful it becomes. The right mental model: a portfolio wiki that AI agents query in real time.

Layer 1 — Automatic Scanner (~20% of value)

cortex-mcp scan automatically detects:

  • Technology stack (language, frameworks, databases, CI, Docker)
  • Commit frequency and contributor history
  • Recurring patterns across repositories
  • Initial operator profile

Important limitation: the scanner only sees what is in the code and git history. It does not know why you chose a technology, what problems you encountered, or what you learned. A project with 1 commit (shallow clone) generates a very poor profile.

Layer 2 — Curation via MCP Tools (~60% of value)

The real power of Cortex is fed by you during work sessions:

| Tool | When to use | Example |
|---|---|---|
| add_observation | When you learn something relevant, find a pitfall, measure a metric | "Accuracy bottleneck is dataset size, not architecture" |
| track_decision | Every architecture or product decision with rationale | "Chose REST over GraphQL because it is simple CRUD" |
| add_pattern | When you identify something you repeat across projects | "FastAPI + PostgreSQL + Redis for APIs with caching" |
| start_session | When starting work on any project | Automatically injects historical context |
| end_session | When finishing, with summary and next steps | Accumulates progression between sessions |
| add_skill | For prompt templates you reuse | "How to do a code review in Python projects" |

⚠️ The real challenge: Layer 2 is where 60% of the value lives, but it requires actively remembering to call start_session, track_decision, and end_session every day.

Most developers will not sustain this habit without intentional effort. The gap between "installed" and "actually useful" is real — Cortex is only as good as the discipline you bring to curation.

The roadmap item "Dynamic prompts based on session history" points toward closing this gap, but it has not shipped yet. In the meantime: set up reminders, or make these calls part of your team's Definition of Done.

Layer 3 — Operator Profile (~20% of value)

The file knowledge/operator-profile.yaml is the developer's personal context. It is generated automatically by the scanner but requires manual curation to have real substance. Edit it directly:

identity:
  name: Your Name
  role: Your Role
  domain: Your areas of expertise
  github: https://github.com/your-username

Third-Party Repositories

You can add third-party repositories (public or private) as sources for analysis and comparison.

  • Cortex incorporates these signals into your local context (stack, patterns, conventions, relations).
  • This makes suggestions more contextualized, not "smarter" by magic.
  • Use with judgment: respect licenses, confidentiality, and the separation between your own portfolio and external references.
  • To avoid confusion, name external sources explicitly (e.g., --name ref-repo-name).

Quick Start

Prerequisites

  • Node.js 20+
  • Git

Install and Build

git clone https://github.com/BUGG1N/cortex-mcp.git cortex-mcp
cd cortex-mcp
npm install
npm run build

Generate Knowledge

# 1. Initialize the knowledge directory
npx cortex-mcp init

# 2. Add repositories (local, public GitHub, or private with token)
npx cortex-mcp add ./my-project
npx cortex-mcp add https://github.com/user/public-repo
npx cortex-mcp add https://github.com/user/private-repo --token ghp_xxx

# 3. Scan everything → generates knowledge files automatically
npx cortex-mcp scan

# 4. Start the MCP server
npx cortex-mcp

Security: the token is only used to authenticate Git operations and is not written into the remote URL of the cloned repository.

⚠️ Repositories cloned from a URL arrive with a shallow history. This causes the scanner to report 1 commit and degrades the quality of the generated profile. To fix:

cd repos/repo-name && git fetch --unshallow
cd ../.. && npx cortex-mcp scan

Configure Your Profile (recommended)

After the first scan, edit knowledge/operator-profile.yaml to add real context:

identity:
  name: Your Real Name
  role: Your Role (e.g. Founder CTO, Senior Engineer)
  domain: Your domains (e.g. fintech, IoT, healthcare)
  github: https://github.com/your-username

Without this, the profile defaults to name: Developer with expertise inferred only by commit volume.

Enrich With Real Context (where the value is)

The scanner generates a starting point. Useful knowledge comes from curation — via MCP tools with the agent connected:

# Examples of prompts that feed Cortex automatically:
"Record that I chose FastAPI over Flask because I needed native async"
"Add an observation to project-x that the accuracy bottleneck is dataset size, not architecture"
"Start a work session on project-y focused on Sprint 0"

As the agent executes these actions, Cortex persists the knowledge in YAML/JSONL files and makes it available in all future sessions.
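A sketch of what that persistence step could look like for observations.jsonl: one self-contained JSON object per line. The field names below are assumptions for illustration, not the exact Cortex schema:

```typescript
// Illustrative record shape; the real schema may differ.
interface Observation {
  app: string;
  type: "observation" | "decision";
  text: string;
  rationale?: string;
  at: string; // ISO timestamp
}

// JSONL: serialize one record per newline-terminated line.
function toJsonlLine(obs: Observation): string {
  return JSON.stringify(obs) + "\n";
}

const line = toJsonlLine({
  app: "project-x",
  type: "decision",
  text: "Chose FastAPI over Flask",
  rationale: "needed native async",
  at: "2025-01-01T12:00:00Z",
});
// appendFileSync("knowledge/observations.jsonl", line) would persist it.
```

Because each line is independent, appending is safe and the file stays human-diffable in git.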

Connect to Claude Desktop

Add to claude_desktop_config.json:

{
  "mcpServers": {
    "cortex": {
      "command": "node",
      "args": ["/absolute/path/to/cortex-mcp/dist/cli.js"]
    }
  }
}

Windows:

{
  "mcpServers": {
    "cortex": {
      "command": "node",
      "args": ["C:\\dev\\cortex-mcp\\dist\\cli.js"]
    }
  }
}

macOS/Linux:

{
  "mcpServers": {
    "cortex": {
      "command": "node",
      "args": ["/home/user/cortex-mcp/dist/cli.js"]
    }
  }
}

Restart Claude Desktop. You will see "cortex" available with tools and resources.

Connect to VS Code (GitHub Copilot)

Option 1 — Workspace only

In .vscode/mcp.json at the Cortex project root:

{
  "servers": {
    "cortex": {
      "command": "node",
      "args": ["<path>/dist/cli.js"]
    }
  }
}

Option 2 — Global (recommended for portfolios)

To use Cortex in any VS Code window (e.g., open another project and still have access to the full knowledge base), create the global MCP config file:

File location:

  • Windows: %APPDATA%\Code\User\mcp.json
  • macOS: ~/Library/Application Support/Code/User/mcp.json
  • Linux: ~/.config/Code/User/mcp.json
{
  "servers": {
    "cortex": {
      "command": "node",
      "args": ["<path>/dist/cli.js"],
      "type": "stdio",
      "env": {
        "CORTEX_ROOT": "<path>"
      }
    }
  }
}

⚠️ CORTEX_ROOT is required in the global configuration. Without it, Cortex tries to find the root from the current working directory (cwd). When VS Code opens another project, the cwd is that project — and Cortex cannot find the knowledge/ folder, returning empty results. CORTEX_ROOT resolves this by explicitly pointing to the directory where data is stored.

Replace <path> with the absolute path to the Cortex root (e.g., C:\\dev\\CORTEX on Windows, /home/user/cortex on Linux).
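The resolution order described above can be sketched as a pure function: the environment variable wins, otherwise fall back to the current working directory. This is illustrative only; the real logic lives in src/config.ts:

```typescript
// CORTEX_ROOT takes precedence over the cwd-based fallback.
function resolveRoot(
  env: Record<string, string | undefined>,
  cwd: string
): string {
  const fromEnv = env.CORTEX_ROOT;
  return fromEnv && fromEnv.length > 0 ? fromEnv : cwd;
}

// With the global VS Code config, the knowledge root is found even
// when the editor's cwd is some unrelated project:
resolveRoot({ CORTEX_ROOT: "/home/user/cortex" }, "/tmp/other-project");
// -> "/home/user/cortex"
```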


Daily Workflow

Cortex is most useful when integrated into the natural development rhythm, not only used during initial setup.

Starting work on a project

[in the agent chat, Agent mode]
"Start a session on project X focused on [goal]"

start_session automatically injects: project stack, previous observations, relevant patterns, last recorded decisions, and operator context.

During work

"Record that I found an N+1 query problem in the /observations endpoint"
"Add the decision to use Alembic instead of manual migrations, reason: traceability"
"Note that the current model accuracy is below the target threshold — the bottleneck is dataset size, not architecture"

When finishing

"End the session with summary: [what was done], decisions: [list], next steps: [list]"

Returning after days

"What do you know about project X?" → the agent queries Cortex and summarizes current state
"What were the last decisions on project Y?" → returns curated decisions log
"What is my pattern for Python APIs?" → returns from the knowledge store

Periodic maintenance (weekly/biweekly)

# Update GitHub sources
npx cortex-mcp sync

# Re-scan after stack changes
npx cortex-mcp scan

# Diagnostics if something seems wrong
npx cortex-mcp doctor

CLI

cortex-mcp [command] [options]

Commands:
  init                    Initialize knowledge directory with empty files
  add <source>            Add a repository source (local path or GitHub URL)
  scan                    Scan all sources and update the knowledge base
  sync                    Update GitHub sources (pull latest)
  sources                 List all configured sources
  remove <name>           Remove a source
  serve                   Start the MCP server (default if no command given)
  doctor                  Diagnose the environment and knowledge files

Adding sources:
  cortex-mcp add ./local/path                  Local directory
  cortex-mcp add https://github.com/u/repo     Public GitHub repo (auto-clones)
  cortex-mcp add https://github.com/u/repo --token ghp_xxx   Private repo
  cortex-mcp add <source> --name my-name       Custom name
  cortex-mcp add <source> --branch develop     Specific branch

Options:
  --root <path>   Root directory (auto-detected from cwd)
  --no-watch      Disable file watching for hot-reload
  --help, -h      Show help
  --version, -v   Show version

Environment variables:

  • CORTEX_ROOT — overrides root directory
  • CORTEX_KNOWLEDGE_PATH — overrides knowledge files path
  • GITHUB_TOKEN — GitHub token for private repositories (alternative to --token)

What It Exposes via MCP

Tools

Read tools are used automatically by the agent when answering questions. Curation tools are how knowledge is accumulated — each call persists something to YAML/JSONL files.

Reading and Querying

| Tool | What it does |
|---|---|
| search_portfolio | Full-text search across projects, technologies, and patterns |
| get_app_context | Complete context for an app: stack, patterns, connections |
| query_graph | Navigate the knowledge graph from any entity |
| who_uses | List apps that use a given technology |
| find_similar_apps | Find similar apps by stack overlap |
| get_portfolio_overview | Overview: profile, stats, top techs, distribution |
| find_patterns | Architectural and workflow patterns identified |
| get_conventions | Code conventions for a specific context |
| find_reusable | Find reusable components/projects |
| get_tech_radar | Technology radar: adopt / experiment / assess / hold |
| suggest_stack | Suggest a stack for a new project with reasoning and risk analysis |
| run_health | Health check of the knowledge base |
| get_portfolio_diff | Portfolio changes in the last N days |
| compare_stacks | Compare stacks between two projects |
| get_module_map | Intra-repository import/module map |
| export_context_bundle | Export portfolio as a single bundle for onboarding |
| get_file | Read a file from a portfolio repository |
| grep_codebase | Regex search in a repository's code |
| get_file_tree | Directory structure of a repository |
| list_skills | List available skills/prompts (built-in + custom) |
| get_skill | Return a full skill |
| invoke_skill | Execute a skill with context injection |
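On the wire, an MCP client invokes one of these tools as a JSON-RPC 2.0 `tools/call` request over stdio. A sketch of such a request, where `who_uses` is a real tool from the table above but the argument field name is an assumption:

```typescript
// One JSON-RPC 2.0 request as the MCP stdio transport would carry it.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "who_uses",
    // Argument key is illustrative; the tool's actual schema may differ.
    arguments: { technology: "redis" },
  },
};

// Stdio transport: one JSON message per line on stdout.
console.log(JSON.stringify(request));
```

In practice the agent builds these requests for you; this is only what "tools are exposed via MCP" means concretely.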

Curation and Knowledge Accumulation (use actively — this is where the real value is)

| Tool | What it persists | When to use |
|---|---|---|
| add_observation | Observation in observations.jsonl | Learnings, real metrics, pitfalls encountered |
| track_decision | Decision with rationale in observations.jsonl | Every architecture, stack, or product choice |
| add_pattern | Reusable pattern in patterns.yaml | When you identify something you repeat across projects |
| update_app_status | Status/health in registry.yaml | After significant project state changes |
| start_session | Opens session in sessions.jsonl with injected context | When starting any work session |
| end_session | Closes session with summary and next steps | When finishing — do not skip this step |
| add_skill | Prompt template in skills.yaml | For prompts you reuse frequently |

Resources

| URI | Content |
|---|---|
| cortex://portfolio | Complete portfolio overview (JSON) |
| cortex://graph | Full knowledge graph |
| cortex://registry | All apps with metadata |
| cortex://patterns | All identified patterns |
| cortex://profile | Developer profile |
| cortex://stats | Quick numeric statistics |
| cortex://app/{id} | Complete context for any app |
| cortex://app/{id}/file/{path} | A repository file accessed via URI |
| cortex://sessions/{appId} | Session history for an app |
| cortex://skills | List all skills |
| cortex://skill/{id} | Content of a specific skill |

Prompts

| Prompt | Function |
|---|---|
| session-context | Session bootstrap — injects profile, app, patterns, conventions |
| code-review | Code review with stack and pattern awareness |
| new-project | Plan a new project based on portfolio history |

Knowledge Store

Cortex reads these files from the knowledge/ directory. Each file has an origin and an expected utility level:

| File | Generated by | Curated by you | Content |
|---|---|---|---|
| knowledge-graph.yaml | auto scan | Not required | Entities + relations (apps, techs, domains) |
| registry.yaml | auto scan | update_app_status | App registry with metadata (stack, health, status) |
| operator-profile.yaml | auto scan | Manual editing recommended | Developer profile (name, domain, expertise) |
| patterns.yaml | scan (partial) | add_pattern | Recurring architectural and workflow patterns |
| observations.jsonl | Never automatic | add_observation, track_decision | Observations, decisions, pitfalls — the most valuable file |
| sessions.jsonl | Never automatic | start_session / end_session | Session history per app |
| skills.yaml | Never automatic | add_skill | User-defined skills/prompts |
| sources.yaml | add + scan | Not required | Registered sources (local paths, GitHub URLs) |

observations.jsonl is the most important file and the only one that is never populated automatically. A portfolio without observations only has detected stack — no memory of why things were done that way.

operator-profile.yaml is generated with name: Developer and expertise inferred by commit volume. For real substance, edit it manually adding name, domain, and personal context.
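Consuming a JSONL file like observations.jsonl is deliberately simple: each line is an independent JSON record, so a reader is just split-and-parse. A minimal sketch (the record fields shown are assumptions, not the exact schema):

```typescript
// Parse JSONL text into records, skipping blank lines.
function parseJsonl(text: string): Array<Record<string, unknown>> {
  return text
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}

const sample =
  '{"app":"project-x","text":"N+1 query in /observations"}\n' +
  '{"app":"project-y","text":"Alembic over manual migrations"}\n';

// Filter the memory down to one project, as get_app_context would.
const forX = parseJsonl(sample).filter((r) => r.app === "project-x");
// forX.length === 1
```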

Example Data

The examples/knowledge/ directory contains a complete example portfolio with 4 apps, 14 technologies, 24 relations, 6 patterns, and 7 observations. Use it as a reference to understand the file structure or as a starting point for your own portfolio.


Architecture

cortex-mcp/
├── src/
│   ├── cli.ts                  # CLI entry point (init, add, scan, sync, serve)
│   ├── config.ts               # Config resolution (flags → yaml → env → defaults)
│   ├── index.ts                # Public API exports
│   ├── server.ts               # MCP server (stdio transport)
│   ├── types.ts                # Core TypeScript types
│   ├── engine/
│   │   ├── knowledge-engine.ts # Orchestrator — load, index, query
│   │   ├── yaml-parser.ts      # YAML/JSONL parsers
│   │   ├── search-index.ts     # MiniSearch full-text index
│   │   ├── graph-traversal.ts  # BFS/DFS graph queries
│   │   ├── file-reader.ts      # Repository file reader
│   │   └── file-watcher.ts     # Hot-reload via Chokidar
│   ├── mcp/
│   │   ├── tools/              # MCP tools
│   │   ├── resources/          # MCP resources
│   │   └── prompts/            # MCP prompts
│   ├── scanner/
│   │   ├── index.ts            # Scanner orchestrator
│   │   └── detectors/          # Stack auto-detection
│   │       ├── package-json.ts # Node.js / TypeScript
│   │       ├── python.ts       # Python (pip, poetry, pipenv)
│   │       ├── java.ts         # Java (Maven, Gradle)
│   │       ├── dotnet.ts       # .NET / C#
│   │       ├── go.ts           # Go (go.mod)
│   │       ├── rust.ts         # Rust (Cargo.toml)
│   │       ├── infra.ts        # Terraform, Kubernetes, Helm
│   │       ├── docker.ts       # Docker / containerization
│   │       ├── ci.ts           # CI/CD (GitHub Actions, GitLab, Jenkins)
│   │       └── git.ts          # Git metadata (commits, contributors)
│   ├── sources/
│   │   └── index.ts            # Source manager (local, GitHub)
│   └── writer/
│       └── index.ts            # Knowledge writer (YAML/JSONL persistence)
├── test/                       # 35 test files, 90 tests
├── package.json
├── tsconfig.json
└── tsup.config.ts

Design Principles

  1. Zero heavy dependencies — No database, no Docker, no cloud. Reads local files, serves via stdio.
  2. Synthesized knowledge, not raw code — The agent receives patterns, relations, and decisions — not 10k lines of code.
  3. Generalizable — Works with any collection of repositories, not tied to a specific portfolio.
  4. Fast — All data loaded in-memory at startup. Typical response < 100ms.
  5. Privacy-first — All data stays on your machine. Nothing is sent externally.

Tech Stack

| Component | Technology |
|---|---|
| Runtime | Node.js 20+ |
| Language | TypeScript (strict, zero any) |
| MCP SDK | @modelcontextprotocol/sdk |
| YAML | yaml (npm) |
| Search | MiniSearch (BM25 + lexical fuzzy) |
| Watch | Chokidar (debounce 500ms) |
| Build | tsup (ESM, Node 20 target) |
| Tests | Vitest |

Development

Commands

npm install          # Install dependencies
npm run build        # Compile with tsup
npm run typecheck    # TypeScript check (zero errors expected)
npm run test         # Run tests
npm run dev          # Watch mode

Full contribution guide: CONTRIBUTING.md

Project Rules

  • TypeScript strict — zero any, no @ts-ignore, no @ts-expect-error
  • 100% in-memory data after startup — no external database
  • MCP transport: stdio (JSON-RPC)
  • Tool results capped at MAX_ITEMS=10 to avoid context overflow
  • Hot-reload via file watcher (chokidar, debounce 500ms)
  • Tests with Vitest — coverage focused on critical paths, error paths, and resilience

Code Conventions

| Element | Convention |
|---|---|
| Files | kebab-case.ts |
| Classes | PascalCase |
| Functions/vars | camelCase |
| Constants | UPPER_SNAKE_CASE |
| Imports | ESM (.js extension required) |

Contribution Checklist

  • [ ] Tests written and passing
  • [ ] npm run build without errors
  • [ ] npm run typecheck zero errors
  • [ ] npm test all pass
  • [ ] Code follows project conventions
  • [ ] No any, no @ts-ignore

Positioning

Tool Multi-repo Tech Graph Dev Profile Local MCP Privacy Cost
Cortex MCP ✅ Total Free
Anthropic Memory ⚠️ Free
Mem0/OpenMemory ⚠️ ⚠️ ⚠️ ✅ Opt. Freemium
Repomix ⚠️ Free
Sourcegraph Cody ⚠️ ✅ Self $$$
Augment Code ⚠️ ❌ Cloud $$

This table compares features, not maturity. Cortex is an individual MVP-stage project; the other tools have teams, communities, and larger production histories.

Cortex combines multi-repo scanning + local technology graph + developer profile + delivery via local MCP.


Usage Examples

Building a New Project

Without Cortex:

You: "I need to build a real-time chat app"
Claude: "You can use Socket.IO with Express..."
(Generic advice, no context about your experience)

With Cortex:

You: "I need to build a real-time chat app"
Claude: "I can see in your portfolio that you have experience with Socket.IO
in 'collab-tool' and 'gaming-platform'. You prefer NestJS for structured APIs.
I can reuse your auth middleware from 'user-service'
and the Redis session pattern from 'marketplace-backend'."

Decisions With Context

Question: "GraphQL or REST for this project?"

Answer with Cortex: "Looking at your history: you used GraphQL in 2 of 12 projects, both complex dashboards with many data relations. For simple CRUD APIs, you consistently chose REST + Express. This project looks like CRUD, so REST aligns with your proven patterns."

Framework Analysis (NestJS)

Complete example of knowledge extraction from NestJS in examples/nestjs-analysis.md — demonstrating automatic detection of 18 technologies, enterprise patterns, and integration strategies.


Context Overhead

"Will loading all this knowledge eat up my context window?"

No. Here are the numbers.

Token budget per session

| Element | Approx. tokens |
|---|---|
| Tool call invocation (name + schema) | ~150 |
| start_session response (stack, last 3 decisions, patterns, operator profile) | ~1,200 |
| Additional tool calls during session (3–5×) | ~600–1,000 |
| Total session bootstrap | ~2,000–2,500 |

Token counts were measured on real start_session responses serialized as JSON-RPC. Your figures will be slightly lower if your knowledge base is sparse, or higher if you have dense operator-profile notes.

As a percentage of the context window

| Model | Context window | Cortex budget | % used |
|---|---|---|---|
| Claude Sonnet 3.7 / 3.5 | 200,000 tokens | ~2,500 | ~1.25 % |
| GPT-4o | 128,000 tokens | ~2,500 | ~1.95 % |
| GitHub Copilot (GPT-4o) | 128,000 tokens | ~2,500 | ~1.95 % |
| Gemini 2.0 Flash | 1,048,576 tokens | ~2,500 | ~0.24 % |
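The percentages above are just the budget divided by the window; you can plug in your own model's figures:

```typescript
// Share of the context window consumed by the Cortex session budget.
function percentOfWindow(budgetTokens: number, windowTokens: number): number {
  return (budgetTokens / windowTokens) * 100;
}

percentOfWindow(2500, 200_000); // 1.25  (200k-class window)
percentOfWindow(2500, 128_000); // ~1.95 (128k-class window)
```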

Return on investment

For roughly 2 % of your context window, every session gets:

  • Full technology stack of the active project
  • Last recorded architectural decisions with rationale
  • Accumulated patterns across all your repositories
  • Pitfalls and observations logged over time
  • Your complete operator profile (domain, preferences, conventions)

Compare that to a typical codebase dump with Repomix (50,000–300,000 tokens — 25–150 % of a 200k window). Cortex gives you the knowledge layer — the distilled "why" and "how" — at a fraction of the cost of pasting raw source files.

The ratio: ~1–2 % of the context window buys portfolio-wide memory, roughly 20–120× cheaper in tokens than dumping raw source for comparable grounding.


Roadmap

  • [x] Multi-stack scanner (10 languages/platforms)
  • [x] 29 MCP tools + resources + prompts
  • [x] Runtime validation with Zod in all handlers
  • [x] 90 tests in 35 files (happy paths + error paths + resilience + concurrency + CLI e2e + MCP contract + stdio integration)
  • [x] CLI doctor for first-use diagnostics
  • [x] GitHub Actions CI (Node 20 + 22)
  • [x] Example data directory (examples/knowledge/)
  • [ ] Semantic search with embeddings (currently lexical + synonym expansion)
  • [ ] Plugin ecosystem for custom detectors
  • [ ] Local web dashboard to visualize the knowledge graph
  • [ ] Dynamic prompts based on session history
  • [ ] npm publication (npx cortex-mcp init)

References

Resource Link
Model Context Protocol https://modelcontextprotocol.io/
Anthropic MCP Memory https://github.com/modelcontextprotocol/servers/tree/main/src/memory
Graphiti (Zep) https://github.com/getzep/graphiti
Mem0 https://github.com/mem0ai/mem0
Repomix https://github.com/yamadashy/repomix

Support

If Cortex saves you time, consider buying me a coffee ☕

Crypto

| Network | Address |
|---|---|
| Bitcoin (BTC) | bc1qwvmzcy62c9kcd44zy67s57cn6pktmnctjk9zws |
| Ethereum / EVM (ETH) | 0x797eca0D88f92d08Ccc6dd10E3DEcFEacAc511Ce |

Tip: Use a wallet created specifically for donations — never your main or trading wallet. For Ethereum you can register a human-readable ENS name (e.g. yourname.eth) so the address is easy to share and verify.

PIX (Brazil)

Chave PIX: 4978dd10-e12d-42e8-8a32-257ad00594e3

You can also support the project by:

  • ⭐ Starring the repository
  • 🐛 Reporting bugs or requesting features via Issues
  • 🔀 Opening a pull request

License

MIT
