
Expert Registry MCP Server
Last Updated: 2025-06-30
A high-performance MCP server for expert discovery, registration, and context injection built with FastMCP v2, featuring vector and graph database integration for enhanced semantic search and relationship modeling.
Features
- 🚀 High Performance: Multi-layer caching with vector indices for sub-millisecond queries
- 📁 File-Based Updates: Hot reload on registry/context file changes
- 🔍 Semantic Search: Vector database integration for meaning-based expert discovery
- 🔗 Relationship Modeling: Graph database for expert networks and team formation
- 💉 Context Injection: AI-powered prompt enhancement with expert knowledge
- 📊 Analytics: Performance tracking with collaborative filtering
- 🧠 Hybrid Discovery: Combined vector similarity and graph connectivity scoring
- 🐍 Python-First: Built with FastMCP v2 for clean, Pythonic code
Installation
Docker (Recommended for Production)
The easiest way to run the Expert Registry MCP server is using Docker:
# Build and deploy locally
./scripts/build.sh
./scripts/deploy.sh
# Or use pre-built image from GitHub Container Registry
docker pull ghcr.io/agentience/expert-registry-mcp:latest
Features:
- 🐳 Single container service for multiple MCP clients
- 📦 Expert contexts and registry mapped to host for easy editing
- 🔄 Hot reload support when files change on host
- 🌐 SSE transport for client connections
- 🗄️ Includes Neo4j database setup
- 🚀 Production-ready with health checks
See DOCKER.md for the complete deployment guide.
Local Development
Using uv (recommended):
# Create virtual environment and install
uv venv
uv pip install -e .
# Or install directly
uv pip install expert-registry-mcp
Using pip:
pip install expert-registry-mcp
Database Setup
Vector Database (ChromaDB - Embedded)
# ChromaDB is embedded, no separate installation needed
# It will create a vector-db directory automatically
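To confirm the embedded store works outside the server, here is a minimal sketch using the ChromaDB Python client (the ./vector-db path and collection check are assumptions; the server manages its own storage location):
import chromadb

# A persistent client creates the on-disk directory on first use
client = chromadb.PersistentClient(path="./vector-db")
print(client.list_collections())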
Graph Database (Neo4j)
# Option 1: Docker (recommended)
docker run -d --name neo4j \
-p 7474:7474 -p 7687:7687 \
-e NEO4J_AUTH=neo4j/password \
neo4j:latest
# Option 2: Local installation
# Download from https://neo4j.com/download/
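To verify the Neo4j instance is reachable before starting the server, a minimal sketch with the official Python driver (credentials match the Docker command above; adjust to your setup):
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
driver.verify_connectivity()  # raises if the database is unreachable or the credentials are wrong
driver.close()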
Quick Start
- Set up your expert system directory structure:
expert-system/
├── registry/
│ └── expert-registry.json
├── expert-contexts/
│ ├── aws-amplify-gen2.md
│ ├── aws-cloudscape.md
│ └── ...
└── performance/
└── metrics.json
- Configure environment:
export EXPERT_SYSTEM_PATH=/path/to/expert-system
export NEO4J_URI=bolt://localhost:7687
export NEO4J_PASSWORD=password
- Run the server:
# Using FastMCP CLI
fastmcp run expert-registry-mcp
# Or using Python
python -m expert_registry_mcp.server
Claude Desktop Configuration
Add to your Claude Desktop configuration:
{
"mcpServers": {
"expert-registry": {
"command": "uv",
"args": ["run", "expert-registry-mcp"],
"env": {
"EXPERT_SYSTEM_PATH": "/path/to/expert-system",
"NEO4J_URI": "bolt://localhost:7687",
"NEO4J_PASSWORD": "password"
}
}
}
}
Usage Examples
Basic Expert Discovery
# Detect technologies in your project
technologies = await expert_detect_technologies(
scan_paths=["./src", "./package.json"]
)
# Select the best expert with hybrid search
result = await expert_smart_discover(
context={
"description": "Refactor authentication system using AWS Amplify",
"technologies": technologies.technologies,
"constraints": ["maintain backward compatibility"],
"preferred_strategy": "single"
}
)
Context Injection
# Load expert context
context = await expert_load_context(
expert_id=result.expert.id
)
# Inject into prompt
enhanced_prompt = await expert_inject_context(
prompt="Refactor the authentication system",
expert_id=result.expert.id,
injection_points=["constraints", "patterns", "quality-criteria"]
)
Performance Tracking
# Track usage
await expert_track_usage(
expert_id=result.expert.id,
task_id="auth-refactor-001",
outcome={
"success": True,
"adherence_score": 9.5,
"task_type": "refactoring"
}
)
# Get analytics
analytics = await expert_get_analytics(
expert_id=result.expert.id
)
Available Tools
Registry Management
- expert_registry_list - List experts with filtering
- expert_registry_get - Get expert details
- expert_registry_search - Search experts by query
Expert Selection
- expert_detect_technologies - Detect project technologies
- expert_select_optimal - Select the best expert for a task
- expert_assess_capability - Assess expert capability
- expert_smart_discover - AI-powered hybrid search (vector + graph)
Semantic Search
- expert_semantic_search - Search using natural language
- expert_find_similar - Find similar experts
Graph Operations
- expert_explore_network - Explore expert relationships
- expert_find_combinations - Find complementary expert teams
Context Operations
- expert_load_context - Load expert knowledge
- expert_inject_context - Enhance prompts with expertise
Analytics
- expert_track_usage - Record expert performance
- expert_get_analytics - Get performance metrics
Expert Registry Format
{
"version": "1.0.0",
"last_updated": "2025-06-30T00:00:00Z",
"experts": [
{
"id": "aws-amplify-gen2",
"name": "AWS Amplify Gen 2 Expert",
"version": "1.0.0",
"description": "Expert in AWS Amplify Gen 2 development",
"domains": ["backend", "cloud", "serverless"],
"specializations": [
{
"technology": "AWS Amplify Gen 2",
"frameworks": ["AWS CDK", "TypeScript"],
"expertise_level": "expert"
}
],
"workflow_compatibility": {
"feature": 0.95,
"bug-fix": 0.85,
"refactoring": 0.80,
"investigation": 0.70,
"article": 0.60
},
"constraints": [
"Use TypeScript-first approach",
"Follow AWS Well-Architected Framework"
],
"patterns": [
"Infrastructure as Code",
"Serverless-first architecture"
],
"quality_standards": [
"100% type safety",
"Comprehensive error handling"
]
}
]
}
Expert Context Format
Expert context files are markdown documents in expert-contexts/:
# AWS Amplify Gen 2 Expert Context
## Constraints
- Use TypeScript for all backend code
- Follow AWS Well-Architected Framework principles
- Implement proper error handling and logging
## Patterns
- Infrastructure as Code using CDK
- Serverless-first architecture
- Event-driven communication
## Quality Standards
- 100% TypeScript type coverage
- Comprehensive error handling
- Unit test coverage > 80%
Development
Setup Development Environment
# Clone repository
git clone https://github.com/agentience/expert-registry-mcp
cd expert-registry-mcp
# Create virtual environment with uv
uv venv
source .venv/bin/activate # or .venv\Scripts\activate on Windows
# Install in development mode
uv pip install -e ".[dev]"
Run Tests
# Run all tests
pytest
# Run with coverage
pytest --cov=expert_registry_mcp
# Run specific test file
pytest tests/test_registry.py
Code Quality
# Format code
black src tests
# Lint code
ruff check src tests
# Type checking
mypy src
Architecture
Multi-Layer Caching
- Registry Cache: 24-hour TTL for expert definitions
- Vector Cache: Embeddings cached until expert updates
- Graph Cache: Relationship queries cached for 10 minutes
- Selection Cache: 5-minute TTL for technology detection
- Context Cache: LRU cache for expert contexts (50 entries)
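The cache layers above map naturally onto TTL and LRU caches. A minimal sketch of the idea using the cachetools library (an illustration with assumed names and sizes, not the server's internal implementation):
from cachetools import TTLCache, LRUCache

registry_cache = TTLCache(maxsize=256, ttl=24 * 60 * 60)  # expert definitions, 24-hour TTL
graph_cache = TTLCache(maxsize=1024, ttl=10 * 60)         # relationship queries, 10 minutes
selection_cache = TTLCache(maxsize=512, ttl=5 * 60)       # technology detection, 5 minutes
context_cache = LRUCache(maxsize=50)                      # expert contexts, 50 entries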
Database Integration
- ChromaDB: Embedded vector database for semantic search
  - Multiple collections for different embedding types
  - Automatic embedding generation with sentence-transformers
- Neo4j: Graph database for relationship modeling
  - Expert-Technology-Task relationships
  - Team synergy calculations
  - Evolution tracking
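Semantic search pairs sentence-transformers embeddings with a ChromaDB collection. A minimal sketch of that flow (the model name and collection name are assumptions):
import chromadb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
client = chromadb.PersistentClient(path="./vector-db")
collection = client.get_or_create_collection("experts")

# Embed a natural-language query and retrieve the closest expert documents
embedding = model.encode("secure authentication with cloud integration").tolist()
results = collection.query(query_embeddings=[embedding], n_results=5)
print(results["ids"])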
Performance Features
- Vector Indices: Annoy indices for ultra-fast similarity search
- Precomputed Combinations: Common expert pairs cached
- Batch Operations: Efficient bulk processing
- Smart Invalidation: Targeted cache updates
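Annoy builds a static index over expert embeddings and answers nearest-neighbor queries in well under a millisecond. A minimal sketch (the dimensionality and random vectors are placeholders for real expert embeddings):
import numpy as np
from annoy import AnnoyIndex

dim = 384  # depends on the embedding model used
index = AnnoyIndex(dim, "angular")
embeddings = np.random.rand(100, dim)  # placeholder expert embeddings
for i, vector in enumerate(embeddings):
    index.add_item(i, vector.tolist())
index.build(10)  # more trees improve recall at the cost of build time
nearest = index.get_nns_by_vector(embeddings[0].tolist(), 5)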
File Watching
- Uses watchdog for cross-platform file monitoring
- Automatic registry reload and database sync
- No server restart required for updates
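A minimal sketch of the watchdog pattern (the handler class and watched path are illustrative, not the server's actual code):
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class RegistryChangeHandler(FileSystemEventHandler):
    def on_modified(self, event):
        # The server would re-sync caches and databases here
        if event.src_path.endswith((".json", ".md")):
            print(f"Change detected: {event.src_path}")

observer = Observer()
observer.schedule(RegistryChangeHandler(), path="expert-system", recursive=True)
observer.start()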
Troubleshooting
Common Issues
- Expert not found
  - Verify the expert ID exists in the registry
  - Check that file paths are correct
  - Ensure the registry JSON is valid (see the snippet after this list)
- Context file missing
  - Check the expert-contexts directory
  - Verify the filename matches the expert ID
  - Ensure the file has a .md extension
- Cache not updating
  - The file watcher may need a restart
  - Check file permissions
  - Verify EXPERT_SYSTEM_PATH
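A quick way to confirm the registry file is found and parses cleanly (assumes EXPERT_SYSTEM_PATH is set as in Quick Start):
import json
import os
import pathlib

registry = pathlib.Path(os.environ["EXPERT_SYSTEM_PATH"]) / "registry" / "expert-registry.json"
data = json.loads(registry.read_text())  # raises if the JSON is malformed
print(f"{len(data['experts'])} experts loaded")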
Debug Mode
Enable debug logging:
export FASTMCP_DEBUG=1
expert-registry-mcp
Advanced Features
Semantic Search
The system uses ChromaDB to enable natural language queries:
# Find experts by meaning, not just keywords
results = await expert_semantic_search(
query="implement secure authentication with cloud integration",
search_mode="hybrid"
)
Relationship Exploration
Neo4j powers sophisticated relationship queries:
# Explore expert networks
network = await expert_explore_network(
start_expert_id="aws-amplify-gen2",
depth=2,
relationship_types=["SPECIALIZES_IN", "COMPATIBLE_WITH"]
)
Team Formation
AI-powered team composition:
# Find complementary expert teams
teams = await expert_find_combinations(
requirements=["AWS Amplify", "React", "DynamoDB"],
team_size=3
)
Contributing
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Run tests and linting
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
License
MIT License - see LICENSE file for details
Support
- Documentation: https://github.com/agentience/expert-registry-mcp
- Issues: https://github.com/agentience/expert-registry-mcp/issues
- Discussions: https://github.com/agentience/expert-registry-mcp/discussions