Claude Skills MCP Server


Use Claude's powerful new Skills system with ANY AI model or coding assistant - including Cursor, Codex, GPT-5, Gemini, and more. This MCP server brings Anthropic's Agent Skills framework to the entire AI ecosystem through the Model Context Protocol.

A Model Context Protocol (MCP) server that provides intelligent search capabilities for discovering relevant Claude Agent Skills using vector embeddings and semantic similarity. This server implements the same progressive disclosure architecture that Anthropic describes in their Agent Skills engineering blog, making specialized skills available to any MCP-compatible AI application.

An open-source project by K-Dense AI - creators of autonomous AI scientists for scientific research.

This MCP server enables any MCP-compatible AI assistant to intelligently search and retrieve skills from our curated Claude Scientific Skills repository and other skill sources like the Official Claude Skills. If you want substantially more advanced capabilities, compute infrastructure, and enterprise-ready AI scientist offerings, check out K-Dense AI's commercial platform.

Demo

[Demo video: semantic search and progressive loading of Claude Agent Skills in Cursor]

Features

  • 🔍 Semantic Search: Vector embeddings for intelligent skill discovery
  • 📚 Progressive Disclosure: Multi-level skill loading (metadata → full content → files)
  • 🚀 Zero Configuration: Works out of the box with curated skills
  • 🌐 Multi-Source: Load from GitHub repositories and local directories
  • ⚡ Fast & Local: No API keys needed, with automatic GitHub caching
  • 🔧 Configurable: Customize sources, models, and content limits

Quick Start

For Cursor Users

Install with one command (handles everything automatically):

curl -sSL https://raw.githubusercontent.com/K-Dense-AI/claude-skills-mcp/main/setup-cursor.sh | bash

Then restart Cursor. See Setup Instructions for details.

Using uvx (Standalone)

Run the server with default configuration (no installation required):

uvx claude-skills-mcp

This loads ~90 skills from Anthropic's official skills repository and K-Dense AI's scientific skills collection.

With Custom Configuration

To customize skill sources or search parameters:

# 1. Print the default configuration
uvx claude-skills-mcp --example-config > config.json

# 2. Edit config.json to your needs

# 3. Run with your custom configuration
uvx claude-skills-mcp --config config.json

Setup for Your AI Assistant

Cursor (Recommended)

⚠️ Important: To avoid startup timeout issues, run our one-line setup script before configuring Cursor.

Quick Install (One Command)

curl -sSL https://raw.githubusercontent.com/K-Dense-AI/claude-skills-mcp/main/setup-cursor.sh | bash

This script will:

  • ✅ Install uv if needed (no root/sudo required)
  • ✅ Pre-download all dependencies (~250 MB, one-time)
  • ✅ Auto-configure Cursor's MCP settings
  • ✅ Complete in 60-120 seconds on first run

Then restart Cursor and you're done! 🎉

The script is completely safe:

  • No root access required
  • Installs only to your home directory (~/.cargo, ~/.cache, ~/.cursor)
  • Creates backups of existing configurations
  • View source code

Alternative: Cursor Directory (Visual Setup)

You can also add this MCP server through Cursor's UI:

  1. Visit Claude Skills MCP on Cursor Directory
  2. Click "Add MCP server to Cursor"
  3. Important: Run the pre-cache command first to avoid timeouts:
    uvx claude-skills-mcp --help
    

<details>
<summary>Manual Configuration (Advanced)</summary>

If you prefer to configure manually:

Step 1: Pre-cache dependencies (critical to avoid timeout):

uvx claude-skills-mcp --help

Wait for it to complete (~60-120 seconds).

Step 2: Add to Cursor's MCP settings (~/.cursor/mcp.json):

{
  "mcpServers": {
    "claude-skills": {
      "command": "uvx",
      "args": ["claude-skills-mcp"]
    }
  }
}

Step 3: Restart Cursor

This configuration uses uvx, which automatically updates the server when new versions are released.

</details>

Claude Desktop

Add to your MCP settings:

{
  "mcpServers": {
    "claude-skills": {
      "command": "uvx",
      "args": ["claude-skills-mcp"]
    }
  }
}

Restart Claude Desktop to activate.

Other MCP-Compatible Tools

Any tool supporting the Model Context Protocol can use this server via uvx claude-skills-mcp. Consult your tool's MCP configuration documentation.

Architecture

Built on five core components: Configuration (JSON-based config loading), Skill Loader (GitHub + local with automatic caching), Search Engine (sentence-transformers vector search), MCP Server (three tools with stdio transport), and CLI Entry Point (argument parsing and lifecycle management).

See Architecture Guide for detailed design, data flow, and extension points.

Configuration

The server uses a JSON configuration file to specify skill sources and search parameters.

Default Configuration

If no config file is specified, the server uses these defaults:

{
  "skill_sources": [
    {
      "type": "github",
      "url": "https://github.com/anthropics/skills"
    },
    {
      "type": "github",
      "url": "https://github.com/K-Dense-AI/claude-scientific-skills"
    },
    {
      "type": "local",
      "path": "~/.claude/skills"
    }
  ],
  "embedding_model": "all-MiniLM-L6-v2",
  "default_top_k": 3,
  "max_skill_content_chars": null
}

This loads ~90 skills by default: 15 from Anthropic (document tools, web artifacts, etc.) + 78 from K-Dense AI (scientific analysis tools) + any custom local skills.

Configuration Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `skill_sources` | Array | See Default Configuration above | GitHub repos or local paths |
| `embedding_model` | String | `all-MiniLM-L6-v2` | Sentence-transformers model |
| `default_top_k` | Integer | `3` | Number of results to return |
| `max_skill_content_chars` | Integer/null | `null` | Content truncation limit |
| `load_skill_documents` | Boolean | `true` | Load additional skill files |
| `max_image_size_bytes` | Integer | `5242880` | Max image size (5 MB) |

To customize, run uvx claude-skills-mcp --example-config > config.json to see all options, or check Usage Guide for advanced patterns.
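For reference, here is a minimal sketch of how such a config file can be loaded with the documented defaults applied. The `Config` dataclass and `load_config` helper are illustrative, not the server's actual internals; the field names and defaults mirror the options table above.

```python
import json
from dataclasses import dataclass, field
from pathlib import Path


@dataclass
class Config:
    # Defaults mirror the Default Configuration shown above.
    skill_sources: list[dict] = field(default_factory=lambda: [
        {"type": "github", "url": "https://github.com/anthropics/skills"},
        {"type": "github", "url": "https://github.com/K-Dense-AI/claude-scientific-skills"},
        {"type": "local", "path": "~/.claude/skills"},
    ])
    embedding_model: str = "all-MiniLM-L6-v2"
    default_top_k: int = 3
    max_skill_content_chars: int | None = None
    load_skill_documents: bool = True
    max_image_size_bytes: int = 5_242_880  # 5 MB


def load_config(path: str | None = None) -> Config:
    """Read a JSON config file; fall back to the defaults above if none is given."""
    if path is None:
        return Config()
    return Config(**json.loads(Path(path).read_text()))
```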

MCP Tools

The server provides three tools for working with Claude Agent Skills:

  1. search_skills - Semantic search for relevant skills based on task description
  2. read_skill_document - Retrieve specific files (scripts, data, references) from skills
  3. list_skills - View complete inventory of all loaded skills (for exploration/debugging)

See API Documentation for detailed parameters, examples, and best practices.
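To make the tool interface concrete, here is a minimal sketch of invoking search_skills programmatically with the official `mcp` Python SDK. The tool's exact argument names are an assumption; see the API Documentation for the real schema.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the server over stdio, the same way an MCP client such as Cursor does.
    params = StdioServerParameters(command="uvx", args=["claude-skills-mcp"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # should list the three tools above
            print([t.name for t in tools.tools])
            result = await session.call_tool(
                "search_skills",
                {"query": "analyze RNA sequencing data"},  # argument name assumed
            )
            print(result.content)


asyncio.run(main())
```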

Quick Examples

Find skills: "I need to analyze RNA sequencing data"
Access files: "Show me Python scripts from the scanpy skill"
List all: "What skills are available?"

For task-oriented queries, prefer search_skills over list_skills.

Skill Format

The server searches for SKILL.md files with the following format:

---
name: Skill Name
description: Brief description of what this skill does
---

# Skill Name

[Full skill content in Markdown...]
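As an illustration, a minimal parser for this format might split on the frontmatter delimiters and read the two metadata keys. A production loader would use a proper YAML parser; this sketch is not the server's actual code.

```python
from pathlib import Path


def parse_skill(path: str) -> dict[str, str]:
    """Extract name, description, and body from a SKILL.md file."""
    text = Path(path).read_text(encoding="utf-8")
    # Split off the frontmatter between the first two '---' delimiters.
    _, frontmatter, body = text.split("---", 2)
    meta: dict[str, str] = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return {
        "name": meta.get("name", ""),
        "description": meta.get("description", ""),
        "content": body.strip(),
    }
```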

Technical Details

Dependencies

  • mcp>=1.0.0 - Model Context Protocol
  • sentence-transformers>=2.2.0 - Vector embeddings (uses CPU-only PyTorch on Linux)
  • numpy>=1.24.0 - Numerical operations
  • httpx>=0.24.0 - HTTP client for GitHub API

Note on PyTorch: This project uses CPU-only PyTorch on Linux systems to avoid unnecessary CUDA dependencies (~3-4 GB). This significantly reduces Docker image size and build time while maintaining full functionality for semantic search.

Python Version

  • Requires: Python 3.12 (not 3.13)
  • Dependencies are automatically managed by uv/uvx

Performance

  • Startup time: ~10-20 seconds (loads only SKILL.md files; additional documents are lazy-loaded)
  • Query time: <1 second for vector search
  • Document access: On-demand with automatic disk caching
  • Memory usage: ~500MB (embedding model + indexed skills)
  • First run: Downloads ~100MB embedding model (cached thereafter)
  • Docker image size: ~1-2 GB (uses CPU-only PyTorch, no CUDA dependencies)

How It Works

This server implements Anthropic's progressive disclosure architecture:

  1. Startup: Load SKILL.md files from GitHub/local sources, generate vector embeddings
  2. Search: Match task queries against skill descriptions using cosine similarity
  3. Progressive Loading: Return metadata → full content → referenced files as needed
  4. Lazy Document Loading: Additional skill documents fetched on-demand with automatic disk caching
  5. Two-Level Caching: GitHub API responses (24h) + individual documents (permanent)

This enables any MCP-compatible AI assistant to intelligently discover and load relevant skills with minimal context overhead and fast startup. See Architecture Guide for details.
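Steps 1-2 boil down to standard embedding search. Below is a minimal sketch using the default all-MiniLM-L6-v2 model; the skill descriptions are made up for illustration, whereas the real server indexes SKILL.md metadata.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # the default embedding model

# Stand-in skill descriptions for illustration only.
descriptions = [
    "Analyze single-cell RNA sequencing data with scanpy",
    "Create and edit Word documents",
    "Build interactive web artifacts",
]

# With normalized embeddings, a dot product equals cosine similarity.
skill_vecs = model.encode(descriptions, normalize_embeddings=True)
query_vec = model.encode("I need to analyze RNA sequencing data",
                         normalize_embeddings=True)

scores = skill_vecs @ query_vec
for i in np.argsort(scores)[::-1][:3]:  # top 3, matching default_top_k
    print(f"{scores[i]:.3f}  {descriptions[i]}")
```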

Skill Sources

Load skills from GitHub repositories (direct skills or Claude Code plugins) or local directories.

By default, loads from:

  • Anthropic's official skills (https://github.com/anthropics/skills)
  • K-Dense AI's scientific skills (https://github.com/K-Dense-AI/claude-scientific-skills)
  • Local skills in ~/.claude/skills

Error Handling

The server is designed to be resilient:

  • If a local folder is inaccessible, it logs a warning and continues
  • If a GitHub repo fails to load, it tries alternate branches and continues
  • If no skills are loaded, the server exits with an error message
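A sketch of that policy in Python (the `load_source` stub stands in for the real GitHub/local loaders, which are not shown):

```python
import logging
import sys

logger = logging.getLogger("claude_skills_mcp")


def load_source(source: dict) -> list[dict]:
    # Stand-in for the real GitHub/local loaders.
    raise NotImplementedError(f"loader for {source['type']!r} not shown")


def load_all(sources: list[dict]) -> list[dict]:
    skills: list[dict] = []
    for source in sources:
        try:
            skills.extend(load_source(source))
        except Exception as exc:
            # A failing source is a warning, not a fatal error.
            logger.warning("Skipping source %s: %s", source, exc)
    if not skills:
        # Only an empty result aborts startup.
        logger.error("No skills loaded from any source")
        sys.exit(1)
    return skills
```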

Troubleshooting

Cursor Startup Timeout

Problem: Cursor shows "MCP server failed to start" or timeout errors.

Solution: Run the setup script before configuring Cursor:

curl -sSL https://raw.githubusercontent.com/K-Dense-AI/claude-skills-mcp/main/setup-cursor.sh | bash

This pre-downloads all dependencies (~250 MB) so Cursor can start the server quickly (5-10 seconds).

Why this happens: On first run, uvx needs to download dependencies including PyTorch (~150 MB) and sentence-transformers (~50 MB), which can take 60-180 seconds and exceed Cursor's startup timeout.

Manual Pre-Caching

If you prefer not to use the setup script, you can manually pre-cache:

uvx claude-skills-mcp --help

Wait for it to complete (~60-120 seconds), then configure Cursor normally.

Slow First Search

Expected behavior: The first search after server startup takes 3-5 seconds to load the embedding model (~100 MB download, one-time).

This is normal and only happens once. The model is cached permanently at ~/.cache/huggingface/.

Skills Not Loading

Problem: Search returns no results or "No skills loaded".

Check:

  1. Network connectivity (for GitHub sources)
  2. GitHub API rate limits (60 requests/hour without token)
  3. Verbose logs: uvx claude-skills-mcp --verbose

Solution: Use local skills or wait for rate limit reset.
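You can check your current unauthenticated quota directly; GitHub's /rate_limit endpoint is free to call and httpx is already a dependency of this project:

```python
import httpx

core = httpx.get("https://api.github.com/rate_limit").json()["resources"]["core"]
print(f"remaining: {core['remaining']}/{core['limit']} (resets at {core['reset']})")
```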

Docker Deployment

Building Docker Image

docker build -t claude-skills-mcp -f Dockerfile.glama .

Running with Docker

docker run -it claude-skills-mcp

The optimized Dockerfile uses CPU-only PyTorch to minimize image size and build time while maintaining full functionality.

Development

Installation from Source

git clone https://github.com/K-Dense-AI/claude-skills-mcp.git
cd claude-skills-mcp
uv sync

Running in Development

uv run claude-skills-mcp

Running with Verbose Logging

uvx claude-skills-mcp --verbose

Running Tests

# Run all tests (coverage runs automatically)
uv run pytest tests/

# Run only unit tests (fast)
uv run pytest tests/ -m "not integration"

# Run local demo (creates temporary skills)
uv run pytest tests/test_integration.py::test_local_demo -v -s

# Run repository demo (loads from K-Dense-AI scientific skills)
uv run pytest tests/test_integration.py::test_repo_demo -v -s

# Generate HTML coverage report
uv run pytest tests/ --cov-report=html
open htmlcov/index.html

Note: Coverage reporting is enabled by default. All test runs show coverage statistics.

See Testing Guide for more details.

Command Line Options

uvx claude-skills-mcp [OPTIONS]

Options:
  --config PATH         Path to configuration JSON file
  --example-config      Print default configuration (with comments) and exit
  --verbose, -v         Enable verbose logging
  --help               Show help message

Contributing

Contributions are welcome! To contribute:

  1. Report issues: Open an issue for bugs or feature requests
  2. Submit PRs: Fork, create a feature branch, ensure tests pass (uv run pytest tests/), then submit
  3. Code style: Run uvx ruff check src/ before committing
  4. Add tests: New features should include tests

For questions, email orion.li@k-dense.ai

Documentation

  • Usage Examples - Advanced configuration, real-world use cases, and custom skill creation
  • Testing Guide - Complete testing instructions, CI/CD, and coverage analysis
  • Roadmap - Future features and planned enhancements

Roadmap

We're working on MCP Sampling, sandboxed execution, binary support, and skill workflows. See our detailed roadmap for technical specifications.

License

This project is licensed under the Apache License 2.0.

Copyright 2025 K-Dense AI (https://k-dense.ai)
