MCP Server Cookiecutter Template
An easy, manageable way to create your own MCP server.
A comprehensive Cookiecutter template for creating Model Context Protocol (MCP) servers with Python.
What is MCP?
Model Context Protocol (MCP) is an open standard that lets applications provide context for Large Language Models (LLMs) like Claude in a standardized way. MCP servers can expose:
- Tools: Functions that LLMs can execute to perform actions
- Resources: Data sources that LLMs can access for context
- Prompts: Reusable templates for LLM interactions
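For orientation, here is a minimal sketch of a FastMCP server exposing one of each primitive; the generated project spreads these across dedicated modules, and the names `greet`, `app_version`, and `summarize` are illustrative only.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def greet(name: str) -> str:
    """A tool the LLM can call to perform an action."""
    return f"Hello, {name}!"

@mcp.resource("info://version")
def app_version() -> str:
    """A resource the LLM can read for context."""
    return "0.1.0"

@mcp.prompt()
def summarize(text: str) -> str:
    """A reusable prompt template."""
    return f"Summarize the following text:\n\n{text}"

if __name__ == "__main__":
    mcp.run()
```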
Features
- 🚀 Complete project structure with all necessary components pre-configured
- 🔧 Sample implementations of tools, resources, and prompts
- 📦 FastMCP integration for simple decorator-based development
- 🧩 Lifecycle management with proper resource initialization and cleanup
- 🐍 Modern Python practices with type hints and documentation
- 🛠️ Development tools including Makefile, testing, and type checking
- 🐋 Docker support for containerized deployment
- 🔌 Claude Desktop integration for seamless testing with Claude
Quick Start
Requirements
- Python 3.10+ (the template defaults to 3.10)
- Cookiecutter
Creating a Project
# Install cookiecutter if needed
pip install cookiecutter
# Generate a new MCP server project from the template
cookiecutter gh:shubhamgupta-dat/mcp-server-cookiecutter
# Or from local copy
cookiecutter path/to/mcp-server-cookiecutter
Getting Started with Your New Server
# Navigate to your new project
cd your-project-name
# Set up the environment (uses uv under the hood)
make setup
# Activate the virtual environment
source .venv/bin/activate # On Unix/MacOS
# or
.venv\Scripts\activate # On Windows
# Run the server in development mode with the MCP Inspector
make dev
# Install the server in Claude Desktop (for local testing)
make install
File Structure
mcp-server-template/
├── cookiecutter.json
└── {{cookiecutter.project_name}}/
├── README.md
├── Makefile
├── Dockerfile
├── .gitignore
├── pyproject.toml
└── {{cookiecutter.module_name}}/
├── __init__.py
├── server.py
├── config.py
├── tools/
│ ├── __init__.py
│ └── sample_tools.py
├── resources/
│ ├── __init__.py
│ └── sample_resources.py
└── prompts/
├── __init__.py
└── sample_prompts.py
File Contents
cookiecutter.json
{
"project_name": "mcp-server",
"module_name": "{{ cookiecutter.project_name.lower().replace('-', '_') }}",
"project_description": "A Model Context Protocol (MCP) server",
"author_name": "Your Name",
"author_email": "your.email@example.com",
"version": "0.1.0",
"python_version": "3.10"
}
{{cookiecutter.project_name}}/README.md
# {{cookiecutter.project_name}}
{{cookiecutter.project_description}}
## Overview
This is a Model Context Protocol (MCP) server that exposes tools, resources, and prompts for use with LLM applications like Claude. MCP servers let you extend AI applications with custom functionality, data sources, and templated interactions.
## Quick Start
### Setup with uv (Recommended)
```bash
# Set up the environment
make setup
# Activate the virtual environment
source .venv/bin/activate # On Unix/MacOS
# or
.venv\Scripts\activate # On Windows
# Run the server in development mode with the MCP Inspector
make dev
# Install the server in Claude Desktop
make install
Manual Setup
# Install uv if you don't have it
pip install uv
# Create a virtual environment
uv venv
# Activate the virtual environment
source .venv/bin/activate # On Unix/MacOS
# or
.venv\Scripts\activate # On Windows
# Install the package in development mode
uv pip install -e .
# Run in development mode
mcp dev {{cookiecutter.module_name}}.server
# Install in Claude Desktop
mcp install {{cookiecutter.module_name}}.server
Docker
Build and run using Docker:
# Build the Docker image
make docker-build
# or
docker build -t {{cookiecutter.project_name}} .
# Run the container
make docker-run
# or
docker run -p 8000:8000 {{cookiecutter.project_name}}
Server Architecture
The server is organized into several components:
- `server.py`: Main MCP server setup and configuration
- `config.py`: Configuration management
- `tools/`: Tool implementations (functions that LLMs can execute)
- `resources/`: Resource implementations (data that LLMs can access)
- `prompts/`: Prompt template implementations (reusable conversation templates)
MCP Features
This server implements all three MCP primitives:
- Tools: Functions that the LLM can call to perform actions
  - Examples: `calculate`, `fetch_data`, `long_task`
- Resources: Data sources that provide context to the LLM
  - Examples: `static://example`, `dynamic://{parameter}`, `config://{section}`
- Prompts: Reusable templates for LLM interactions
  - Examples: `simple_prompt`, `structured_prompt`, `data_analysis_prompt`
Adding Your Own Components
Adding a New Tool
Create or modify files in the `tools/` directory:
@mcp.tool()
def my_custom_tool(param1: str, param2: int = 42) -> str:
"""
A custom tool that does something useful.
Args:
param1: Description of first parameter
param2: Description of second parameter with default value
Returns:
Description of the return value
"""
# Your implementation here
return f"Result: {param1}, {param2}"
Adding a New Resource
Create or modify files in the `resources/` directory:
@mcp.resource("my-custom-resource://{param}")
def my_custom_resource(param: str) -> str:
"""
A custom resource that provides useful data.
Args:
param: Description of the parameter
Returns:
The resource content
"""
# Your implementation here
return f"Resource content for: {param}"
Adding a New Prompt
Create or modify files in the `prompts/` directory:
@mcp.prompt()
def my_custom_prompt(param: str) -> str:
"""
A custom prompt template.
Args:
param: Description of the parameter
Returns:
The formatted prompt
"""
return f"""
# Custom Prompt Template
Context: {param}
Please respond with your analysis of the above context.
"""
Configuration
The server supports configuration via:
1. Environment Variables: Prefix with `MCP_` (e.g., `MCP_API_KEY=xyz123`)
   - Nested config: Use double underscores (`MCP_DATABASE__HOST=localhost`)
2. Config File: Specify the path via the `MCP_CONFIG_FILE` environment variable
Example config:
{
"api": {
"key": "xyz123",
"url": "https://api.example.com"
},
"database": {
"host": "localhost",
"port": 5432
}
}
Development
# Run tests
make test
# Format code
make format
# Type checking
make type-check
# Clean up build artifacts
make clean
License
[Include your license information here]
{{cookiecutter.project_name}}/Makefile
.PHONY: setup run dev install deploy test clean format type-check
# Set the Python version from cookiecutter or default to 3.10
PYTHON_VERSION := {{cookiecutter.python_version}}
# Setup with uv
setup:
# Check if uv is installed, install if not
@which uv >/dev/null || pip install uv
# Create a virtual environment
uv venv
# Install dependencies with development extras
uv pip install -e ".[dev]"
@echo "✅ Environment setup complete. Activate it with 'source .venv/bin/activate' (Unix/macOS) or '.venv\\Scripts\\activate' (Windows)"
# Run the server directly
run:
python -m {{cookiecutter.module_name}}.server
# Run in development mode with MCP inspector
dev:
mcp dev {{cookiecutter.module_name}}.server
# Install in Claude Desktop
install:
mcp install {{cookiecutter.module_name}}.server
# Run tests
test:
pytest
# Format code with black and isort
format:
black {{cookiecutter.module_name}}
isort {{cookiecutter.module_name}}
# Check types with mypy
type-check:
mypy {{cookiecutter.module_name}}
# Clean up build artifacts
clean:
rm -rf build/
rm -rf dist/
rm -rf *.egg-info/
find . -type d -name __pycache__ -exec rm -rf {} +
find . -type f -name "*.pyc" -delete
# Docker build
docker-build:
docker build -t {{cookiecutter.project_name}}:latest .
# Run with Docker
docker-run:
docker run -p 8000:8000 {{cookiecutter.project_name}}:latest
{{cookiecutter.project_name}}/Dockerfile
# Use a specific Python version
FROM python:{{cookiecutter.python_version}}-slim AS builder
# Set build arguments
ARG APP_USER=mcp
ARG APP_UID=1000
ARG APP_GID=1000
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Set up the application user
RUN groupadd -g $APP_GID $APP_USER && \
useradd -m -u $APP_UID -g $APP_GID -s /bin/bash $APP_USER
# Set the working directory
WORKDIR /app
# Install uv for dependency management
RUN pip install --no-cache-dir uv
# Copy project files
COPY pyproject.toml README.md ./
# Copy the application code
COPY {{cookiecutter.module_name}} ./{{cookiecutter.module_name}}
# Install the application (no virtual environment exists in the build image, so install into the system site-packages)
RUN uv pip install --system --no-cache -e .
# Switch to a smaller final image
FROM python:{{cookiecutter.python_version}}-slim
# Copy from builder
COPY --from=builder /usr/local/lib/python{{cookiecutter.python_version}}/site-packages /usr/local/lib/python{{cookiecutter.python_version}}/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin
COPY --from=builder /app /app
# Set working directory
WORKDIR /app
# Set environment variables
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV MCP_ENV=production
# Expose port for HTTP-based transports (SSE)
EXPOSE 8000
# Run the server (mcp.run() defaults to the stdio transport; see the SSE note below)
CMD ["python", "-m", "{{cookiecutter.module_name}}.server"]
{{cookiecutter.project_name}}/.gitignore
# Python bytecode
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
# Distribution / packaging
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
# Virtual environments
venv/
env/
ENV/
.venv/
.env/
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
pytest_cache/
# Editor files
.idea/
.vscode/
*.swp
*.swo
*~
# OS specific
.DS_Store
Thumbs.db
# Project specific
*.log
.env
.env.*
!.env.example
# MCP specific
claude_desktop_config.json
{{cookiecutter.project_name}}/pyproject.toml
[build-system]
requires = ["setuptools>=42", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "{{cookiecutter.project_name}}"
version = "{{cookiecutter.version}}"
description = "{{cookiecutter.project_description}}"
authors = [
{name = "{{cookiecutter.author_name}}", email = "{{cookiecutter.author_email}}"},
]
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
"mcp>=1.0",
"httpx>=0.24.0",
]
[project.optional-dependencies]
dev = [
"pytest>=7.0",
"black>=23.0",
"isort>=5.0",
"mypy>=1.0",
"ruff>=0.2.0",
]
[tool.setuptools.packages.find]
include = ["{{cookiecutter.module_name}}*"]
[tool.black]
line-length = 88
target-version = ["py310"]
[tool.isort]
profile = "black"
line_length = 88
[tool.mypy]
python_version = "3.10"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = true
disallow_incomplete_defs = true
[tool.ruff]
line-length = 88
target-version = "py310"
select = ["E", "F", "I"]
{{cookiecutter.module_name}}/__init__.py
"""{{cookiecutter.project_description}}"""
__version__ = "{{cookiecutter.version}}"
{{cookiecutter.module_name}}/server.py
"""
Main MCP server implementation.
This file initializes the FastMCP server and imports all tools, resources, and prompts.
"""
from contextlib import asynccontextmanager
from collections.abc import AsyncIterator
from dataclasses import dataclass
from mcp.server.fastmcp import Context, FastMCP
# Import config management
from .config import load_config
@dataclass
class AppContext:
"""
Type-safe application context container.
Store any application-wide state or connections here.
"""
config: dict
@asynccontextmanager
async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
"""
Application lifecycle manager.
Handles startup and shutdown operations with proper resource management.
Args:
server: The FastMCP server instance
Yields:
The application context with initialized resources
"""
# Load configuration
config = load_config()
# Initialize connections and resources
print("🚀 Server starting up...")
try:
# Create and yield the app context
yield AppContext(config=config)
finally:
# Clean up resources on shutdown
print("🛑 Server shutting down...")
# Create the MCP server with lifespan support
mcp = FastMCP(
"{{cookiecutter.project_name}}", # Server name
lifespan=app_lifespan, # Lifecycle manager
dependencies=["mcp>=1.0"], # Required dependencies
)
# Import all tools, resources, and prompts
# These imports must come after the MCP server is initialized
from .tools.sample_tools import *
from .resources.sample_resources import *
from .prompts.sample_prompts import *
# Make the server instance accessible to other modules
server = mcp
if __name__ == "__main__":
# When executed directly, run the server
mcp.run()
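Tools and resources can reach the `AppContext` yielded by `app_lifespan` through the injected `Context`. A small sketch (the tool name `config_info` is illustrative):

```python
from mcp.server.fastmcp import Context

@mcp.tool()
def config_info(ctx: Context) -> str:
    """Return the server name stored in the lifespan config (illustrative example)."""
    app_ctx: AppContext = ctx.request_context.lifespan_context
    return f"Running as: {app_ctx.config['server']['name']}"
```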
{{cookiecutter.module_name}}/config.py
"""
Configuration management for the MCP server.
Handles loading configuration from environment variables and/or config files.
"""
import os
import json
from pathlib import Path
from typing import Dict, Any, Optional
def load_config(config_path: Optional[str] = None) -> Dict[str, Any]:
"""
Load configuration from environment variables and/or config file.
Environment variables with the prefix MCP_ are automatically included
in the configuration (with the prefix removed and name lowercased).
Args:
config_path: Optional path to a JSON config file
Returns:
A dictionary containing configuration values
"""
# Start with empty config
config = {
"server": {
"name": "{{cookiecutter.project_name}}",
"version": "{{cookiecutter.version}}"
}
}
# Load from file if provided
if config_path:
config_file = Path(config_path)
if config_file.exists():
try:
with open(config_file, 'r') as f:
file_config = json.load(f)
# Deep merge configs
_deep_merge(config, file_config)
except Exception as e:
print(f"Warning: Failed to load config file: {e}")
# Also check for config file path from environment
env_config_path = os.environ.get("MCP_CONFIG_FILE")
if env_config_path and env_config_path != config_path:
try:
with open(env_config_path, 'r') as f:
file_config = json.load(f)
# Deep merge configs
_deep_merge(config, file_config)
except Exception as e:
print(f"Warning: Failed to load config file from environment: {e}")
# Override with environment variables (convert MCP_* to config entries)
env_config = {}
for key, value in os.environ.items():
if key.startswith('MCP_') and key != "MCP_CONFIG_FILE":
# Remove MCP_ prefix, convert to lowercase, and split by double underscore
config_key_parts = key[4:].lower().split('__')
# Convert to nested dictionaries
current = env_config
for part in config_key_parts[:-1]:
if part not in current:
current[part] = {}
current = current[part]
# Try to convert value to appropriate type
value = _convert_value(value)
# Set the value
current[config_key_parts[-1]] = value
# Merge environment config
_deep_merge(config, env_config)
return config
def _convert_value(value: str) -> Any:
"""
Try to convert a string value to an appropriate type.
Args:
value: The string value to convert
Returns:
The converted value
"""
# Handle booleans
if value.lower() in ('true', 'yes', '1'):
return True
if value.lower() in ('false', 'no', '0'):
return False
# Handle null
if value.lower() in ('null', 'none'):
return None
# Handle numbers
try:
# Try as int
return int(value)
except ValueError:
try:
# Try as float
return float(value)
except ValueError:
# Keep as string
return value
def _deep_merge(target: Dict[str, Any], source: Dict[str, Any]) -> None:
"""
Deep merge two dictionaries.
Args:
target: The target dictionary to merge into
source: The source dictionary to merge from
"""
for key, value in source.items():
if key in target and isinstance(target[key], dict) and isinstance(value, dict):
# Recursively merge dictionaries
_deep_merge(target[key], value)
else:
# Otherwise, simply overwrite the value
target[key] = value
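As a quick illustration of the environment-variable mapping (the values below are arbitrary): double underscores become nested keys, and values are coerced by `_convert_value`.

```python
import os

from {{cookiecutter.module_name}}.config import load_config

os.environ["MCP_DATABASE__HOST"] = "localhost"
os.environ["MCP_DATABASE__PORT"] = "5432"
os.environ["MCP_DEBUG"] = "true"

config = load_config()
assert config["database"] == {"host": "localhost", "port": 5432}  # "5432" -> int
assert config["debug"] is True                                    # "true" -> bool
```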
{{cookiecutter.module_name}}/tools/__init__.py
"""MCP tools package."""
{{cookiecutter.module_name}}/tools/sample_tools.py
"""
Sample MCP tools implementation.
This file contains example tool implementations to demonstrate different patterns.
"""
from .. import server
from ..server import mcp, Context
import asyncio
@mcp.tool()
def echo(message: str) -> str:
"""
Echo a message back.
This is a simple example tool that echoes a message back to the caller.
Args:
message: The message to echo back
Returns:
The same message that was provided
"""
return f"Echo: {message}"
@mcp.tool()
async def calculate(a: float, b: float, operation: str = "add") -> float:
"""
Perform a calculation on two numbers.
This is an example of a tool that performs calculations with multiple parameters
and a default value.
Args:
a: First number
b: Second number
operation: The operation to perform (add, subtract, multiply, divide)
Returns:
The result of the calculation
"""
if operation == "add":
return a + b
elif operation == "subtract":
return a - b
elif operation == "multiply":
return a * b
elif operation == "divide":
if b == 0:
raise ValueError("Cannot divide by zero")
return a / b
else:
raise ValueError(f"Unknown operation: {operation}")
@mcp.tool()
async def long_task(iterations: int, ctx: Context) -> str:
"""
A long-running task that reports progress.
This example demonstrates how to use the Context object to report progress
during a long-running operation.
Args:
iterations: Number of iterations to perform
ctx: The Context object (automatically injected)
Returns:
A completion message
"""
# Log the start of the operation
ctx.info(f"Starting long task with {iterations} iterations")
for i in range(iterations):
# Log progress for debugging
ctx.debug(f"Processing iteration {i+1}/{iterations}")
# Report progress to the client
await ctx.report_progress(
i,
iterations,
message=f"Processing iteration {i+1}/{iterations}"
)
# Simulate work
await asyncio.sleep(0.1)
# Log completion
ctx.info("Long task completed")
return f"Completed {iterations} iterations"
@mcp.tool()
async def fetch_data(url: str, ctx: Context) -> str:
"""
Fetch data from a URL.
This example demonstrates how to make HTTP requests and handle errors.
Args:
url: The URL to fetch data from
ctx: The Context object (automatically injected)
Returns:
The fetched data or an error message
"""
import httpx
try:
# Log the request
ctx.info(f"Fetching data from {url}")
# Make the HTTP request
async with httpx.AsyncClient() as client:
response = await client.get(url, timeout=10.0)
response.raise_for_status()
# Return the response text
return response.text
except httpx.RequestError as e:
# Handle connection errors
error_msg = f"Connection error: {str(e)}"
        await ctx.error(error_msg)
return error_msg
except httpx.HTTPStatusError as e:
# Handle HTTP errors
error_msg = f"HTTP error {e.response.status_code}: {e.response.reason_phrase}"
        await ctx.error(error_msg)
return error_msg
except Exception as e:
# Handle unexpected errors
error_msg = f"Unexpected error: {str(e)}"
        await ctx.error(error_msg)
return error_msg
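Because FastMCP's `@mcp.tool()` decorator returns the underlying function, these sample tools can be unit-tested by calling them directly (the async one via `asyncio.run`, avoiding extra test plugins). A minimal sketch for the `make test` target; `tests/test_sample_tools.py` is a hypothetical file name:

```python
import asyncio

import pytest

from {{cookiecutter.module_name}}.tools.sample_tools import calculate, echo


def test_echo() -> None:
    assert echo("hello") == "Echo: hello"


def test_calculate_divide() -> None:
    assert asyncio.run(calculate(10, 4, operation="divide")) == 2.5


def test_calculate_rejects_unknown_operation() -> None:
    with pytest.raises(ValueError):
        asyncio.run(calculate(1, 2, operation="modulo"))
```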
{{cookiecutter.module_name}}/resources/__init__.py
"""MCP resources package."""
{{cookiecutter.module_name}}/resources/sample_resources.py
"""
Sample MCP resources implementation.
This file contains example resource implementations to demonstrate different patterns.
"""
from .. import server
from ..server import mcp
import json
import datetime
@mcp.resource("static://example")
def static_resource() -> str:
"""
An example static resource.
This demonstrates a basic static resource that always returns the same content.
Returns:
The static content
"""
return """
# Example Static Resource
This is an example of a static resource that returns the same content every time.
* It can contain Markdown
* With lists
* And other formatting
```python
# Even code blocks
def hello():
return "world"
```
"""
@mcp.resource("dynamic://{parameter}")
def dynamic_resource(parameter: str) -> str:
"""
An example dynamic resource with a parameter.
This demonstrates how to create a parameterized resource with a dynamic URI.
Args:
parameter: A parameter extracted from the URI
Returns:
Content based on the parameter
"""
return f"""
# Dynamic Resource: {parameter}
This resource was generated with the parameter: **{parameter}**
Current time: {datetime.datetime.now().isoformat()}
"""
@mcp.resource("config://{section}")
def config_resource(section: str) -> str:
"""
An example resource that uses the app context.
This demonstrates how to access the app context in a resource.
Args:
section: The configuration section to return
Returns:
The configuration section content
"""
    # Access the application context created by app_lifespan
    ctx = mcp.get_context()
    config = ctx.request_context.lifespan_context.config
# Check if the section exists
if section in config:
return f"""
# Configuration: {section}
```json
{json.dumps(config[section], indent=2)}
```
"""
else:
return f"""
# Configuration: {section}
Section not found. Available sections:
{', '.join(config.keys()) if config else 'No configuration sections available'}
"""
@mcp.resource("file://{path}.md")
def file_resource(path: str) -> str:
"""
An example resource that reads from the filesystem.
This demonstrates how to create a resource that reads from the filesystem.
Note: In production, you should validate the path and restrict access.
Args:
path: The file path
Returns:
The file content
"""
import os
from pathlib import Path
# For security, restrict to a subdirectory
base_dir = os.environ.get("RESOURCE_DIR", "resources")
file_path = Path(base_dir) / f"{path}.md"
# Validate the path (prevent directory traversal)
if not file_path.is_file() or ".." in path:
return f"File not found: {path}.md"
# Read and return the file content
try:
with open(file_path, "r", encoding="utf-8") as f:
return f.read()
except Exception as e:
return f"Error reading file: {str(e)}"
{{cookiecutter.module_name}}/prompts/__init__.py
"""MCP prompts package."""
{{cookiecutter.module_name}}/prompts/sample_prompts.py
"""
Sample MCP prompts implementation.
This file contains example prompt templates to demonstrate different patterns.
"""
from .. import server
from ..server import mcp
from mcp.server.fastmcp.prompts import base
@mcp.prompt()
def simple_prompt(query: str) -> str:
"""
A simple text prompt.
This demonstrates the simplest form of prompt that returns a string template.
Args:
query: The user's query or input
Returns:
A formatted prompt string
"""
return f"""
Please provide a detailed answer to the following question:
{query}
Take your time to think step by step and provide a comprehensive response.
"""
@mcp.prompt()
def structured_prompt(code: str, language: str = "python") -> list[base.Message]:
"""
A more structured prompt using Message objects.
This demonstrates how to create a structured conversation prompt with
multiple messages in a sequence.
Args:
code: The code to review
language: The programming language of the code
Returns:
A list of prompt messages
"""
return [
base.UserMessage(f"I need help reviewing this {language} code:"),
base.UserMessage(f"```{language}\n{code}\n```"),
base.AssistantMessage("I'll analyze this code for you. What specific aspects would you like me to focus on?"),
base.UserMessage("Please focus on code quality, potential bugs, and performance issues.")
]
@mcp.prompt()
def data_analysis_prompt(data: str, objective: str) -> list[base.Message]:
"""
A prompt for data analysis tasks.
This demonstrates a more complex prompt for data analysis.
Args:
data: The data to analyze
objective: The analysis objective
Returns:
A list of prompt messages
"""
return [
base.UserMessage("I need help analyzing some data."),
base.UserMessage(f"Objective: {objective}"),
base.UserMessage("Here's the data:"),
base.UserMessage(data),
base.UserMessage("Please analyze this data and provide insights that address my objective.")
]
@mcp.prompt()
def image_analysis_prompt(image_description: str, analysis_type: str = "general") -> str:
"""
A prompt for image analysis.
This demonstrates a prompt that could be used with image data.
Args:
image_description: A description of the image
analysis_type: The type of analysis to perform
Returns:
A formatted prompt string
"""
analysis_instructions = {
"general": "Provide a general description and analysis of what you see in the image.",
"technical": "Provide a technical analysis of the image, focusing on composition, lighting, and techniques used.",
"content": "Analyze the content of the image, identifying objects, people, and activities.",
"sentiment": "Analyze the mood and emotional impact of the image."
}
instruction = analysis_instructions.get(
analysis_type,
analysis_instructions["general"]
)
return f"""
I'm showing you an image with the following description:
{image_description}
{instruction}
"""
Usage Instructions
To use this cookiecutter template:
1. Install Cookiecutter:
   pip install cookiecutter
2. Create a new project, either from a local copy of this template or from GitHub:
   cookiecutter path/to/template/directory
   # or, if hosted on GitHub:
   cookiecutter gh:username/mcp-server-template
3. Answer the prompts to customize your project.
4. Set up your environment:
   cd your-project-name
   make setup
   source .venv/bin/activate  # On Unix/macOS
   # or
   .venv\Scripts\activate     # On Windows
5. Run in development mode:
   make dev
6. Install in Claude Desktop:
   make install
Enjoy building MCP servers!