foundry-mcp

Python 3.10+ · MIT License · MCP Compatible

Turn AI coding assistants into reliable software engineers with structured specs, progress tracking, and automated review.

Why foundry-mcp?

The problem: AI coding assistants are powerful but unreliable on complex tasks. They lose context mid-feature, skip steps without warning, and deliver inconsistent results across sessions.

The solution: foundry-mcp provides the scaffolding to break work into specs, track progress, and verify outputs—so your AI assistant delivers like a professional engineer.

  • No more lost context — Specs persist state across sessions so the AI picks up where it left off.
  • No more skipped steps — Task dependencies and blockers ensure nothing gets missed.
  • No more guessing progress — See exactly what's done, what's blocked, and what's next.
  • No more manual review — AI review validates implementation against spec requirements.

Key Features

  • Specs keep AI on track — Break complex work into phases and tasks the AI can complete without losing context.
  • Progress you can see — Track what's done, what's blocked, and what's next across multi-session work.
  • AI-powered review — LLM integration reviews specs, generates PR descriptions, and validates implementation.
  • Works with your tools — Runs as MCP server (Claude Code, Gemini CLI) or standalone CLI with JSON output.
  • Security built in — Workspace scoping, API key auth, rate limits, and audit logging ship by default.
  • Discovery-first — Capabilities declared in a manifest so clients negotiate features automatically.

Installation

Prerequisites

  • Python 3.10 or higher
  • macOS, Linux, or Windows
  • MCP-compatible client (e.g., Claude Code)

Install with uvx (recommended)

uvx foundry-mcp

Install with pip

pip install foundry-mcp

Install from source (development)

git clone https://github.com/tylerburleigh/foundry-mcp.git
cd foundry-mcp
pip install -e ".[test]"
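
To confirm the development install, you can run the test suite (covered in more detail under Testing below):

# Optional check, assuming the "[test]" extra installs pytest
pytest -q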

Quick Start

1. Install the claude-foundry plugin (from within Claude Code):

/plugin marketplace add foundry-works/claude-foundry
/plugin install foundry@claude-foundry

Restart Claude Code and trust the repository when prompted.

Note: The plugin automatically registers the MCP server using uvx — no separate installation needed.

2. Run setup by asking Claude:

Please run foundry-setup to configure the workspace.

3. Start building with a natural-language request:

I want to add user authentication with JWT tokens.

Claude creates a spec with phases, tasks, and verification steps. Ask it to implement the spec and it works through the tasks in dependency order.

How It Works

foundry-mcp is the MCP server that provides the underlying tools and APIs. The claude-foundry plugin provides the user-facing skills that orchestrate workflows.

You → Claude Code → claude-foundry plugin → foundry-mcp server
         │                  │                      │
         ▼                  ▼                      ▼
      Natural          Skills like            MCP tools for
      language         foundry-spec,          specs, tasks,
      requests         foundry-implement      reviews, etc.

Component        Role
foundry-mcp      MCP server + CLI providing spec/task/review tools
claude-foundry   Claude Code plugin providing skills and workflow

For most users, install both and interact through natural language. The plugin handles tool orchestration automatically.

Configuration

API Keys

foundry-mcp uses LLM providers for AI-powered features like spec review, consensus, and deep research. Set the API keys for providers you want to use:

# AI CLI tools (for AI review, consensus)
export CLAUDE_CODE_OAUTH_TOKEN="..."   # Get via: claude setup-token
export GEMINI_API_KEY="..."
export OPENAI_API_KEY="sk-..."
export CURSOR_API_KEY="key-..."

# Deep research providers (for /foundry-research deep workflow)
export TAVILY_API_KEY="..."
export PERPLEXITY_API_KEY="..."

TOML Configuration (Optional)

For advanced settings, copy the sample config to your project:

cp samples/foundry-mcp.toml ./foundry-mcp.toml
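
The sample file defines the available options. As a purely illustrative sketch of the format (the key names below are hypothetical, not the actual schema):

# Hypothetical keys for illustration only; see samples/foundry-mcp.toml for the real schema
[workspace]
specs_dir = "./specs"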

Advanced Usage

Direct MCP Configuration (without plugin)

For MCP clients other than Claude Code, or if you prefer manual configuration:

{
  "mcpServers": {
    "foundry-mcp": {
      "command": "uvx",
      "args": ["foundry-mcp"],
      "env": {
        "FOUNDRY_MCP_SPECS_DIR": "/path/to/specs"
      }
    }
  }
}

If you installed with pip instead, use this configuration:

{
  "mcpServers": {
    "foundry-mcp": {
      "command": "foundry-mcp",
      "env": {
        "FOUNDRY_MCP_SPECS_DIR": "/path/to/specs"
      }
    }
  }
}


CLI Usage

All MCP tools are also available via CLI with JSON output:

# Get next task to work on
python -m foundry_mcp.cli task next --specs-dir ./specs

# Validate a spec
python -m foundry_mcp.cli spec validate my-feature-001

# Create a new spec
python -m foundry_mcp.cli authoring create --name "my-feature" --template detailed
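
Because every command emits JSON, the output composes with standard tooling. For example, assuming jq is installed:

# Pretty-print the JSON response from the next-task command
python -m foundry_mcp.cli task next --specs-dir ./specs | jq .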

Launch as Standalone MCP Server

foundry-mcp

The server advertises its capabilities, feature flags, and response contract so MCP clients (Claude Code, Gemini CLI, etc.) can connect automatically.
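
If your specs live outside the current directory, the FOUNDRY_MCP_SPECS_DIR variable shown in the MCP configuration above can be set inline (assuming the standalone server reads the same variable):

# Point the standalone server at an explicit specs directory
FOUNDRY_MCP_SPECS_DIR=/path/to/specs foundry-mcp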

Documentation

User guides

Guide               Description
Quick Start         Get up and running in 5 minutes
Core Concepts       Understand specs, phases, and tasks
Workflow Guide      End-to-end development workflows
CLI Reference       Complete CLI command documentation
MCP Tool Reference  All MCP tools and their parameters
Configuration       Environment variables and TOML setup
Troubleshooting     Common issues and solutions

Concepts

Guide              Description
SDD Philosophy     Why spec-driven development matters
Response Envelope  Standardized response format
Spec Schema        Spec file structure and fields
LLM Configuration  Provider setup and fallbacks

Developer docs

Guide                Description
Dev Docs Index       Entry point for developer documentation
MCP Best Practices   Canonical implementation checklist
Response Schema      Standardized envelope reference
CLI Output Contract  JSON-first CLI expectations

Scope and Limitations

Best for:

  • Multi-step feature development with AI assistants
  • Teams wanting structured handoff between AI and human reviewers
  • Projects requiring audit trails and progress visibility

Not suited for:

  • Quick one-off code changes (use your AI assistant directly)
  • Non-software tasks (specs are code-focused)
  • Fully autonomous AI agents (foundry assumes human oversight)

Testing

pytest                                        # Full suite
pytest tests/integration/test_mcp_smoke.py    # MCP smoke tests
pytest tests/integration/test_mcp_tools.py    # Tool contract coverage

  • Regression tests keep MCP/CLI adapters aligned across surfaces.
  • Golden fixtures (tests/fixtures/golden) ensure response envelopes, error semantics, and pagination never regress.
  • Freshness checks run alongside core unit and integration suites.

Contributing

Contributions are welcome! Please read the MCP Best Practices before submitting PRs. All changes should keep specs, docs, code, and fixtures in sync.

License

MIT License — see LICENSE for details.


Built by Tyler Burleigh · Report an Issue · View on GitHub
