FastApply MCP Server
A Model Context Protocol server that provides AI-powered code editing capabilities through FastApply integration.
Overview
FastApply MCP Server enables intelligent code editing by connecting MCP-compatible clients to FastApply language models. The server provides two core tools for applying code changes with AI assistance, featuring automatic backup management and comprehensive validation.
Features
- AI-guided code editing through FastApply models
- Dry-run preview mode for safe change validation
- Automatic backup system with environment-based control
- Atomic file operations with optimistic concurrency
- Comprehensive input validation and security checks
- Support for multiple FastApply-compatible backends
Installation
Requirements
- Python 3.13 or higher
- FastApply-compatible server (LM Studio, Ollama, or custom OpenAI-compatible endpoint)
Setup
Using uvx (Recommended)
Run directly without installation:
uvx fastapply-mcp
Manual Installation
Clone the repository and install dependencies:
git clone https://github.com/your-org/fastapply-mcp.git
cd fastapply-mcp
# Using uv
uv sync
source .venv/bin/activate
uv pip install -e .
# Or using pip
pip install -e .
Create a .env file with your configuration:
cp .env.example .env
Configuration
Configure the server through environment variables in your .env file:
# FastApply Server Configuration
FAST_APPLY_URL=http://localhost:1234/v1
FAST_APPLY_MODEL=fastapply-1.5b
FAST_APPLY_TIMEOUT=300.0
FAST_APPLY_MAX_TOKENS=8000
FAST_APPLY_TEMPERATURE=0.05
# Security Settings
MAX_FILE_SIZE=10485760
# Backup Control (default: disabled)
FAST_APPLY_AUTO_BACKUP=False
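As a rough illustration of how these variables map to runtime settings, the sketch below reads each documented variable with its documented default. The function name and the boolean parsing of FAST_APPLY_AUTO_BACKUP are assumptions for illustration, not the server's actual implementation:

```python
import os

def load_config() -> dict:
    """Read FastApply settings from the environment, falling back to the documented defaults."""
    return {
        "url": os.environ.get("FAST_APPLY_URL", "http://localhost:1234/v1"),
        "model": os.environ.get("FAST_APPLY_MODEL", "fastapply-1.5b"),
        "timeout": float(os.environ.get("FAST_APPLY_TIMEOUT", "300.0")),
        "max_tokens": int(os.environ.get("FAST_APPLY_MAX_TOKENS", "8000")),
        "temperature": float(os.environ.get("FAST_APPLY_TEMPERATURE", "0.05")),
        "max_file_size": int(os.environ.get("MAX_FILE_SIZE", "10485760")),
        # Treat common truthy strings as True; anything else stays disabled.
        "auto_backup": os.environ.get("FAST_APPLY_AUTO_BACKUP", "False").lower() in ("1", "true", "yes"),
    }
```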
Backup System
The automatic backup feature is disabled by default. To enable automatic backups before file modifications:
FAST_APPLY_AUTO_BACKUP=True
When enabled, the server creates timestamped backups alongside the original file, named with a .bak_{timestamp} suffix, before applying changes.
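A minimal sketch of this backup scheme is shown below. The exact timestamp format is an assumption (a Unix epoch is used here for illustration); the actual server may format it differently:

```python
import shutil
import time
from pathlib import Path

def create_backup(target: Path) -> Path:
    """Copy the target file to a sibling '<name>.bak_<timestamp>' file and return the backup path."""
    backup = target.with_name(f"{target.name}.bak_{int(time.time())}")
    # copy2 preserves metadata (mtime, permissions) along with the contents.
    shutil.copy2(target, backup)
    return backup
```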
MCP Integration
Claude Desktop
Add the server to your Claude Desktop configuration file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Using uvx (Recommended)
{
"mcpServers": {
"fastapply": {
"command": "uvx",
"args": ["fastapply-mcp"],
"env": {
"FAST_APPLY_URL": "http://localhost:1234/v1",
"FAST_APPLY_MODEL": "fastapply-1.5b"
}
}
}
}
Manual Installation
{
"mcpServers": {
"fastapply": {
"command": "python",
"args": ["/path/to/fastapply-mcp/src/fastapply/main.py"],
"env": {
"FAST_APPLY_URL": "http://localhost:1234/v1",
"FAST_APPLY_MODEL": "fastapply-1.5b"
}
}
}
}
The server operates on the current working directory where the MCP client is running, similar to other MCP tools.
Other MCP Clients
The server implements the standard MCP protocol and works with any compatible client. Refer to your client's documentation for integration instructions.
Available Tools
edit_file
Applies AI-guided code edits to a target file with comprehensive validation and safety checks.
Parameters:
- target_file (required): Path to the file to edit
- instructions (required): Natural language description of desired changes
- code_edit (required): Code snippet or edit instructions
- force (optional): Override safety checks and optimistic concurrency
- output_format (optional): Response format, either "text" or "json"
Features:
- Atomic file operations with rollback capability
- SHA-256 content verification for optimistic concurrency
- Automatic syntax validation for supported languages
- Optional automatic backup creation
- Unified diff generation for change visualization
Example:
{
"target_file": "src/utils.py",
"instructions": "Add error handling to the parse_config function",
"code_edit": "def parse_config(path):\n try:\n with open(path) as f:\n return json.load(f)\n except FileNotFoundError:\n raise ConfigError(f'Config file not found: {path}')\n except json.JSONDecodeError as e:\n raise ConfigError(f'Invalid JSON in config: {e}')"
}
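The combination of SHA-256 optimistic concurrency and atomic replacement described above can be sketched as follows. This is an illustrative pattern, not the server's actual code; the function names and the RuntimeError message are assumptions:

```python
import hashlib
import os
import tempfile
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Hash the file's current contents for optimistic-concurrency checks."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def atomic_write(path: Path, new_content: str, expected_sha: str) -> None:
    """Refuse to write if the file changed since it was read, then replace it atomically."""
    if file_sha256(path) != expected_sha:
        raise RuntimeError("File changed on disk since it was read; re-read before editing")
    # Write to a temp file in the same directory so os.replace() is an atomic rename.
    fd, tmp = tempfile.mkstemp(dir=path.parent, prefix=path.name)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(new_content)
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)  # roll back: discard the partial temp file on any failure
        raise
```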
dry_run_edit_file
Previews code edits without modifying the target file, allowing safe validation of changes.
Parameters:
- target_file (required): Path to the file to preview
- instruction (optional): Natural language description of desired changes
- code_edit (required): Code snippet or edit instructions
- output_format (optional): Response format, either "text" or "json"
Features:
- Complete edit preview with unified diff
- Validation results without file modification
- First 20 lines of merged code preview
- Safety information and warnings
Example:
{
"target_file": "src/utils.py",
"code_edit": "def parse_config(path):\n try:\n with open(path) as f:\n return json.load(f)\n except Exception as e:\n raise ConfigError(f'Failed to parse config: {e}')"
}
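A dry-run response of this shape, a unified diff plus a truncated preview of the merged code, can be produced with Python's standard difflib. This is a sketch of the concept; the field names in the returned dict are illustrative assumptions:

```python
import difflib

def preview_edit(original: str, merged: str, filename: str) -> dict:
    """Build a dry-run preview: a unified diff plus the first 20 lines of the merged result."""
    diff = "".join(difflib.unified_diff(
        original.splitlines(keepends=True),
        merged.splitlines(keepends=True),
        fromfile=f"a/{filename}",
        tofile=f"b/{filename}",
    ))
    return {"diff": diff, "preview": merged.splitlines()[:20]}
```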
FastApply Backend Options
The server supports multiple FastApply-compatible backends:
LM Studio
Download and run FastApply models through LM Studio's GUI:
- Install LM Studio from https://lmstudio.ai
- Download a FastApply-compatible model
- Start the local server (default: http://localhost:1234)
- Configure FAST_APPLY_URL in your environment
Ollama
Run FastApply models through Ollama's CLI:
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Pull a FastApply model
ollama pull fastapply-1.5b
# Start the server
ollama serve
Configure FAST_APPLY_URL to point to your Ollama instance (Ollama exposes an OpenAI-compatible API, typically at http://localhost:11434/v1).
Custom OpenAI-Compatible Servers
Any server implementing the OpenAI API specification can be used as a backend. Configure the appropriate URL and model identifier in your environment.
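To illustrate what "OpenAI-compatible" means in practice, the sketch below builds a standard chat-completions request against the configured base URL using only the standard library. This is a hypothetical helper for illustration; the server's actual request construction may differ:

```python
import json
import urllib.request

def build_request(url: str, model: str, prompt: str,
                  max_tokens: int = 8000, temperature: float = 0.05) -> urllib.request.Request:
    """Build a POST to {base_url}/chat/completions in the OpenAI chat-completions format."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return urllib.request.Request(
        f"{url.rstrip('/')}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending the request (for example with urllib.request.urlopen) returns the usual choices[0].message.content response body.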
Security
The server implements multiple security layers:
- Workspace Isolation: All file operations are confined to the current working directory
- Path Validation: Strict path resolution prevents directory traversal attacks
- File Size Limits: Configurable maximum file size prevents resource exhaustion
- Input Sanitization: Comprehensive validation of all user inputs
- Atomic Operations: File changes are atomic with automatic rollback on failure
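The workspace-isolation and path-validation layers above amount to resolving every user-supplied path and rejecting anything that escapes the workspace root. A minimal sketch of that check (an illustration, not the server's actual code):

```python
from pathlib import Path

def resolve_in_workspace(workspace: Path, user_path: str) -> Path:
    """Resolve a user-supplied path, rejecting traversal out of the workspace root."""
    workspace = workspace.resolve()
    # resolve() collapses any '..' components, so escapes become visible.
    candidate = (workspace / user_path).resolve()
    if not candidate.is_relative_to(workspace):
        raise PermissionError(f"Path escapes workspace: {user_path}")
    return candidate
```

Note that resolving before comparing also defeats absolute-path inputs, since an absolute user_path replaces the workspace prefix entirely and then fails the is_relative_to check.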
Development
Project Structure
fastapply-mcp/
├── src/
│ └── fastapply/
│ ├── __init__.py
│ └── main.py # Core server implementation
├── .env.example
├── pyproject.toml
└── README.md
Code Quality
The project uses standard Python tooling for code quality:
# Format code
ruff format .
# Lint code
ruff check .
# Type checking
mypy src/
Troubleshooting
Connection Issues
Verify your FastApply server is accessible:
curl http://localhost:1234/v1/models
Check the server logs for connection errors and verify your FAST_APPLY_URL configuration.
Permission Errors
Ensure the server process has appropriate file system permissions for the current working directory:
pwd
ls -la
Performance Issues
For large files or complex edits, consider:
- Increasing FAST_APPLY_TIMEOUT
- Adjusting FAST_APPLY_MAX_TOKENS
- Reducing FAST_APPLY_TEMPERATURE for more deterministic output
Contributing
Contributions are welcome. Please follow these guidelines:
- Fork the repository and create a feature branch
- Write tests for new functionality
- Ensure all tests pass and code meets quality standards
- Submit a pull request with a clear description of changes
License
MIT License - see LICENSE file for details.
Support
- GitHub Issues: Report bugs and request features
- Discussions: Ask questions and share ideas
- Documentation: Refer to inline code documentation for implementation details
Acknowledgments
This project integrates with FastApply models and implements the Model Context Protocol specification. Thanks to the MCP community and FastApply model developers for their foundational work.