42crunch MCP Server
A Model Context Protocol (MCP) server that enables AI assistants to interact with the 42crunch API security platform. This server provides tools to search collections, retrieve API definitions, and access security assessments.
Features
- List Collections: Search and retrieve API collections from 42crunch
- Get Collection APIs: Retrieve all APIs within a specific collection
- Get API Details: Access detailed API information including OpenAPI definitions, assessments, and scan results
- Web UI: Streamlit-based chat interface with LangChain for natural language interaction
Prerequisites
- Python 3.11 or higher
- A 42crunch account and API token
- (Optional) LLM API key for UI (OpenAI, Claude, or Gemini)
Quick Start
1. Installation
# Clone or navigate to the project
cd mcp
# Install server dependencies
pip install -r requirements.txt
# Install UI dependencies
cd ui
pip install -r requirements.txt
cd ..
2. Configuration
Create a .env file in the project root:
# Required: 42crunch API token
42C_TOKEN=your_42crunch_api_token_here
Get your API token from the 42crunch platform.
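To illustrate the resolution order (the `.env` file first, then the process environment), here is a minimal sketch of such a loader. The helper `load_env_token` is hypothetical and for illustration only; the server's actual loading lives in `src/config.py`.

```python
import os

def load_env_token(path=".env", var="42C_TOKEN"):
    """Return the token from a .env-style file, falling back to the
    process environment. Hypothetical helper; src/config.py is the
    real configuration entry point."""
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                # Skip blanks and comments; take the first KEY=VALUE match.
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    if key.strip() == var:
                        return value.strip().strip('"')
    except FileNotFoundError:
        pass
    return os.environ.get(var)
```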
3. Start the MCP Server
Option A: Using the startup script (Recommended)
./.ci/start_mcp_server.sh
Option B: Manual start
python http_main.py
The server will start on http://localhost:8000 by default.
Verify server is running:
curl http://localhost:8000/health
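If you prefer to gate a script on server readiness, a standard-library poller can wrap the same health check. This is a sketch only; it assumes the `/health` endpoint returns a JSON body on success.

```python
import json
import time
import urllib.request

def wait_for_server(url="http://localhost:8000", timeout=10.0):
    """Poll GET /health until the server answers or the timeout expires.
    Sketch: assumes /health returns JSON once the server is up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url + "/health", timeout=2) as resp:
                return json.load(resp)
        except OSError:
            time.sleep(0.5)  # server not up yet; retry
    raise TimeoutError(f"server at {url} did not become healthy")
```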
4. Start the UI
Option A: From project root
cd ui
streamlit run app.py
Option B: From the project root, using the path directly
streamlit run ui/app.py
The UI will open in your browser at http://localhost:8501.
5. Configure UI API Key
The UI needs an LLM API key (choose one provider):
OpenAI:
# Option 1: Add to ~/.ai_tokens
echo "OPENAI_API_KEY=your_key" >> ~/.ai_tokens
# Option 2: Environment variable
export OPENAI_API_KEY=your_key
Claude (Anthropic):
echo "ANTHROPIC_API_KEY=your_key" >> ~/.ai_tokens
Gemini (Google):
echo "GOOGLE_API_KEY=your_key" >> ~/.ai_tokens
Or enter the API key directly in the UI sidebar.
Complete Setup Guide
Installation
Running the MCP Server
The server can run in two modes:
1. Stdio Mode (Default - for MCP clients)
Run the server directly in the foreground:
python main.py
Or use the module directly:
python -m src.server
2. HTTP Mode (for web/remote clients)
Start the HTTP server:
# Default (port 8000)
python http_main.py
# Or use the startup script
./.ci/start_mcp_server.sh
Custom host/port:
python http_main.py --host 0.0.0.0 --port 8000
Development mode (auto-reload):
python http_main.py --reload
HTTP Server Endpoints:
- POST /jsonrpc - JSON-RPC 2.0 endpoint
- GET /health - Health check
- GET /tools - List available tools
- GET /docs - Interactive API documentation (Swagger UI)
- GET /redoc - Alternative API documentation (ReDoc)
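Calling the JSON-RPC endpoint from Python needs only the standard library. The sketch below builds a `tools/call` request body matching the example output shown later in the Testing section; the helper names are illustrative.

```python
import json
import urllib.request

def jsonrpc_payload(tool, arguments, req_id=1):
    """Build a JSON-RPC 2.0 tools/call request body."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

def call_tool(base_url, tool, arguments):
    """POST one tool call to the /jsonrpc endpoint and decode the reply."""
    data = json.dumps(jsonrpc_payload(tool, arguments)).encode()
    req = urllib.request.Request(
        base_url + "/jsonrpc",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a running server):
# call_tool("http://localhost:8000", "list_collections", {"page": 1, "per_page": 10})
```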
Running the UI
The UI is a Streamlit application that uses LangChain to interact with the MCP server.
Start the UI:
# From the ui directory
cd ui
streamlit run app.py
# Or directly from the project root
streamlit run ui/app.py
The UI will be available at http://localhost:8501.
UI Features:
- 🤖 Multi-provider LLM support (OpenAI, Claude, Gemini)
- 🛠️ Automatic tool calling based on user queries
- 💬 Chat interface with tool call visibility
- ⚙️ Configurable MCP server URL and API keys
UI Configuration:
- MCP Server URL: Default http://localhost:8000 (configurable in sidebar)
- LLM Provider: Choose from OpenAI, Claude, or Gemini
- API Keys: Set in ~/.ai_tokens, environment variables, or UI sidebar
See UI README for detailed UI documentation.
Background/Daemon Mode
Run the server as a daemon in the background:
# Using helper script (recommended)
./.ci/start_mcp_server.sh
# Or using command-line options
python http_main.py --daemon --pidfile ./42crunch-mcp.pid
Helper Scripts (in .ci/ directory):
- ./.ci/start_mcp_server.sh - Start HTTP server in background
- ./.ci/stop_mcp_server.sh - Stop running server
- ./.ci/status_mcp_server.sh - Check server status
Legacy Scripts (in scripts/ directory):
- scripts/start_daemon.sh - Start stdio server as daemon
- scripts/stop_daemon.sh - Stop running daemon
- scripts/status_daemon.sh - Check daemon status
Systemd Service (Linux)
For production deployments, use the provided systemd service file:
# Copy service file
sudo cp 42crunch-mcp.service /etc/systemd/system/
# Edit the service file to match your installation paths
sudo nano /etc/systemd/system/42crunch-mcp.service
# Reload systemd
sudo systemctl daemon-reload
# Enable and start service
sudo systemctl enable 42crunch-mcp
sudo systemctl start 42crunch-mcp
# Check status
sudo systemctl status 42crunch-mcp
# View logs
sudo journalctl -u 42crunch-mcp -f
Note: MCP servers typically communicate via stdio (stdin/stdout). When running as a daemon, ensure your MCP client is configured to connect via the appropriate transport mechanism (named pipes, sockets, or HTTP if implemented).
MCP Tools
The server exposes three tools:
1. list_collections
List all API collections with pagination support.
Parameters:
- page (optional, int): Page number (default: 1)
- per_page (optional, int): Items per page (default: 10)
- order (optional, str): Sort order (default: "default")
- sort (optional, str): Sort field (default: "default")
Example:
result = list_collections(page=1, per_page=20)
2. get_collection_apis
Get all APIs within a specific collection.
Parameters:
- collection_id (required, str): Collection UUID
- with_tags (optional, bool): Include tags in response (default: True)
Example:
result = get_collection_apis(
    collection_id="3dae40d4-0f8b-42f9-bc62-2a2c8a3189ac",
    with_tags=True
)
3. get_api_details
Get detailed information about a specific API.
Parameters:
- api_id (required, str): API UUID
- branch (optional, str): Branch name (default: "main")
- include_definition (optional, bool): Include OpenAPI definition (default: True)
- include_assessment (optional, bool): Include assessment data (default: True)
- include_scan (optional, bool): Include scan results (default: True)
Example:
result = get_api_details(
    api_id="75ec1f35-8261-402f-8240-1a29fbcb7179",
    branch="main",
    include_definition=True,
    include_assessment=True,
    include_scan=True
)
Project Structure
mcp/
├── .env                 # Environment variables (42C_TOKEN)
├── requirements.txt     # Python dependencies
├── README.md            # This file
├── main.py              # Stdio server entry point
├── http_main.py         # HTTP server entry point
├── ui/
│   └── app.py           # Streamlit UI
├── src/
│   ├── __init__.py
│   ├── server.py        # Main MCP server implementation
│   ├── client.py        # 42crunch API client
│   └── config.py        # Configuration management
└── tests/
    ├── __init__.py
    ├── test_server.py   # Server tests
    └── test_client.py   # API client tests
Configuration
The server uses environment variables for configuration:
42C_TOKEN: Your 42crunch API token (required)
The token is automatically loaded from the .env file or environment variables.
Error Handling
The server handles various error scenarios:
- Authentication failures: Returns error message if token is invalid
- Rate limiting: Handles 429 responses appropriately
- Invalid IDs: Validates collection and API IDs before making requests
- Network errors: Provides clear error messages for connection issues
All tools return a response dictionary with:
- success: Boolean indicating if the operation succeeded
- data: Response data (if successful)
- error: Error message (if failed)
- error_type: Type of error (if failed)
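Given that contract, client code can unwrap every tool response the same way. A minimal sketch (the helper name is illustrative):

```python
def unwrap_tool_response(response):
    """Return the data payload from a tool response, or raise with the
    reported error type and message, following the success/data/error/
    error_type contract described above."""
    if response.get("success"):
        return response.get("data")
    error_type = response.get("error_type", "Error")
    raise RuntimeError(f"{error_type}: {response.get('error', 'unknown error')}")
```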
Testing
Unit Tests
Run unit tests using pytest:
pytest tests/
For verbose output:
pytest tests/ -v
Integration Tests
Integration tests assume the servers are running. Start them first:
Start servers:
# Terminal 1: Start HTTP server
python http_main.py
# Terminal 2: Start MCP stdio server (for stdio tests)
python main.py
Run tests:
# Run all unit tests
pytest tests/unit/ -v
# Run all integration tests
pytest tests/integration/ -v
# Test HTTP server only
pytest tests/integration/test_http.py -v
# Test MCP stdio server only
pytest tests/integration/test_mcp.py -v
# Test both servers (comparison tests)
pytest tests/integration/test_combined.py -v
# Run all tests
pytest tests/ -v
# Run with specific options
pytest tests/integration/test_http.py --api-id <uuid> -v
pytest tests/integration/test_mcp.py --skip-collection-tests -v
Integration test options:
- --skip-collection-tests - Skip collection-related tests
- --skip-api-tests - Skip API-related tests
- --api-id <uuid> - Provide API ID for testing get_api_details
Integration Tests with Test Client
Use the provided test client to test the MCP server end-to-end:
Full test suite:
python test_client.py
Test with specific IDs:
# Test with a collection ID
python test_client.py --collection-id <collection-uuid>
# Test with an API ID
python test_client.py --api-id <api-uuid>
# Test with both
python test_client.py --collection-id <uuid> --api-id <uuid>
Simple quick test:
python test_simple.py
Test HTTP server:
# First, start the HTTP server in another terminal:
python http_main.py
# Then in another terminal, run the HTTP test client:
python tests/clients/test_http_client.py
# Or test with specific IDs:
python tests/clients/test_http_client.py --collection-id <uuid> --api-id <uuid>
Test MCP stdio server:
# Run the stdio test client (starts server automatically):
python tests/clients/test_client.py
# Or test with specific IDs:
python tests/clients/test_client.py --collection-id <uuid> --api-id <uuid>
The test clients will:
- Start/connect to the MCP server
- Send JSON-RPC requests (via stdio or HTTP)
- Read responses
- Validate the responses
- Report test results
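The stdio round trip above can be sketched in a few lines. This is illustrative only: it assumes newline-delimited JSON framing, whereas the actual test client may use the MCP transport's Content-Length framing.

```python
import json
import subprocess

def stdio_roundtrip(request, cmd=("python", "main.py")):
    """Start a server process, write one JSON-RPC request line to its
    stdin, and read one response line from its stdout. Sketch only;
    framing is assumed to be newline-delimited JSON."""
    proc = subprocess.Popen(
        cmd,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    try:
        proc.stdin.write(json.dumps(request) + "\n")
        proc.stdin.flush()
        return json.loads(proc.stdout.readline())
    finally:
        proc.terminate()
        proc.wait()
```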
Example output:
============================================================
42crunch MCP Server Test Client
============================================================
Starting MCP server: python main.py
Server started (PID: 12345)
============================================================
TEST: list_collections
============================================================
📤 Request: {
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_collections",
    "arguments": {
      "page": 1,
      "per_page": 10
    }
  }
}
📥 Response: {
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "success": true,
    "data": {...}
  }
}
✅ list_collections succeeded
Development
Adding New Tools
To add a new MCP tool:
1. Add the corresponding method to src/client.py
2. Create a tool function in src/server.py using the @mcp.tool() decorator
3. Wire up the client method to the tool
4. Add tests in tests/test_server.py
Code Style
This project follows PEP 8 style guidelines. Consider using a formatter like black or ruff for consistent code style.
License
[Add your license here]
Support
For issues related to:
- 42crunch API: Contact 42crunch support
- MCP Server: Open an issue in this repository