Fledge MCP Server
This is a Model Context Protocol (MCP) server that connects Fledge functionality to Cursor AI, allowing the AI to interact with Fledge instances via natural language commands.
Prerequisites
- Fledge installed locally or accessible via API (default: http://localhost:8081)
- Cursor AI installed
- Python 3.8+
Installation
- Clone this repository:
git clone https://github.com/Krupalp525/fledge-mcp.git
cd fledge-mcp
- Install the dependencies:
pip install -r requirements.txt
Running the Server
- Make sure Fledge is running:
fledge start
- Start the MCP server:
python mcp_server.py
For secure operation with API key authentication:
python secure_mcp_server.py
- Verify it's working by accessing the health endpoint:
curl http://localhost:8082/health
You should receive "Fledge MCP Server is running" as the response.
Connecting to Cursor
- In Cursor, go to Settings > MCP Servers
- Add a new server:
  - URL: http://localhost:8082/tools
  - Tools file: Upload the included tools.json or point to its local path
- For the secure server, configure the "X-API-Key" header with the value from the api_key.txt file that is generated when the secure server starts.
- Test it: Open Cursor's Composer (Ctrl+I), type "Check if Fledge API is reachable," and the AI should call the validate_api_connection tool.
Available Tools
Data Access and Management
- get_sensor_data: Fetch sensor data from Fledge with optional filtering by time range and limit
- list_sensors: List all sensors available in Fledge
- ingest_test_data: Ingest test data into Fledge, with optional batch count
Service Control
- get_service_status: Get the status of all Fledge services
- start_stop_service: Start or stop a Fledge service by type
- update_config: Update Fledge configuration parameters
Frontend Code Generation
- generate_ui_component: Generate React components for Fledge data visualization
- fetch_sample_frontend: Get sample frontend templates for different frameworks
- suggest_ui_improvements: Get AI-powered suggestions for improving UI code
Real-Time Data Streaming
- subscribe_to_sensor: Set up a subscription to sensor data updates
- get_latest_reading: Get the most recent reading from a specific sensor
Debugging and Validation
- validate_api_connection: Check if the Fledge API is reachable
- simulate_frontend_request: Test API requests with different methods and payloads
Documentation and Schema
- get_api_schema: Get information about available Fledge API endpoints
- list_plugins: List available Fledge plugins
Advanced AI-Assisted Features
- generate_mock_data: Generate realistic mock sensor data for testing
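Each of the tools above is invoked by POSTing a JSON body to the /tools endpoint that names the tool and, optionally, its parameters. A minimal Python client sketch for this pattern (the tool names and default URL come from this README; the helper names are illustrative):

```python
import json
import urllib.request

MCP_URL = "http://localhost:8082/tools"

def build_tool_request(name, parameters=None):
    """Build the JSON body for an MCP tool invocation."""
    body = {"name": name}
    if parameters:
        body["parameters"] = parameters
    return body

def call_tool(name, parameters=None, url=MCP_URL):
    """POST a tool invocation and return the decoded JSON response."""
    data = json.dumps(build_tool_request(name, parameters)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    # Requires a running MCP server on localhost:8082.
    print(call_tool("get_sensor_data", {"sensor_id": "temp1", "limit": 10}))
```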
Testing the API
You can test the server using the included test scripts:
# For standard server
python test_mcp.py
# For secure server with API key
python test_secure_mcp.py
Security Options
The secure server (secure_mcp_server.py) adds API key authentication:
- On first run, it generates an API key stored in api_key.txt
- All requests must include this key in the X-API-Key header
- Health check endpoint remains accessible without authentication
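Assuming the key file sits next to the client, a caller could read it and attach the header like this (the file name and header name come from this README; the helper functions are illustrative):

```python
import json
import urllib.request

def load_api_key(path="api_key.txt"):
    """Read the key the secure server wrote on first run."""
    with open(path) as f:
        return f.read().strip()

def make_headers(api_key):
    """Headers for a secure MCP request; /health needs no key."""
    return {"Content-Type": "application/json", "X-API-Key": api_key}

if __name__ == "__main__":
    # Requires a running secure server and an api_key.txt file.
    req = urllib.request.Request(
        "http://localhost:8082/tools",
        data=json.dumps({"name": "list_sensors"}).encode("utf-8"),
        headers=make_headers(load_api_key()),
    )
    print(json.loads(urllib.request.urlopen(req).read()))
```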
Example API Requests
# Validate API connection
curl -X POST -H "Content-Type: application/json" -d '{"name": "validate_api_connection"}' http://localhost:8082/tools
# Generate mock data
curl -X POST -H "Content-Type: application/json" -d '{"name": "generate_mock_data", "parameters": {"sensor_id": "temp1", "count": 5}}' http://localhost:8082/tools
# Generate React chart component
curl -X POST -H "Content-Type: application/json" -d '{"name": "generate_ui_component", "parameters": {"component_type": "chart", "sensor_id": "temp1"}}' http://localhost:8082/tools
# For secure server, add API key header
curl -X POST -H "Content-Type: application/json" -H "X-API-Key: YOUR_API_KEY" -d '{"name": "list_sensors"}' http://localhost:8082/tools
Extending the Server
To add more tools:
- Add the tool definition to tools.json
- Implement the tool handler in mcp_server.py and secure_mcp_server.py
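As a sketch, a hypothetical get_server_time tool would need a tools.json entry plus a handler. The dispatch-table style below is illustrative only, not the actual structure of mcp_server.py:

```python
import time

# tools.json would gain an entry along these lines (hypothetical):
# {"name": "get_server_time",
#  "description": "Return the server's current UTC time",
#  "parameters": {}}

def get_server_time(parameters):
    """Hypothetical tool handler: return the current UTC time."""
    return {"utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())}

# Simple name-to-handler dispatch (illustrative).
TOOL_HANDLERS = {"get_server_time": get_server_time}

def handle_tool(name, parameters=None):
    """Look up and run a tool handler by name."""
    handler = TOOL_HANDLERS.get(name)
    if handler is None:
        return {"error": f"Unknown tool: {name}"}
    return handler(parameters or {})
```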
Production Considerations
For production deployment:
- Use HTTPS
- Deploy behind a reverse proxy like Nginx
- Implement more robust authentication (JWT, OAuth)
- Add rate limiting
- Set up persistent data storage for subscriptions
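For example, rate limiting could sit in front of the /tools endpoint. A minimal in-process token-bucket sketch (not part of the server code; a production deployment would more likely use Nginx or middleware):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilled at `rate` per second."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        """Return True if a request may proceed, consuming one token."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```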
Deploying on Smithery.ai
The Fledge MCP Server can be deployed on Smithery.ai for enhanced scalability and availability. Follow these steps to deploy:
- Prerequisites:
  - Docker installed on your local machine
  - A Smithery.ai account
  - The Smithery CLI tool installed
- Build and Deploy:
# Build the Docker image
docker build -t fledge-mcp .
# Deploy to Smithery.ai
smithery deploy
- Configuration: The smithery.json file contains the configuration for your deployment:
  - WebSocket transport on port 8082
  - Configurable Fledge API URL
  - Tool definitions and parameters
  - Timeout settings
- Environment Variables: Set the following environment variables in your Smithery.ai dashboard:
  - FLEDGE_API_URL: Your Fledge API endpoint
  - API_KEY: Your secure API key (if using secure mode)
- Verification: After deployment, verify your server is running:
smithery status fledge-mcp
- Monitoring: Monitor your deployment through the Smithery.ai dashboard:
  - Real-time logs
  - Performance metrics
  - Error tracking
  - Resource usage
- Updating: To update your deployment:
# Build new image
docker build -t fledge-mcp .
# Deploy updates
smithery deploy --update
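Resolving those environment variables in server code, with the default API URL this README uses, could look like the following sketch (the load_config helper is illustrative, not the server's actual code):

```python
import os

def load_config(env=None):
    """Resolve settings from the environment, falling back to README defaults."""
    env = os.environ if env is None else env
    return {
        # Default matches the local Fledge address from the Prerequisites.
        "fledge_api_url": env.get("FLEDGE_API_URL", "http://localhost:8081"),
        # API_KEY only matters when running the secure server.
        "api_key": env.get("API_KEY"),
    }
```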
JSON-RPC Protocol Support
The server implements the Model Context Protocol (MCP) using JSON-RPC 2.0 over WebSocket. The following methods are supported:
- initialize
{
  "jsonrpc": "2.0",
  "method": "initialize",
  "params": {},
  "id": "1"
}
Response:
{
  "jsonrpc": "2.0",
  "result": {
    "serverInfo": {
      "name": "fledge-mcp",
      "version": "1.0.0",
      "description": "Fledge Model Context Protocol (MCP) Server",
      "vendor": "Fledge",
      "capabilities": {
        "tools": true,
        "streaming": true,
        "authentication": "api_key"
      }
    },
    "configSchema": {
      "type": "object",
      "properties": {
        "fledge_api_url": {
          "type": "string",
          "description": "Fledge API URL",
          "default": "http://localhost:8081/fledge"
        }
      }
    }
  },
  "id": "1"
}
- tools/list
{
  "jsonrpc": "2.0",
  "method": "tools/list",
  "params": {},
  "id": "2"
}
Response: Returns the list of available tools and their parameters.
- tools/call
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "get_sensor_data",
    "parameters": {
      "sensor_id": "temp1",
      "limit": 10
    }
  },
  "id": "3"
}
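The frames above can be built and parsed with a few lines of Python. The helpers below only construct and decode the JSON-RPC envelopes; actually sending them needs a WebSocket client (for example the third-party websockets package), which is omitted here:

```python
import json
from itertools import count

_ids = count(1)

def make_request(method, params=None):
    """Build a JSON-RPC 2.0 request frame with an auto-incrementing id."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params or {},
        "id": str(next(_ids)),
    })

def parse_response(raw):
    """Decode a response frame, raising on a JSON-RPC error object."""
    msg = json.loads(raw)
    if "error" in msg:
        raise RuntimeError(
            f"RPC error {msg['error']['code']}: {msg['error']['message']}")
    return msg.get("result")

# Example frames mirroring the README:
init_frame = make_request("initialize")
call_frame = make_request("tools/call",
                          {"name": "get_sensor_data",
                           "parameters": {"sensor_id": "temp1", "limit": 10}})
```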
Error Codes
The server follows standard JSON-RPC 2.0 error codes:
- -32700: Parse error
- -32600: Invalid Request
- -32601: Method not found
- -32602: Invalid params
- -32000: Server error
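A server-side helper mapping these codes to the standard error envelope might look like this (illustrative, not the server's actual code):

```python
import json

# Standard JSON-RPC 2.0 error codes listed in this README.
ERROR_MESSAGES = {
    -32700: "Parse error",
    -32600: "Invalid Request",
    -32601: "Method not found",
    -32602: "Invalid params",
    -32000: "Server error",
}

def error_response(code, request_id=None, detail=None):
    """Build a JSON-RPC 2.0 error response for the given code."""
    error = {"code": code, "message": ERROR_MESSAGES.get(code, "Server error")}
    if detail:
        error["data"] = detail
    return json.dumps({"jsonrpc": "2.0", "error": error, "id": request_id})
```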