Discover Awesome MCP Servers
Extend your agent with 23,495 capabilities via MCP servers.
- All (23,495)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
OSRS-STAT
A Model Context Protocol (MCP) server that provides real-time player statistics and ranking data for Old School RuneScape, supporting multiple game modes and player comparison.
Remote MCP Server
A Cloudflare Workers-based MCP server that enables tool integration with Claude AI through OAuth login, allowing users to extend Claude's capabilities with custom tools like mathematical operations.
Agent Interviews
doit-mcp-server
An MCP server for doit (pydoit).
Tanda Workforce MCP Server
Integrates Tanda Workforce API with AI assistants to manage employee schedules, timesheets, leave requests, clock in/out operations, and workforce analytics through natural language with OAuth2 authentication.
Hyperliquid MCP Server v2
A Model Context Protocol server for Hyperliquid with integrated dashboard
Example Next.js MCP Server
A drop-in Model Context Protocol server implementation for Next.js projects that enables AI tools, prompts, and resources integration using the Vercel MCP Adapter.
NetworkX Graph MCP Server
Enables creation and management of state/decision graphs using NetworkX, supporting directed graphs with conditional branches, edge ordering, path finding, and visualization through PNG exports or interactive web interface.
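As an illustrative sketch of the kind of state/decision graph described above, using NetworkX directly rather than this server's actual tool API (node names and the `condition`/`order` edge attributes are hypothetical):

```python
import networkx as nx

# A small directed decision graph; node and edge names are made up
# for illustration, not taken from the server.
g = nx.DiGraph()
g.add_edge("start", "check_auth", condition="always", order=0)
g.add_edge("check_auth", "login", condition="anonymous", order=1)
g.add_edge("check_auth", "dashboard", condition="authenticated", order=2)
g.add_edge("login", "dashboard", condition="success", order=0)

# Path finding between states
path = nx.shortest_path(g, "start", "dashboard")
print(path)  # ['start', 'check_auth', 'dashboard']
```

Conditional branches are modeled here as edge attributes on a `DiGraph`, which is also what makes edge ordering and path finding straightforward.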
FluentLab Funding Assistant
Provides access to FluentLab's funding database, enabling users to search for funding opportunities and retrieve document checklists required for specific funding programme applications.
Browser Control MCP
An MCP server paired with a Firefox extension that lets LLM clients control the user's browser, with support for tab management, history search, and content reading.
Pokémon VGC Damage Calculator MCP Server
An MCP-compliant server that enables AI agents to perform accurate Pokémon battle damage calculations using the Smogon calculator, supporting comprehensive input handling for Pokémon stats, moves, abilities, and field conditions.
MCP All-in-One Server
A versatile MCP server providing tools for arithmetic operations, n8n webhook integration, and access to a customer support playbook resource. It also includes specialized prompt templates for converting webinar transcripts into engaging blog posts.
mcpserver-ts
A basic MCP server template in TypeScript for serving quick mock data, with explanations to help you understand and adapt it:

```typescript
// Dependencies
import express, { Request, Response } from 'express';
import cors from 'cors'; // Optional, but useful for local development
import bodyParser from 'body-parser'; // For parsing request bodies

// Configuration
const port = process.env.PORT || 3000; // Use environment variable or default to 3000
const app = express();

// Middleware
app.use(cors()); // Enable CORS for all origins (for development - be careful in production)
app.use(bodyParser.json()); // Parse JSON request bodies

// Mock data (replace with your actual data)
const mockData = [
  { id: 1, name: 'Item 1', description: 'This is the first item.' },
  { id: 2, name: 'Item 2', description: 'This is the second item.' },
  { id: 3, name: 'Item 3', description: 'This is the third item.' },
];

// Routes
app.get('/items', (req: Request, res: Response) => {
  res.json(mockData);
});

app.get('/items/:id', (req: Request, res: Response) => {
  const id = parseInt(req.params.id); // Convert ID to number
  const item = mockData.find(item => item.id === id);
  if (item) {
    res.json(item);
  } else {
    res.status(404).json({ message: 'Item not found' });
  }
});

app.post('/items', (req: Request, res: Response) => {
  const newItem = {
    id: mockData.length + 1, // Simple ID generation (not suitable for production)
    ...req.body,
  };
  mockData.push(newItem);
  res.status(201).json(newItem); // 201 Created
});

app.put('/items/:id', (req: Request, res: Response) => {
  const id = parseInt(req.params.id);
  const itemIndex = mockData.findIndex(item => item.id === id);
  if (itemIndex !== -1) {
    mockData[itemIndex] = { ...mockData[itemIndex], ...req.body, id: id }; // Merge in request body
    res.json(mockData[itemIndex]);
  } else {
    res.status(404).json({ message: 'Item not found' });
  }
});

app.delete('/items/:id', (req: Request, res: Response) => {
  const id = parseInt(req.params.id);
  const itemIndex = mockData.findIndex(item => item.id === id);
  if (itemIndex !== -1) {
    mockData.splice(itemIndex, 1);
    res.status(204).send(); // 204 No Content (successful deletion)
  } else {
    res.status(404).json({ message: 'Item not found' });
  }
});

// Start the server
app.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});
```

**Explanation:**

1. **Dependencies:** `express` is the core web framework for Node.js. `cors` enables Cross-Origin Resource Sharing so a frontend running on a different port can reach the server; in production, restrict it to specific origins. `body-parser` parses incoming request bodies; `bodyParser.json()` handles JSON.
2. **Configuration:** `port` uses the `PORT` environment variable (useful for deployment) and defaults to 3000 if it isn't set. `app = express()` creates the Express application instance.
3. **Middleware:** `app.use(cors())` enables CORS for all origins (use with caution in production), and `app.use(bodyParser.json())` makes the data sent in the body of `POST`, `PUT`, and `PATCH` requests available on `req.body`.
4. **Mock data:** `mockData` is a plain array of objects. Replace it with your own data, or load it from a file (e.g. a JSON file) if you have a lot of it.
5. **Routes:** `GET /items` returns the whole array. `GET /items/:id` extracts the `id` route parameter, finds the matching item, and returns it, or a 404 (Not Found) if it doesn't exist. `POST /items` builds a new item from the request body, assigns it an ID, appends it, and returns it with 201 (Created). `PUT /items/:id` merges the request body into the existing item and returns it, or 404 if not found. `DELETE /items/:id` removes the item and returns 204 (No Content).
6. **Start the server:** `app.listen(port, ...)` starts listening on the configured port and logs a message to the console.

**How to use:**

1. Install Node.js and npm (Node Package Manager).
2. Create a new directory for your project.
3. Initialize the project:
   ```bash
   npm init -y
   ```
4. Install dependencies:
   ```bash
   npm install express cors body-parser typescript @types/express @types/node --save-dev
   ```
5. Create a `tsconfig.json` in the project root to configure the TypeScript compiler. A basic configuration:
   ```json
   {
     "compilerOptions": {
       "target": "es6",
       "module": "commonjs",
       "outDir": "./dist",
       "strict": true,
       "esModuleInterop": true,
       "skipLibCheck": true,
       "forceConsistentCasingInFileNames": true
     },
     "include": ["src/**/*"],
     "exclude": ["node_modules"]
   }
   ```
6. Create a directory named `src` and save the TypeScript code above as `src/index.ts`.
7. Compile the TypeScript into JavaScript files in the `dist` directory:
   ```bash
   npx tsc
   ```
8. Run the server:
   ```bash
   node dist/index.js
   ```
   Or use `ts-node` for direct execution without compiling first (useful for development):
   ```bash
   npm install -g ts-node
   ts-node src/index.ts
   ```
9. Test the API with `curl`, Postman, or a browser, e.g. `GET http://localhost:3000/items`, `GET http://localhost:3000/items/1`, or `POST http://localhost:3000/items` with a JSON body.

**Key improvements and considerations:**

* **TypeScript:** provides type safety and helps catch errors early.
* **CORS:** essential for local development when your frontend runs on a different port, but configure it properly for production; never ship `cors()` without options.
* **Error handling:** the example only returns basic 404s; add more robust handling for a production application.
* **ID generation:** `mockData.length + 1` is not suitable for production; use a proper strategy such as UUIDs or database auto-increment.
* **Data persistence:** the data is in-memory; a real application needs a database (e.g. MongoDB, PostgreSQL).
* **Validation:** validate request bodies with a library such as `joi` or `express-validator`.
* **Environment variables:** use them for configuration (database connection strings, API keys).
* **Logging:** add logging to help with debugging and monitoring.
* **Testing:** write unit tests and integration tests to ensure the API works correctly.
* **Asynchronous operations:** for operations such as database access, use `async/await` to avoid blocking the event loop.

This template provides a solid foundation for building a mock API server in TypeScript. Adapt it to your needs and add the features your application requires.
Semantic Scholar MCP Server
An MCP server for the Semantic Scholar API, providing comprehensive access to academic paper data, author information, and citation networks.
MCP demo (DeepSeek as Client's LLM)
A minimal client-server demo using the DeepSeek API, illustrating the basic MCP (Model Context Protocol) concepts in simplified form. A real-world application would be more complex.

**Prerequisites:**

* **DeepSeek API key:** required to access DeepSeek's models; see the DeepSeek documentation for how to obtain and configure one.
* **Python:** 3.6 or later.
* **Libraries:** `requests` for calling the DeepSeek API and `Flask` for a simple server:
  ```bash
  pip install requests Flask
  ```

**Conceptual overview:**

1. **Client:** sends a request (e.g. a text prompt) to the server.
2. **Server:** receives the request, calls the DeepSeek API with the prompt, and sends DeepSeek's response back to the client.
3. **Transport (simplified):** this demo uses plain HTTP; a more robust implementation might use a dedicated messaging queue or other communication mechanism.

**1. Server (`server.py`):**

```python
from flask import Flask, request, jsonify
import requests
import os

app = Flask(__name__)

# Read the API key from an environment variable rather than hard-coding it
DEEPSEEK_API_KEY = os.environ.get("DEEPSEEK_API_KEY")
DEEPSEEK_API_URL = "https://api.deepseek.com/v1/chat/completions"

@app.route('/deepseek', methods=['POST'])
def deepseek_request():
    try:
        data = request.get_json()
        prompt = data.get('prompt')
        if not prompt:
            return jsonify({'error': 'No prompt provided'}), 400

        # Construct the DeepSeek API request
        headers = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {DEEPSEEK_API_KEY}"
        }
        payload = {
            "model": "deepseek-chat",  # Or the specific model you want to use
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 150  # Adjust as needed
        }

        response = requests.post(DEEPSEEK_API_URL, headers=headers, json=payload)
        response.raise_for_status()  # Raise HTTPError for bad responses (4xx or 5xx)
        deepseek_data = response.json()

        # Extract the response (adjust based on the API's response format)
        try:
            answer = deepseek_data['choices'][0]['message']['content']
        except (KeyError, IndexError) as e:
            print(f"Error extracting content from DeepSeek response: {e}")
            print(f"DeepSeek Response: {deepseek_data}")
            return jsonify({'error': 'Error processing DeepSeek response'}), 500

        return jsonify({'response': answer})
    except requests.exceptions.RequestException as e:
        print(f"Error communicating with DeepSeek API: {e}")
        return jsonify({'error': f'Error communicating with DeepSeek API: {e}'}), 500
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
        return jsonify({'error': f'An unexpected error occurred: {e}'}), 500

if __name__ == '__main__':
    app.run(debug=True, port=5000)  # Run the server on port 5000
```

**2. Client (`client.py`):**

```python
import requests
import json

SERVER_URL = "http://localhost:5000/deepseek"  # Adjust if your server runs elsewhere

def send_request(prompt):
    try:
        payload = {'prompt': prompt}
        headers = {'Content-Type': 'application/json'}
        response = requests.post(SERVER_URL, data=json.dumps(payload), headers=headers)
        response.raise_for_status()  # Raise HTTPError for bad responses
        data = response.json()
        return data.get('response')
    except requests.exceptions.RequestException as e:
        print(f"Error connecting to the server: {e}")
        return None
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
        return None

if __name__ == '__main__':
    prompt = "What is the capital of France?"
    response = send_request(prompt)
    if response:
        print(f"DeepSeek's Response: {response}")
    else:
        print("Failed to get a response from the server.")
```

**How to run:**

1. Set the `DEEPSEEK_API_KEY` environment variable. Linux/macOS:
   ```bash
   export DEEPSEEK_API_KEY="YOUR_DEEPSEEK_API_KEY"
   ```
   Windows:
   ```bash
   set DEEPSEEK_API_KEY=YOUR_DEEPSEEK_API_KEY
   ```
2. Start the server in one terminal:
   ```bash
   python server.py
   ```
   It listens on port 5000.
3. Run the client in another terminal:
   ```bash
   python client.py
   ```
   The client sends the prompt to the server, the server calls the DeepSeek API, and the client prints the response.

**Explanation:**

* **`server.py`:** uses Flask to create a simple web server. The `/deepseek` route handles POST requests: it extracts the prompt from the request body, builds a DeepSeek API request with your key, sends it via `requests`, parses the generated text out of the response, and returns it to the client as JSON. The exact structure of the DeepSeek response may vary, so adjust the extraction code against the DeepSeek API documentation. Error handling covers API request failures and other issues.
* **`client.py`:** POSTs the prompt to the server's `/deepseek` endpoint, prints the generated text, and includes basic error handling.

**Important notes and improvements:**

* **Error handling:** the handling here is basic; add more robust handling for network errors, API errors, and invalid responses.
* **API key security:** storing the key in code is not recommended; the example reads it via `os.environ.get("DEEPSEEK_API_KEY")`, which is better practice.
* **Asynchronous communication:** for more complex applications, consider `asyncio` to improve performance and responsiveness.
* **Transport:** HTTP stands in for a fuller protocol here; a message queue (e.g. RabbitMQ, Kafka) or dedicated messaging library would be more robust.
* **DeepSeek API documentation:** always refer to it for current endpoints, request parameters, and response formats.
* **Model selection:** the `model` parameter chooses the model; pick one appropriate to your needs.
* **Token limits:** `max_tokens` caps the length of the generated response; be aware of the API's token limits.
* **Rate limiting:** the API may enforce rate limits; implement appropriate rate limiting in your code.
* **Flask debug mode:** `debug=True` in `app.run()` is useful for development but should be disabled in production.

This provides a starting point for a client-server application using the DeepSeek API. Adapt it to your requirements and consult the DeepSeek API documentation for accurate details.
Apple Doc MCP
A Model Context Protocol server that provides AI coding assistants with direct access to Apple's Developer Documentation, enabling seamless lookup of frameworks, symbols, and detailed API references.
mcp-reticle
Reticle, the Wireshark for the Model Context Protocol: it intercepts, visualises, and profiles MCP JSON-RPC traffic in real time, and is designed for microsecond-level overhead.
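For context, a minimal sketch of the kind of MCP JSON-RPC 2.0 frame such a tool inspects; `tools/call` is a real MCP method, but the tool name and arguments below are made up for illustration:

```python
import json

# A hypothetical MCP tool-call request frame (JSON-RPC 2.0)
frame = ('{"jsonrpc": "2.0", "id": 1, "method": "tools/call", '
         '"params": {"name": "search", "arguments": {"query": "mcp"}}}')

# A traffic inspector parses each frame and keys on method and id
msg = json.loads(frame)
print(msg["method"], msg["id"])  # tools/call 1
```

Correlating each request `id` with its response is what lets an inspector measure per-call latency.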
Trusted GMail MCP Server
The first trusted MCP server running in the AWS Nitro Enclave trusted execution environment.
Databento MCP
A Model Context Protocol server that provides access to Databento's historical and real-time market data, including trades, OHLCV bars, and order book depth. It enables AI assistants to perform financial data analysis, manage batch jobs, and convert market data between DBN and Parquet formats.
MCP Predictive Market
Aggregates prediction market data from 5 major platforms (Manifold, Polymarket, Metaculus, PredictIt, Kalshi), enabling users to search markets, compare odds across platforms, detect arbitrage opportunities, and track predictions through natural language.
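As a rough sketch of what cross-platform arbitrage detection means (the function and prices are hypothetical, not this server's API): if buying YES on one platform and NO on another costs less than the $1 combined payout, the gap is a pre-fee arbitrage.

```python
def find_arbitrage(yes_prices, no_prices):
    """yes_prices/no_prices: {platform: price in dollars for a $1 payout}."""
    best_yes = min(yes_prices, key=yes_prices.get)
    best_no = min(no_prices, key=no_prices.get)
    cost = yes_prices[best_yes] + no_prices[best_no]
    if cost < 1.0:  # both sides together cost less than the guaranteed payout
        return {"buy_yes_on": best_yes, "buy_no_on": best_no,
                "profit_per_contract": round(1.0 - cost, 4)}
    return None  # no arbitrage at these prices

# Illustrative prices only
print(find_arbitrage({"Polymarket": 0.42, "Kalshi": 0.45},
                     {"Kalshi": 0.53, "PredictIt": 0.55}))
```

Real detection must also account for fees, slippage, and resolution-criteria differences between platforms, which this sketch ignores.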
Audius MCP Server
Enables interaction with the Audius music platform API, supporting user, track, and playlist operations through the Model Context Protocol.
YouTube MCP Server
Enables AI models to interact with YouTube content through video search, channel information, transcripts, comments, trending videos, and content analysis tools including quiz and flashcard generation.
Name.com MCP Server
Enables AI assistants to perform comprehensive domain management tasks through the Name.com API, including registration, DNS management, and transfers. It dynamically generates tools from the OpenAPI specification to facilitate natural language interaction with all Name.com services.
Desktop MCP
Enables AI assistants to capture and analyze screen content across multi-monitor setups with smart image optimization. Provides screenshot capabilities and detailed monitor information for visual debugging, UI analysis, and desktop assistance.
Cube.js MCP Server
Enables AI assistants to query and analyze data from Cube.js analytics platforms, allowing natural language access to cubes, measures, dimensions, and complex analytics queries.
Hugging Face MCP Server
An MCP server that provides Hugging Face Hub API and Search endpoints through multiple transport protocols (STDIO, SSE, StreamableHTTP, and StreamableHTTPJson), enabling integration with AI model capabilities.
Dataproc MCP Server
Enables management of Google Cloud Dataproc clusters and jobs through 22 production-ready tools, featuring intelligent parameter injection, semantic search capabilities, and enterprise-grade security for seamless big data operations.
Gmail MCP Server
Connects to Gmail accounts to search emails, retrieve full email content, and parse receipts from services like Swiggy, Zomato, and Uber with read-only access.
MCP Server Demo
A minimal Model Context Protocol server demo that exposes tools through HTTP API, including greeting, weather lookup, and HTTP request capabilities. Demonstrates MCP server implementation with stdio communication and HTTP gateway functionality.
Google PSE MCP Server
A Model Context Protocol server that enables LLM clients like VSCode, Copilot, and Claude Desktop to search the web using Google Programmable Search Engine API.