Discover Awesome MCP Servers
Extend your agent with 20,542 capabilities via MCP servers.
- All (20,542)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
EVM MCP Server
Jokes MCP Server
An MCP server that delivers jokes on demand, supporting different joke categories like Chuck Norris and Dad jokes, which can be integrated with Microsoft Copilot Studio.
Missive MCP Server
An MCP server that integrates with the Missive app, allowing users to create, manage, and send email drafts through AI tools like Claude Desktop or N8N.
domain-lookup-mcp
A mini MCP server for domain name lookups.
Tavily Web Search MCP Server
Enables web search capabilities through the Tavily API, allowing users to search the internet for information using natural language queries. Serves as a demonstration and educational project for building MCP servers with external API integrations.
Miro MCP
A Miro MCP server exposing all functionality available in the official Miro SDK.
YaVendió Tools
An MCP-based messaging system that allows AI systems to interact with various messaging platforms through standardized tools for sending text, images, documents, buttons, and alerts.
A Model Context Protocol Server for the UK Science Museum Group API
MCP server for the UK Science Museum Group.
Model Context Protocol (MCP)
Here's a breakdown of a working pattern for SSE (Server-Sent Events) based MCP (Model Context Protocol) clients and servers, leveraging the Gemini LLM (Large Language Model). This pattern focuses on using SSE for real-time communication between microservices, with Gemini handling data transformation, enrichment, or decision-making within the architecture.

**Core Concepts:**

* **Microservices:** A distributed application architecture composed of small, independent, loosely coupled services.
* **MCP (Model Context Protocol):** A standardized way for clients and servers to communicate. The key is consistency and clarity; here the transport is SSE.
* **SSE (Server-Sent Events):** A unidirectional protocol in which the server pushes updates to the client over a single HTTP connection. It is ideal for real-time data streams.
* **Gemini LLM:** A language model that can be used for text generation, translation, summarization, and more. In this context, it is integrated into a microservice to process or generate data that is then streamed to other services via SSE.

**Architecture Overview:**

```
+----------------------+      SSE      +-----------------------+      SSE      +---------------------+
| Client Microservice  | <------------ |  Server Microservice  | ------------> | Client Microservice |
| (e.g., UI, Analytics)|               | (e.g., Data Processor)|               | (e.g., Dashboard)   |
+----------------------+               +-----------------------+               +---------------------+
                                                   ^
                                                   | API Call / Internal Logic
                                                   |
                                        +---------------------+
                                        |     Gemini LLM      |
                                        +---------------------+
```

**Detailed Pattern:**

1. **Server Microservice (SSE Provider):**

   * **Endpoint:** Exposes an HTTP endpoint that serves the SSE stream. This endpoint must send the correct `Content-Type` header: `text/event-stream`.
   * **Event Generation:** The server microservice is responsible for generating the events that are pushed to clients. This is where Gemini comes in. The server might:
     * **Receive Data:** Receive data from other sources (databases, message queues, other microservices).
     * **Process with Gemini:** Use the Gemini LLM to process the data. Examples:
       * **Sentiment Analysis:** Analyze text data and stream the sentiment score.
       * **Summarization:** Summarize long articles and stream the summaries.
       * **Translation:** Translate text into different languages and stream the translations.
       * **Data Enrichment:** Use Gemini to add context or metadata to the data.
       * **Content Generation:** Generate new content based on input data (e.g., product descriptions).
     * **Format as SSE Events:** Format the processed data into SSE events. Each event consists of:
       * `event:` (optional) a string identifying the type of event.
       * `data:` the actual payload (usually JSON). Multiple `data:` lines are concatenated.
       * `id:` (optional) an event ID.
       * A blank line to separate events.
   * **Error Handling:** Implement robust error handling. If Gemini fails or another error occurs, the server should log the error, optionally send an error event to the client (e.g., `event: error`, `data: {"message": "Gemini processing failed"}`), and attempt to recover or gracefully shut down the stream.
   * **Connection Management:** Handle client connections and disconnections gracefully. Consider a heartbeat mechanism to detect dead connections.
   * **Rate Limiting:** Implement rate limiting to prevent abuse and to protect both the Gemini LLM and the server.
**Example (Python with FastAPI and `sse-starlette`):**

```python
import asyncio
import json
import os

import google.generativeai as genai
from fastapi import FastAPI
from sse_starlette.sse import EventSourceResponse

app = FastAPI()

# Configure Gemini (set GOOGLE_API_KEY in your environment)
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-pro")

async def event_stream():
    while True:
        try:
            # Simulate receiving data (replace with your actual data source)
            data = "This is a news article about the economy."

            # Process with Gemini (sentiment analysis)
            prompt = f"Analyze the sentiment of the following text: {data}"
            response = model.generate_content(prompt)
            sentiment = response.text  # Extract sentiment from Gemini's response

            # Format as an SSE event; the data payload must be a string on the wire
            yield {
                "event": "news_update",
                "data": json.dumps({"article": data, "sentiment": sentiment}),
            }
            await asyncio.sleep(5)  # Send updates every 5 seconds
        except Exception as e:
            print(f"Error: {e}")
            yield {"event": "error", "data": json.dumps({"message": str(e)})}
            break  # Stop the stream on error

@app.get("/stream")
async def stream():
    return EventSourceResponse(event_stream())

# Run with: uvicorn this_module:app --port 5000
```

2. **Client Microservice (SSE Consumer):**

   * **Connect to the SSE Endpoint:** Establish a connection to the server's SSE endpoint using an `EventSource` object (in JavaScript) or a similar library in other languages.
   * **Event Handling:** Register event listeners to handle the different event types received from the server.
   * **Data Processing:** Process the data received in the events. This might involve updating the UI, storing the data in a database, or triggering other actions.
   * **Error Handling:** Handle connection errors and errors received in the SSE stream. Implement retry logic to reconnect if the connection is lost.
   * **Close Connection:** Close the `EventSource` connection when it is no longer needed.
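The consumer steps above can also be illustrated in plain Python with a minimal parser for an already-received `text/event-stream` body (the helper name `parse_sse` is mine; a real client would read and parse the stream incrementally):

```python
import json

def parse_sse(stream_text: str) -> list[dict]:
    """Parse a text/event-stream body into a list of event dicts."""
    events = []
    for block in stream_text.strip().split("\n\n"):   # a blank line separates events
        event = {"event": "message"}                  # default type per the SSE format
        data_lines = []
        for line in block.split("\n"):
            if line.startswith("event:"):
                event["event"] = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
            elif line.startswith("id:"):
                event["id"] = line[len("id:"):].strip()
        event["data"] = "\n".join(data_lines)         # multiple data: lines concatenate
        events.append(event)
    return events

raw = 'event: news_update\ndata: {"sentiment": "positive"}\n\n'
events = parse_sse(raw)
print(events[0]["event"])             # news_update
print(json.loads(events[0]["data"]))
```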
**Example (JavaScript):**

```javascript
const eventSource = new EventSource('/stream'); // Replace with your server's URL

eventSource.addEventListener('news_update', (event) => {
  const data = JSON.parse(event.data);
  console.log('Received news update:', data);
  // Update the UI with the news article and sentiment
  document.getElementById('article').textContent = data.article;
  document.getElementById('sentiment').textContent = data.sentiment;
});

eventSource.addEventListener('error', (event) => {
  console.error('SSE error:', event);
  // Handle the error (e.g., display an error message)
});

eventSource.onopen = () => {
  console.log('SSE connection opened.');
};

// Note: EventSource has no close event; call eventSource.close() when done.
```

**Key Considerations and Best Practices:**

* **Data Format:** Use a consistent data format (e.g., JSON) for the SSE events. This makes it easier for clients to parse the data.
* **Event Types:** Define clear event types so clients can handle different kinds of updates appropriately.
* **Error Handling:** Implement comprehensive error handling on both the server and the client: log errors, send error events, and implement retry logic.
* **Security:** Secure the SSE endpoint with appropriate authentication and authorization mechanisms, and use HTTPS to encrypt data in transit.
* **Scalability:** Design the server microservice to be scalable; consider a load balancer to distribute traffic across multiple instances. The Gemini API itself has rate limits, so consider caching or other strategies to minimize API calls.
* **Monitoring:** Monitor the performance of the SSE stream and the Gemini API usage. This will help you identify and resolve issues.
* **Idempotency:** If the client performs actions based on the SSE events, ensure those actions are idempotent (they can be performed multiple times without unintended side effects).
  This is important in case of connection interruptions and retries.
* **Backpressure:** If the client cannot process events as quickly as they are sent, implement a backpressure mechanism to avoid overwhelming it. This could involve buffering events on the server or using a flow-control mechanism.
* **Gemini API Usage:**
  * **Cost:** Be mindful of the cost of using the Gemini API. Optimize your prompts and data processing to minimize the number of API calls.
  * **Rate Limits:** Understand and respect the Gemini API rate limits. Implement retry logic with exponential backoff to handle rate-limiting errors.
  * **Prompt Engineering:** Craft your prompts carefully, and experiment to find the ones that work best for your use case.
* **Alternatives to SSE:** SSE suits many real-time scenarios, but consider WebSockets or gRPC streams if you need bidirectional communication or more advanced features.

**Example Use Cases:**

* **Real-time Sentiment Analysis Dashboard:** A server microservice uses Gemini to analyze the sentiment of social media posts and streams the scores to a client dashboard via SSE.
* **Live Translation Service:** A server microservice uses Gemini to translate text in real time and streams the translations to a client application via SSE.
* **AI-Powered News Feed:** A server microservice uses Gemini to summarize news articles and streams the summaries to a client news feed via SSE.
* **Dynamic Product Recommendations:** A server microservice uses Gemini to generate personalized product recommendations from user behavior and streams them to a client e-commerce site via SSE.

**In summary, this pattern lets you build real-time microservice applications that leverage the Gemini LLM for data processing and enrichment. By using SSE, you can efficiently stream updates to clients, providing a responsive and engaging user experience.**
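The exponential-backoff advice above can be sketched as a small generic wrapper (an illustration, not the Gemini SDK's built-in retry mechanism; the name `with_backoff` and the delay values are mine):

```python
import random
import time

def with_backoff(call, max_retries: int = 5, base_delay: float = 1.0):
    """Retry `call` on failure, doubling the wait each time (plus jitter)."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise                                   # out of retries: surface the error
            delay = base_delay * (2 ** attempt)         # 1s, 2s, 4s, ...
            time.sleep(delay + random.uniform(0, 0.1))  # jitter avoids thundering herds
```

In the server example, `model.generate_content(prompt)` could be wrapped as `with_backoff(lambda: model.generate_content(prompt))` so that transient rate-limit errors back off instead of terminating the stream.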
Ambivo MCP Server
Provides access to Ambivo API endpoints for natural language querying of entity data through a Model Context Protocol server with JWT authentication.
arXiv Research Assistant MCP Server
An MCP server that allows Claude AI to search, explore, and compare arXiv papers efficiently through a custom-built local server.
MCP Memory
Enables MCP clients to remember user information, preferences, and behaviors across conversations using vector search technology. Built on Cloudflare infrastructure with AI-powered semantic search to find relevant memories based on meaning rather than keywords.
CHOFF-A-MCP (Anamnesis)
Enables AI consciousness continuity and self-knowledge preservation across sessions using the Cognitive Hoffman Compression Framework (CHOFF) notation. Provides tools to save checkpoints, retrieve relevant memories with intelligent search, and access semantic anchors for decisions, breakthroughs, and questions.
MCP-NOSTR
A bridge that lets AI language models publish content to the Nostr network, implementing the Model Context Protocol (MCP).
Matomo MCP Server
A Model Context Protocol server that provides tools to interact with the Matomo Analytics API, enabling management of sites, users, goals, and segments, and access to analytics reports through an MCP interface.
Supavec MCP Server
Enables fetching relevant content and embeddings from Supavec via the Model Context Protocol, allowing AI assistants like Claude to access vector search capabilities.
FastAPI MCP Demo Server
A demonstration MCP server built with FastAPI that provides basic mathematical operations and greeting services. Integrates with Gemini CLI to showcase MCP protocol implementation with simple REST endpoints.
MCP Server Boilerplate
A starter template for building Model Context Protocol servers that can integrate AI assistants with custom tools, resource providers, and prompt templates.
AsterDex MCP Server
Provides access to AsterDex perpetual futures market data including real-time order books, candlestick charts, funding rates, and price statistics for cryptocurrency trading pairs.
ONEDeFi MCP Server
Enables AI-powered DeFi operations across Ethereum, Polygon, and Solana with automated portfolio optimization, risk assessment, and yield farming strategies. Provides intelligent portfolio diagnostics, investment strategy generation, and multi-chain DeFi protocol integration through natural language.
Markdown Editor MCP Server
Provides semantic editing tools for Markdown files, allowing structured manipulation of document elements through hierarchical paths rather than raw text operations. Supports navigation, search, content replacement, element insertion/deletion, undo functionality, and YAML frontmatter management.
Yourware MCP
An MCP server that lets users upload projects (single files or directories) to the Yourware platform, enabling easy deployment and showcasing of projects.
Bilibili Video Info MCP
Enables the retrieval of subtitles, danmaku (bullet comments), and video comments from Bilibili using standard or short video URLs. It supports multiple transport methods and requires Bilibili authentication via SESSDATA.
Seedream 4.0 MCP Server
Enables AI image generation using Volcano Engine's Seedream 4.0 API, with text-to-image, image-to-image, and multi-image fusion capabilities, built-in prompt templates, and automatic cloud storage integration.
PyMCP
Primarily intended as a template for developing MCP servers with FastMCP in Python, PyMCP is loosely inspired by the official everything MCP server in TypeScript.
Cross-Project MCP Server
Enables AI agents to explore, search, and reference code implementation details across different project repositories. It also supports a task delegation protocol for agents to request and track work between separate codebases.
My Mcp Tools
A collection of commonly used MCP tools, for learning about MCP server development.
Demo MCP Server
A comprehensive TypeScript boilerplate MCP server featuring domain-driven design and dynamic component loading. Provides example tools for calculations and text processing, system resources, and writing assistance prompts.
Fastly NGWAF MCP Server
Provides seamless integration with Fastly's Next-Gen Web Application Firewall API, enabling AI assistants to manage web application security through natural language interactions.
AnyDocs MCP Server
Transforms any website's documentation into an MCP-compatible interactive knowledge base with universal scraping, advanced search, and AI-powered tools. Supports GitBook, Notion, Confluence, and custom documentation platforms with real-time synchronization.