Discover Awesome MCP Servers
Extend your agent with 26,434 capabilities via MCP servers.
- All (26,434)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
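Every server listed here speaks the same Model Context Protocol: a client invokes a server's tools via JSON-RPC 2.0 messages. A typical `tools/call` request looks like the following (the tool name and arguments are hypothetical, for illustration only):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_documents",
    "arguments": { "query": "quarterly report" }
  }
}
```

The server replies with a JSON-RPC `result` carrying the tool's output content, which is what lets any MCP-aware agent use any server in this directory through one uniform interface.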
Quip MCP Server
Model Context Protocol (MCP) server for searching Quip documents.
Solana Docs Server
Provides context from the Solana documentation.
SharePoint Online MCP Server
Allows Claude to interact with SharePoint Online using the REST API, enabling users to query site information, lists, and list items through natural language commands.
Manticore Search MCP
Connect AI assistants to Manticore Search. Execute SQL queries, list tables, get schemas, and fetch documentation. Perfect for building RAG applications and search-powered AI agents.
Payload CMS MCP Server
Enables AI assistants to interact with Payload CMS instances through authenticated REST API operations. Supports creating, searching, and updating documents in collections with automatic JWT authentication and localization support.
mcp-humanpages
Enables AI agents to search for and hire humans for real-world tasks.
MiAI_MCP
A demonstration of building and integrating an MCP server.
Polytomic MCP Server
This MCP server enables interaction with the Polytomic API to manage data connections, models, and synchronization tasks. It supports both Reverse ETL and ELT operations, allowing users to trigger syncs, monitor execution status, and configure data pipelines.
Jira MCP Server
An MCP server that enables communication with Jira, allowing users to perform operations like getting, searching, creating, and editing issues through natural language interaction.
Law Scrapper MCP
Enables AI-powered legal research and analysis of Polish legal acts from the Sejm API. Provides comprehensive search, document retrieval, metadata analysis, and content access for legal documents from Dziennik Ustaw and Monitor Polski.
Tabby-MCP-Server
EVM MCP Server
Jokes MCP Server
An MCP server that delivers jokes on demand, supporting different joke categories like Chuck Norris and Dad jokes, which can be integrated with Microsoft Copilot Studio.
Missive MCP Server
An MCP server that integrates with Missive App allowing users to create, manage, and send email drafts through AI tools like Claude Desktop or N8N.
domain-lookup-mcp
A mini MCP server for domain name lookups.
Tavily Web Search MCP Server
Enables web search capabilities through the Tavily API, allowing users to search the internet for information using natural language queries. Serves as a demonstration and educational project for building MCP servers with external API integrations.
Miro MCP
A Miro MCP server exposing all functionality available in the official Miro SDK.
YaVendió Tools
An MCP-based messaging system that allows AI systems to interact with various messaging platforms through standardized tools for sending text, images, documents, buttons, and alerts.
A Model Context Protocol Server for the UK Science Museum Group API
MCP server for the UK Science Museum Group.
Model Context Protocol (MCP)
A working pattern for SSE (Server-Sent Events) based MCP clients and servers that leverages the Gemini LLM. SSE carries real-time updates between services, while Gemini handles data transformation, enrichment, or decision-making within the architecture.

**Core Concepts:**

* **Microservices:** A distributed application architecture composed of small, independent, loosely coupled services.
* **MCP (Model Context Protocol):** A standardized way for AI applications and services to communicate; consistency and clarity of the protocol are the key requirements.
* **SSE (Server-Sent Events):** A unidirectional protocol in which the server pushes updates to the client over a single HTTP connection. It is ideal for real-time data streams.
* **Gemini LLM:** A large language model used for tasks such as text generation, translation, and summarization. Here it is integrated into a service to process or generate data that is then streamed to other services via SSE.

**Architecture Overview:**

```
+----------------------+     SSE     +-----------------------+     SSE     +---------------------+
| Client Microservice  | <---------- |  Server Microservice  | ----------> | Client Microservice |
| (e.g., UI, Analytics)|             | (e.g., Data Processor)|             |  (e.g., Dashboard)  |
+----------------------+             +-----------------------+             +---------------------+
                                                ^
                                                | API Call / Internal Logic
                                     +---------------------+
                                     |     Gemini LLM      |
                                     +---------------------+
```

**Detailed Pattern:**

1. **Server Microservice (SSE Provider):**
   * **Endpoint:** Exposes an HTTP endpoint that serves the SSE stream, returned with the `Content-Type: text/event-stream` header.
   * **Event Generation:** The server microservice generates the events pushed to clients. This is where Gemini comes in. The server might:
     * **Receive Data:** Ingest data from other sources (databases, message queues, other microservices).
     * **Process with Gemini:** Use the Gemini LLM to process the data, for example:
       * **Sentiment Analysis:** Analyze text and stream the sentiment score.
       * **Summarization:** Summarize long articles and stream the summaries.
       * **Translation:** Translate text into different languages and stream the translations.
       * **Data Enrichment:** Add context or metadata to the data.
       * **Content Generation:** Generate new content from input data (e.g., product descriptions).
     * **Format as SSE Events:** Format the processed data as SSE events. Each event consists of:
       * `event:` (optional) a string identifying the event type;
       * `data:` the payload (usually JSON); multiple `data:` lines are concatenated;
       * `id:` (optional) an event ID;
       * a blank line to terminate the event.
     * **Error Handling:** If Gemini fails or another error occurs, the server should log the error, optionally send an error event to the client (e.g., `event: error`, `data: {"message": "Gemini processing failed"}`), and attempt to recover or gracefully shut down the stream.
     * **Connection Management:** Handle client connections and disconnections gracefully; consider a heartbeat mechanism to detect dead connections.
     * **Rate Limiting:** Rate-limit requests to protect both the server and the Gemini LLM.
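Concretely, a single event on the wire, following the field layout above, might look like this (the event name and payload are illustrative):

```
event: news_update
id: 17
data: {"article": "...", "sentiment": "positive"}

```

The trailing blank line is what terminates the event; without it, the client keeps accumulating `data:` lines into the same event.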
**Example (Python with Starlette and `sse-starlette`):**

```python
import asyncio
import json
import os

import google.generativeai as genai
from sse_starlette.sse import EventSourceResponse
from starlette.applications import Starlette
from starlette.routing import Route

# Configure Gemini (replace with your actual API key)
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-pro")

async def event_stream():
    while True:
        try:
            # Simulate receiving data (replace with your actual data source)
            data = "This is a news article about the economy."

            # Process with Gemini (sentiment analysis)
            prompt = f"Analyze the sentiment of the following text: {data}"
            response = model.generate_content(prompt)
            sentiment = response.text  # Extract sentiment from Gemini's response

            # Format as an SSE event; `data` must be a string, so serialize to JSON
            yield {
                "event": "news_update",
                "data": json.dumps({"article": data, "sentiment": sentiment}),
            }
            await asyncio.sleep(5)  # Send updates every 5 seconds
        except Exception as e:
            print(f"Error: {e}")
            yield {"event": "error", "data": json.dumps({"message": str(e)})}
            break  # Stop the stream on error

async def stream(request):
    return EventSourceResponse(event_stream())

app = Starlette(routes=[Route("/stream", stream)])
# Run with: uvicorn <module_name>:app --port 5000
```

2. **Client Microservice (SSE Consumer):**
   * **Connect to the SSE Endpoint:** Establish a connection using an `EventSource` object (in JavaScript) or a similar library in other languages.
   * **Event Handling:** Register listeners for the event types the server emits.
   * **Data Processing:** Act on the received data: update the UI, store it in a database, or trigger other actions.
   * **Error Handling:** Handle connection errors and error events in the stream; implement retry logic to reconnect if the connection is lost.
   * **Close the Connection:** Close the `EventSource` when it is no longer needed.
**Example (JavaScript):**

```javascript
const eventSource = new EventSource('/stream'); // Replace with your server's URL

eventSource.addEventListener('news_update', (event) => {
  const data = JSON.parse(event.data);
  console.log('Received news update:', data);
  // Update the UI with the news article and sentiment
  document.getElementById('article').textContent = data.article;
  document.getElementById('sentiment').textContent = data.sentiment;
});

eventSource.addEventListener('error', (event) => {
  console.error('SSE error:', event);
  // Handle the error (e.g., display an error message)
});

eventSource.onopen = () => {
  console.log('SSE connection opened.');
};

// EventSource has no close event; call eventSource.close() when done.
```

**Key Considerations and Best Practices:**

* **Data Format:** Use a consistent data format (e.g., JSON) for SSE events so clients can parse them easily.
* **Event Types:** Define clear event types so clients can handle different kinds of updates appropriately.
* **Error Handling:** Implement comprehensive error handling on both server and client: log errors, send error events, and add retry logic.
* **Security:** Secure the SSE endpoint with appropriate authentication and authorization, and use HTTPS to encrypt data in transit.
* **Scalability:** Design the server microservice to scale, e.g., behind a load balancer across multiple instances. The Gemini API itself has rate limits, so consider caching or other strategies to minimize API calls.
* **Monitoring:** Monitor the performance of the SSE stream and Gemini API usage to identify and resolve issues.
* **Idempotency:** If the client performs actions based on SSE events, ensure those actions are idempotent (they can be performed multiple times without causing unintended side effects).
This is important in case of connection interruptions and retries.
* **Backpressure:** If the client cannot process events as fast as they arrive, add a backpressure mechanism to keep it from being overwhelmed: buffer events on the server or use a flow-control mechanism.
* **Gemini API Usage:**
  * **Cost:** Be mindful of the cost of the Gemini API; optimize prompts and data processing to minimize the number of API calls.
  * **Rate Limits:** Understand and respect the Gemini API rate limits; implement retry logic with exponential backoff for rate-limiting errors.
  * **Prompt Engineering:** Craft prompts carefully and experiment to find what works best for your use case.
* **Alternatives to SSE:** SSE suits many real-time scenarios, but consider WebSockets or gRPC streams if you need bidirectional communication or more advanced features.

**Example Use Cases:**

* **Real-time Sentiment Analysis Dashboard:** The server uses Gemini to analyze the sentiment of social media posts and streams the scores to a client dashboard via SSE.
* **Live Translation Service:** The server uses Gemini to translate text in real time and streams the translations to a client application via SSE.
* **AI-Powered News Feed:** The server uses Gemini to summarize news articles and streams the summaries to a client news feed application via SSE.
* **Dynamic Product Recommendations:** The server uses Gemini to generate personalized product recommendations from user behavior and streams them to a client e-commerce website via SSE.

**In summary,** this pattern lets you build real-time microservice applications that leverage the Gemini LLM for data processing and enrichment. With SSE, you can efficiently stream updates to clients, providing a responsive and engaging user experience.
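The retry-with-exponential-backoff advice above can be sketched in a few lines. This is a minimal, illustrative helper (the function name and constants are assumptions, not part of any listed server): each failed reconnect doubles the wait up to a cap, with jitter so many clients do not reconnect in lockstep.

```python
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Delay in seconds before SSE reconnect attempt `attempt` (0-based).

    Doubles the base delay per attempt up to `cap`, then adds jitter by
    sampling uniformly from the upper half of the window.
    """
    window = min(cap, base * (2 ** attempt))
    return window / 2 + random.random() * (window / 2)
```

A client would sleep for `backoff_delay(n)` before its n-th reconnect and reset `n` to zero once a connection succeeds.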
My Mcp Tools
A collection of commonly used MCP tools and a resource for learning about MCP server development.
Nager MCP v201 MCP Server
Provides access to the Nager Public Holiday API, enabling AI agents to retrieve holiday data and interact with global date-related endpoints. It leverages the Model Context Protocol to provide standardized tools for holiday information and date calculations.
MCP Microsoft Office
Connects AI assistants to Microsoft 365 accounts to manage emails, calendars, files, and Teams messages. It offers 71 tools and supports multi-user environments through a secure, customizable server architecture.
UX/UI Tools for React + Material-UI
Provides AI-powered tools to apply UX/UI best practices, Nielsen's heuristics, cognitive biases, and Material-UI patterns to React components. Enables automated application of responsive design, Apple design patterns, and complete UX guidelines through natural language commands.
Meraki MCP
Provides dynamic access to 400+ Meraki Dashboard API endpoints with intelligent discovery, plus advanced compliance auditing (PCI DSS, HIPAA, SOC2, ISO 27001, NIST), backup/restore operations, and network security analysis. Enables natural language queries to manage Meraki networks without pre-defined tools.
irs-taxpayer-mcp
39 tax tools for US individual taxpayers — federal/state tax calculations, credits, deductions, retirement strategies, audit risk, and tax planning. All calculations run locally, no data leaves the machine. Supports TY2024 and TY2025 (One Big Beautiful Bill Act).
Mcp Server
A Python MCP server.
HackerNews Job Scraper
Extracts and manages job postings from HackerNews 'Who's Hiring' threads, enabling users to search and analyze listings through Claude Desktop. It provides tools for keyword-based job searches and detailed post retrieval while utilizing a file-based caching system.
Fetch as Markdown
Fetches web pages and converts them to clean, readable markdown format by extracting main content while removing navigation, ads, and other non-essential elements to minimize token usage.
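The core idea behind such a tool can be sketched with the standard library alone. This is a deliberately minimal illustration, not the server's actual implementation: it skips non-essential elements (`nav`, `script`, etc.) and maps a couple of tags to markdown.

```python
from html.parser import HTMLParser

class MarkdownExtractor(HTMLParser):
    """Toy HTML-to-markdown pass: keep headings and paragraphs, drop chrome."""
    SKIP = {"script", "style", "nav", "aside", "footer"}  # non-essential elements

    def __init__(self):
        super().__init__()
        self.out = []
        self.skip_depth = 0  # >0 while inside an element we want to drop

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1
        elif tag == "h1":
            self.out.append("\n# ")
        elif tag == "p":
            self.out.append("\n\n")

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)

def html_to_markdown(html: str) -> str:
    parser = MarkdownExtractor()
    parser.feed(html)
    return "".join(parser.out).strip()

# Navigation is dropped; heading and body text survive as markdown.
html_to_markdown("<nav>menu</nav><h1>Title</h1><p>Some text.</p>")
# → "# Title\n\nSome text."
```

A real implementation would also handle links, lists, and main-content detection, but the shape is the same: parse, filter boilerplate, emit markdown to minimize token usage.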
Video Metadata MCP Server
Enables management of sports video metadata with PostgreSQL backend, supporting CRUD operations, advanced search, and tracking of game information including teams, scores, player statistics, and match details.