Discover Awesome MCP Servers
Extend your agent with 14,392 capabilities via MCP servers.
- All (14,392)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
Top MCP Servers
A curated collection of top Model Context Protocol (MCP) servers to enhance development workflows in 2025.
MCP Server for Kubernetes Support Bundles
MCP Subfinder Server
A Model Context Protocol (MCP) server that wraps ProjectDiscovery's subfinder tool for powerful subdomain enumeration through a JSON-RPC API.
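A rough idea of what calling such a JSON-RPC wrapper might look like from Python; the endpoint URL, method name, and parameter names below are assumptions for illustration, not the server's documented interface:

```python
import requests

# Hypothetical JSON-RPC call to a locally running subfinder MCP server.
# Check the server's README for the real endpoint, method, and parameters.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "enumerate_subdomains",    # assumed method name
    "params": {"domain": "example.com"},  # assumed parameter name
}

response = requests.post("http://localhost:8080/rpc", json=payload, timeout=60)
response.raise_for_status()
print(response.json().get("result"))
```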

MCP Node Time
An MCP server that provides timezone-aware date and time operations. It addresses the common issue where AI assistants report incorrect dates and times due to timezone confusion.
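To see the kind of discrepancy such a server corrects, here is a small standard-library sketch (no MCP involved) comparing a naive local timestamp with explicitly timezone-aware ones:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# A naive datetime carries no timezone information, which is how "wrong day"
# answers happen when the host clock is in a different zone than the user.
naive_now = datetime.now()

# Timezone-aware equivalents make the offset explicit.
utc_now = datetime.now(timezone.utc)
tokyo_now = datetime.now(ZoneInfo("Asia/Tokyo"))

print("naive:", naive_now.isoformat())
print("UTC:  ", utc_now.isoformat())
print("Tokyo:", tokyo_now.isoformat())  # may already be "tomorrow" relative to UTC
```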

mcp-ytTranscript
A conceptual Python server that transcribes YouTube videos: a single Flask endpoint downloads a video's audio with `yt-dlp`, transcribes it locally with OpenAI's `whisper` library, and returns the text as JSON. The `language` parameter (e.g. `"pt"` for Portuguese) and the Whisper model size (`tiny` through `large`) control accuracy versus resource usage.

```python
from flask import Flask, request, jsonify
import yt_dlp
import whisper
import os

app = Flask(__name__)
# Choose a model size based on your needs: tiny/base/small/medium/large.
model = whisper.load_model("base")

def transcribe_youtube_video(youtube_url, language="en"):
    """Download a video's audio with yt-dlp and transcribe it with Whisper."""
    try:
        ydl_opts = {
            "format": "bestaudio/best",
            "outtmpl": "%(id)s.%(ext)s",
            "noplaylist": True,
            # Convert the download to mp3 so the filename below is predictable
            # (requires ffmpeg, which Whisper needs anyway).
            "postprocessors": [{"key": "FFmpegExtractAudio", "preferredcodec": "mp3"}],
        }
        with yt_dlp.YoutubeDL(ydl_opts) as ydl:
            info = ydl.extract_info(youtube_url, download=True)
        audio_file = f"{info['id']}.mp3"

        result = model.transcribe(audio_file, language=language)
        os.remove(audio_file)  # avoid accumulating audio files on disk
        return result["text"]
    except Exception as e:
        print(f"Error: {e}")
        return None

@app.route("/transcribe", methods=["POST"])
def transcribe_endpoint():
    data = request.get_json()
    youtube_url = data.get("url")
    language = data.get("language", "en")  # default to English
    if not youtube_url:
        return jsonify({"error": "Missing YouTube URL"}), 400
    transcription = transcribe_youtube_video(youtube_url, language)
    if transcription:
        return jsonify({"transcription": transcription})
    return jsonify({"error": "Transcription failed"}), 500

if __name__ == "__main__":
    app.run(debug=True)  # don't use debug=True in production
```

Install dependencies with `pip install Flask yt-dlp openai-whisper`, run the script, then send a request:

```bash
curl -X POST -H "Content-Type: application/json" \
  -d '{"url": "YOUR_YOUTUBE_URL", "language": "pt"}' \
  http://127.0.0.1:5000/transcribe
```

Notes: larger Whisper models are more accurate but need more memory and compute. Transcription is resource-intensive, so a production setup should use a WSGI server (Gunicorn or uWSGI), a background task queue (Celery or RQ) for long videos, rate limiting, and input validation. Whisper handles Portuguese dialects reasonably well and can also emit subtitles with timestamps; Google Cloud Speech-to-Text (via `google-cloud-speech` or the `SpeechRecognition` library) is a paid alternative when higher accuracy is required.

MCP-researcher Server
A powerful research assistant that integrates with Cline and Claude Desktop to leverage Perplexity AI for intelligent search, documentation retrieval, API discovery, and code-modernization assistance while you program.
Bluesky MCP (Model Context Protocol)
Bluesky MCP is a Go-based server for the Bluesky social network, offering AI-powered features through JSON-RPC 2.0 API endpoints. It supports flexible configuration and follows industry best practices for security, performance, and error handling.

AI Video Generator MCP Server
A Model Context Protocol server that generates videos from text prompts and/or images using AI models (Luma Ray2 Flash and Kling v1.6 Pro), with configurable parameters such as aspect ratio, resolution, and duration.

SMCPHUB Server
A client library for connecting to and interacting with MCP servers hosted on smcphub.com, allowing users to find, publish, and call tools on MCP servers.

Remote MCP Server Authless
A deployable Model Context Protocol server on Cloudflare Workers that doesn't require authentication, allowing tools to be added and used from Cloudflare AI Playground or Claude Desktop.

mcp-server-circleci
This MCP server lets you use the Cursor IDE, or any MCP-client-enabled agent, to perform CircleCI tasks in natural language, for example: "Find the latest failed pipeline on my branch and get its logs."

Ethora MCP Server
Enables integration with the Ethora platform through user authentication, registration, and application management operations. Supports creating, updating, deleting, and listing applications within the Ethora service.

Algorand MCP Server
Enables interaction with the Algorand blockchain network including account management, payments, asset creation and transfers, along with general utility tools. Provides secure mnemonic encryption and supports both testnet and mainnet environments.

Puppeteer MCP Server
Enables LLMs to perform web-browsing tasks, capture screenshots, and execute JavaScript using Puppeteer for browser automation.

Binary Ninja MCP
A server that seamlessly integrates Binary Ninja's reverse-engineering capabilities with LLM (large language model) assistance, allowing AI tools like Claude to interact with its binary-analysis features in real time.
mcp-server

MCP Waifu Queue
An MCP server that implements a conversational AI "waifu" character using a text-generation service with Redis queueing and GPU acceleration.
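For intuition, a minimal sketch of the Redis-backed queueing pattern described above, using the `rq` library; the queue name and `generate_reply` function are hypothetical, not this server's actual code:

```python
# tasks.py -- illustrative only; the real server's job layout may differ.
from redis import Redis
from rq import Queue

def generate_reply(prompt: str) -> str:
    """Placeholder for the GPU-backed text-generation call."""
    return f"(generated reply to: {prompt!r})"

def enqueue_chat(prompt: str) -> str:
    """Called by the API process: enqueue work and return a job id immediately.

    A separate worker process (`rq worker waifu-chat`), ideally running on the
    GPU machine, picks the job up and executes generate_reply.
    """
    queue = Queue("waifu-chat", connection=Redis())
    job = queue.enqueue(generate_reply, prompt)
    return job.id
```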
MLflow Prompt Registry MCP Server
Mirror of

mcp-jira-stdio
MCP server for Jira integration with stdio transport. Enables reading, writing, and managing Jira issues and projects directly from Claude Desktop. Supports issue creation, updates, comments, JQL search, and project management.

Asset Price MCP Server
A server that provides tools for fetching real-time price information for various assets, including precious metals and cryptocurrencies, allowing language models to access and display current asset price data.

MCP Background Task Server
A Model Context Protocol server that enables running and managing long-running background tasks (like development servers, builds) from within Claude Desktop or other MCP-compatible clients.

MCP LLMS-TXT Documentation Server
An MCP server that provides tools for loading and searching documentation from any llms.txt source, giving users full control over context retrieval for LLMs in agents and IDE applications.

MRP Calculator MCP Server
Provides Material Requirements Planning (MRP) tools for calculating delivery schedules, determining order requirements, and performing MRP period calculations based on inventory levels, forecasts, and order constraints.
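For intuition, a generic MRP netting sketch (standard textbook logic, not this server's actual tool or API): gross requirements in each period are netted against projected on-hand inventory plus scheduled receipts, and any shortfall becomes a planned order rounded up to the lot size.

```python
# Generic per-period MRP netting, for illustration only.
def mrp_netting(gross_requirements, on_hand, scheduled_receipts, lot_size=1):
    """Return (planned_orders, projected_on_hand) lists, one value per period."""
    planned_orders, projected = [], []
    available = on_hand
    for period, gross in enumerate(gross_requirements):
        if period < len(scheduled_receipts):
            available += scheduled_receipts[period]
        net = gross - available
        # Round any shortfall up to a multiple of the lot size.
        order = -(-net // lot_size) * lot_size if net > 0 else 0
        planned_orders.append(order)
        available = available + order - gross
        projected.append(available)
    return planned_orders, projected

orders, on_hand_by_period = mrp_netting(
    gross_requirements=[20, 0, 45, 10],
    on_hand=30,
    scheduled_receipts=[0, 10, 0, 0],
    lot_size=25,
)
print(orders)             # [0, 0, 25, 25]
print(on_hand_by_period)  # [10, 20, 0, 15]
```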
Tavily Search
Tavily AI search API.

AI Sticky Notes
A Python-based MCP server that allows users to create, read, and manage digital sticky notes with Claude integration for AI-powered note summarization.
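A minimal sketch of what a sticky-notes tool server like this can look like with the MCP Python SDK's FastMCP helper; the tool names and notes file path are illustrative assumptions, not this project's actual code:

```python
# Illustrative sketch only; tool names and storage path are assumptions.
from pathlib import Path
from mcp.server.fastmcp import FastMCP

NOTES_FILE = Path("notes.txt")  # assumed storage location
mcp = FastMCP("AI Sticky Notes")

@mcp.tool()
def add_note(message: str) -> str:
    """Append a new sticky note."""
    with NOTES_FILE.open("a", encoding="utf-8") as f:
        f.write(message + "\n")
    return "Note saved."

@mcp.tool()
def read_notes() -> str:
    """Return all saved sticky notes."""
    if not NOTES_FILE.exists():
        return "No notes yet."
    return NOTES_FILE.read_text(encoding="utf-8")

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so Claude Desktop can launch it
```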

Dev Memory Server
A persistent development-memory server that automatically captures and organizes development context, code changes, and user interactions across projects.
Tigris MCP Server

Logseq MCP Tools
A Model Context Protocol server that lets AI agents interact with local Logseq knowledge graphs, supporting operations such as creating/editing pages and blocks, searching content, and managing journal entries.

baidu-ai-search
GitHub Triage MCP
GitHub Triage MCP helps manage and automate triage workflows within GitHub repositories.