Discover Awesome MCP Servers

Extend your agent with 15,370 capabilities via MCP servers.

All 15,370
MCP Thought Server

A powerful server that provides advanced reasoning tools through the Model Context Protocol (MCP) to enhance the reasoning, planning, and iterative refinement capabilities of AI agents such as Cline.

WorkOS MCP Server

A lightweight MCP server that enables Cursor Agents to interact with the WorkOS API, providing access to WorkOS functionality directly from within the editor.

InfluxDB Natural Language Query MCP Server

Enables users to generate and execute InfluxDB queries using natural language commands in Korean. Supports querying metrics like CPU usage, memory status, and system monitoring data through a conversational interface.
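The entry above is about translating natural-language requests into InfluxDB queries; the sketch below covers only the execution side, i.e. an MCP tool that runs a Flux query using the official MCP Python SDK and influxdb-client. The server name, bucket, and measurement are illustrative assumptions, not the project's actual values.

```python
import os

from influxdb_client import InfluxDBClient      # pip install influxdb-client
from mcp.server.fastmcp import FastMCP          # official MCP Python SDK ("mcp" on PyPI)

mcp = FastMCP("influxdb-monitoring")            # hypothetical server name
client = InfluxDBClient(
    url=os.environ["INFLUXDB_URL"],
    token=os.environ["INFLUXDB_TOKEN"],
    org=os.environ["INFLUXDB_ORG"],
)


@mcp.tool()
def cpu_usage(host: str, minutes: int = 10) -> str:
    """Return recent user CPU usage for a host (illustrative bucket and measurement)."""
    flux = f'''
    from(bucket: "telegraf")
      |> range(start: -{minutes}m)
      |> filter(fn: (r) => r._measurement == "cpu" and r.host == "{host}")
      |> filter(fn: (r) => r._field == "usage_user")
    '''
    tables = client.query_api().query(flux)
    lines = [
        f"{record.get_time()}: {record.get_value():.1f}%"
        for table in tables
        for record in table.records
    ]
    return "\n".join(lines) or "no data"


if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```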

MCP Knowledge Base Server

A local document processing server that can index various document formats (PDF, DOCX, TXT, HTML) and answer questions based on their content using the Model Context Protocol.
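A hedged sketch of how such a server can be shaped with the MCP Python SDK: one tool indexes files (PDF via pypdf, plain text directly) and another answers questions by naive keyword matching over the indexed text. The tool names and retrieval strategy are illustrative assumptions; the actual project presumably handles DOCX and HTML properly and ranks passages more intelligently.

```python
from pathlib import Path

from pypdf import PdfReader                 # pip install pypdf
from mcp.server.fastmcp import FastMCP      # official MCP Python SDK ("mcp" on PyPI)

mcp = FastMCP("knowledge-base")             # hypothetical server name
documents: dict[str, str] = {}              # file path -> extracted text


def extract_text(path: Path) -> str:
    if path.suffix.lower() == ".pdf":
        return "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    return path.read_text(errors="ignore")  # TXT/HTML fallback; no real HTML parsing here


@mcp.tool()
def index_directory(directory: str) -> str:
    """Index every supported document under a directory."""
    count = 0
    for path in Path(directory).rglob("*"):
        if path.suffix.lower() in {".pdf", ".txt", ".html"}:
            documents[str(path)] = extract_text(path)
            count += 1
    return f"indexed {count} documents"


@mcp.tool()
def ask(question: str) -> str:
    """Return the documents that best match the question (naive keyword overlap)."""
    words = set(question.lower().split())
    ranked = sorted(
        documents.items(),
        key=lambda item: sum(word in item[1].lower() for word in words),
        reverse=True,
    )
    return "\n\n".join(f"{path}:\n{text[:500]}" for path, text in ranked[:3]) or "no documents indexed"


if __name__ == "__main__":
    mcp.run()
```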

MCP Context Provider

Provides persistent tool context that survives across Claude Desktop chat sessions, automatically injecting tool-specific rules, syntax preferences, and best practices. Eliminates the need to re-establish context in each new conversation.

Google Directory MCP Server by CData

This read-only MCP Server allows you to connect to Google Directory data from Claude Desktop through CData JDBC Drivers. Free (beta) read/write servers available at https://www.cdata.com/solutions/mcp

browser-use MCP Server

An MCP server that lets LLMs perform browser tasks over SSE transport, allowing clients such as Cursor.ai and Claude to open websites and interact with web content through natural-language commands.

BurpMCP

A Burp Suite extension with a built-in MCP server.

MySQL MCP Server

A Model Context Protocol server for MySQL databases in the Cursor IDE.

mcp-server-circleci

This MCP server lets you use the Cursor IDE, or any MCP-client-enabled agent, to perform CircleCI tasks in natural language, for example: find the latest failed pipeline on my branch and get its logs.

Okta MCP Server by CData

This project builds a read-only MCP server. For full read, write, update, delete, and action capabilities and a simplified setup, check out our free CData MCP Server for Okta (beta): https://www.cdata.com/download/download.aspx?sku=KUZK-V&type=beta

Backlog MCP Server

A Model Context Protocol server that enables Claude to interact with Backlog project management tools through API integration, allowing management of projects, issues, wiki pages and other Backlog resources.

Xcode Diagnostics MCP Plugin

Connects to Xcode's build system to extract, parse, and display errors and warnings from your Swift projects, helping AI assistants quickly pinpoint code issues without manually digging through build logs.

A1D MCP Server

A universal AI server that provides image and video processing tools (background removal, upscaling, vectorization, etc.) for any MCP-compatible client with simple setup.

mcp-ytTranscript

A simple MCP server that transcribes YouTube videos: it downloads a video's audio with yt-dlp, transcribes it with OpenAI's Whisper, and exposes an HTTP /transcribe endpoint that accepts a video URL and a language code (for example "pt" for Portuguese) and returns the transcription as JSON.
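A minimal sketch of that flow, cleaned up from the snippet in the original listing: Flask receives the request, yt-dlp downloads the audio, and Whisper transcribes it. The endpoint path and options follow that snippet and may differ from the actual project; ffmpeg must be installed for Whisper to decode the audio. Install dependencies with `pip install Flask yt-dlp openai-whisper`.

```python
from flask import Flask, request, jsonify
import yt_dlp
import whisper
import os

app = Flask(__name__)

# Load a Whisper model once at startup (options: tiny, base, small, medium, large).
model = whisper.load_model("base")


def transcribe_youtube_video(youtube_url: str, language: str = "en") -> str:
    """Download a video's audio with yt-dlp and transcribe it with Whisper."""
    ydl_opts = {
        "format": "bestaudio/best",
        "outtmpl": "%(id)s.%(ext)s",  # save as <video_id>.<ext>
        "noplaylist": True,
    }
    with yt_dlp.YoutubeDL(ydl_opts) as ydl:
        info = ydl.extract_info(youtube_url, download=True)
        audio_file = ydl.prepare_filename(info)
    try:
        result = model.transcribe(audio_file, language=language)
        return result["text"]
    finally:
        os.remove(audio_file)  # clean up the downloaded audio


@app.route("/transcribe", methods=["POST"])
def transcribe_endpoint():
    data = request.get_json(silent=True) or {}
    youtube_url = data.get("url")
    language = data.get("language", "en")  # use "pt" for Portuguese
    if not youtube_url:
        return jsonify({"error": "Missing YouTube URL"}), 400
    try:
        transcription = transcribe_youtube_video(youtube_url, language)
        return jsonify({"transcription": transcription})
    except Exception as exc:
        return jsonify({"error": str(exc)}), 500


if __name__ == "__main__":
    app.run()  # do not enable debug mode in production
```

Example request: `curl -X POST -H "Content-Type: application/json" -d '{"url": "YOUR_YOUTUBE_URL", "language": "pt"}' http://127.0.0.1:5000/transcribe`. The original snippet's notes still apply: larger Whisper models are more accurate but slower, long videos benefit from a background task queue, and a WSGI server such as Gunicorn is preferable to the Flask development server.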

MCP-researcher Server

A powerful research assistant that integrates with Cline and Claude Desktop to leverage Perplexity AI for intelligent search, documentation retrieval, API discovery, and code-modernization assistance while you program.

Bluesky MCP (Model Context Protocol)

Bluesky MCP is a Go-based server for the Bluesky social network, offering AI-powered features through JSON-RPC 2.0 API endpoints. It supports flexible configuration and follows industry best practices for security, performance, and error handling.

AI Video Generator MCP Server

A Model Context Protocol server for generating videos from text prompts and/or images using AI models (Luma Ray2 Flash and Kling v1.6 Pro), with configurable parameters such as aspect ratio, resolution, and duration.

SMCPHUB Server

A client library for connecting to and interacting with MCP servers hosted on smcphub.com, allowing users to find, publish, and call tools on MCP servers.

Remote MCP Server Authless

A deployable Model Context Protocol server on Cloudflare Workers that doesn't require authentication, allowing tools to be added and used from Cloudflare AI Playground or Claude Desktop.

Ethora MCP Server

Enables integration with the Ethora platform through user authentication, registration, and application management operations. Supports creating, updating, deleting, and listing applications within the Ethora service.

MCP Background Task Server

A Model Context Protocol server that enables running and managing long-running background tasks (like development servers, builds) from within Claude Desktop or other MCP-compatible clients.
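The core mechanism such a server needs is starting a process without blocking the MCP request and reading its output later. A minimal sketch with the MCP Python SDK and subprocess; the tool names are illustrative assumptions, not the project's actual API.

```python
import subprocess
import tempfile

from mcp.server.fastmcp import FastMCP      # official MCP Python SDK ("mcp" on PyPI)

mcp = FastMCP("background-tasks")           # hypothetical server name
tasks: dict[int, tuple[subprocess.Popen, str]] = {}   # pid -> (process, log file path)


@mcp.tool()
def start_task(command: str) -> int:
    """Start a long-running shell command (e.g. a dev server) and return a task id."""
    log = tempfile.NamedTemporaryFile("w", suffix=".log", delete=False)
    proc = subprocess.Popen(command, shell=True, stdout=log, stderr=subprocess.STDOUT)
    tasks[proc.pid] = (proc, log.name)
    return proc.pid


@mcp.tool()
def task_output(task_id: int, tail: int = 50) -> str:
    """Report the task's status and the last `tail` lines of its captured output."""
    proc, log_path = tasks[task_id]
    status = "running" if proc.poll() is None else f"exited with code {proc.returncode}"
    with open(log_path, errors="ignore") as f:
        lines = f.readlines()[-tail:]
    return f"[{status}]\n" + "".join(lines)


@mcp.tool()
def stop_task(task_id: int) -> str:
    """Terminate a running task."""
    proc, _ = tasks[task_id]
    proc.terminate()
    return "terminated"


if __name__ == "__main__":
    mcp.run()
```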

Algorand MCP Server

Enables interaction with the Algorand blockchain network including account management, payments, asset creation and transfers, along with general utility tools. Provides secure mnemonic encryption and supports both testnet and mainnet environments.

Puppeteer MCP Server

Lets LLMs perform web-browsing tasks, capture screenshots, and execute JavaScript, using Puppeteer for browser automation.

MCP LLMS-TXT Documentation Server

An MCP server that provides tools for loading and searching documentation from any llms.txt source, giving users full control over context retrieval for LLMs in agents and IDE applications.

Binary Ninja MCP

A server that seamlessly integrates Binary Ninja's reverse-engineering capabilities with LLM (large language model) assistance, allowing AI tools such as Claude to interact with binary-analysis features in real time.

GoHighLevel MCP Server

Enables AI assistants to interact with GoHighLevel's complete API including contacts, opportunities, calendars, workflows, communications, and business management tools. Supports both Bearer token and OAuth2 authentication with automatic token management.

mcp-server

Semantic Scholar MCP Server

Enables access to the Semantic Scholar Academic Graph API for searching and retrieving detailed information about academic papers, authors, citations, and references.
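An MCP tool for this service essentially wraps calls to the public Academic Graph API, such as the paper-search request sketched below. The requested fields are one reasonable choice; the server's actual tool set and field selection may differ.

```python
import requests


def search_papers(query: str, limit: int = 5) -> list[dict]:
    """Query the Semantic Scholar Academic Graph paper-search endpoint."""
    response = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={
            "query": query,
            "limit": limit,
            "fields": "title,year,citationCount,externalIds",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("data", [])


if __name__ == "__main__":
    for paper in search_papers("model context protocol"):
        print(f'{paper.get("year")}  {paper.get("title")}  ({paper.get("citationCount")} citations)')
```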

Unifuncs