Discover Awesome MCP Servers

Extend your agent with 17,103 capabilities via MCP servers.

Okta MCP Server by CData

This project builds a read-only MCP server. For full read, write, update, delete, and action capabilities and a simplified setup, check out our free CData MCP Server for Okta (beta): https://www.cdata.com/download/download.aspx?sku=KUZK-V&type=beta

Backlog MCP Server

A Model Context Protocol server that enables Claude to interact with Backlog project management tools through API integration, allowing management of projects, issues, wiki pages and other Backlog resources.

Puppeteer MCP Server

Enables LLMs to perform web navigation tasks, capture screenshots, and execute JavaScript using Puppeteer for browser automation.

Xcode Diagnostics MCP Plugin

Connects to Xcode's build system to extract, parse, and display errors and warnings from your Swift projects, helping AI assistants quickly identify code issues without manually digging through build logs.

A1D MCP Server

A universal AI server that provides image and video processing tools (background removal, upscaling, vectorization, etc.) for any MCP-compatible client with simple setup.

Notion MCP Server

Enables interaction with Notion databases through the Notion API, supporting full CRUD operations on pages and databases. Supports advanced querying, filtering, sorting, and all property types with Docker deployment for easy integration with Cursor and Claude.
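
As a rough illustration of the kind of Notion API call such a server wraps (this is a sketch, not the project's code), a filtered and sorted database query with the official `notion-client` Python package might look like the following; the token, database ID, and property names are placeholders.

```python
# Illustrative sketch only (not this project's code): the kind of Notion API
# query such a server wraps, using the official notion-client Python package.
from notion_client import Client

notion = Client(auth="NOTION_API_TOKEN")  # placeholder token

# Query a database with a filter and a sort, as mentioned in the description
results = notion.databases.query(
    database_id="YOUR_DATABASE_ID",  # placeholder
    filter={"property": "Status", "select": {"equals": "In progress"}},
    sorts=[{"property": "Due", "direction": "ascending"}],
)
for page in results["results"]:
    print(page["id"], page["url"])
```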

mcp-ytTranscript

A conceptual outline and code snippets for a simple MCP (Model Context Protocol) server in Python that transcribes YouTube videos, with explanations and Portuguese-language considerations.

**Conceptual Outline**

1. **Server framework:** Use a lightweight framework such as Flask or FastAPI to expose a simple API endpoint.
2. **YouTube video download:** Use `yt-dlp` (a fork of `youtube-dl`) to download the video's audio; `yt-dlp` is generally preferred because it is actively maintained.
3. **Audio transcription:** Use a speech-to-text library such as `Whisper` (from OpenAI) or `SpeechRecognition` (which can call APIs like Google Cloud Speech-to-Text). Whisper is often preferred for its quality and its ability to run locally.
4. **Language handling:** Specify the desired language for transcription.
5. **Error handling:** Handle potential errors (invalid URLs, download failures, transcription errors).
6. **Return the transcription:** Return the transcription as a JSON response.

**Code Snippets (Python with Flask and Whisper)**

```python
from flask import Flask, request, jsonify
import yt_dlp
import whisper
import os

app = Flask(__name__)

# Load the Whisper model (choose a size based on your needs and resources)
model = whisper.load_model("base")  # Options: tiny, base, small, medium, large


def transcribe_youtube_video(youtube_url, language="en"):
    """
    Downloads audio from a YouTube video and transcribes it using Whisper.

    Args:
        youtube_url: The URL of the YouTube video.
        language: The desired transcription language (e.g., "en" for English, "pt" for Portuguese).

    Returns:
        The transcription as a string, or None if an error occurred.
    """
    try:
        # 1. Download the audio with yt-dlp and convert it to MP3 (requires ffmpeg)
        ydl_opts = {
            'format': 'bestaudio/best',
            'outtmpl': '%(id)s.%(ext)s',  # Save audio as <video_id>.<ext>
            'noplaylist': True,
            'postprocessors': [{
                'key': 'FFmpegExtractAudio',
                'preferredcodec': 'mp3',
            }],
        }
        with yt_dlp.YoutubeDL(ydl_opts) as ydl:
            info_dict = ydl.extract_info(youtube_url, download=True)
            video_id = info_dict.get('id')
            audio_file = f"{video_id}.mp3"

        # 2. Transcribe the audio with Whisper
        result = model.transcribe(audio_file, language=language)
        transcription = result["text"]

        # 3. Clean up the audio file
        os.remove(audio_file)

        return transcription
    except Exception as e:
        print(f"Error: {e}")
        return None


@app.route('/transcribe', methods=['POST'])
def transcribe_endpoint():
    """API endpoint to transcribe a YouTube video."""
    data = request.get_json()
    youtube_url = data.get('url')
    language = data.get('language', 'en')  # Default to English

    if not youtube_url:
        return jsonify({'error': 'Missing YouTube URL'}), 400

    transcription = transcribe_youtube_video(youtube_url, language)
    if transcription:
        return jsonify({'transcription': transcription})
    return jsonify({'error': 'Transcription failed'}), 500


if __name__ == '__main__':
    app.run(debug=True)  # Don't use debug=True in production!
```

**How to Run**

1. **Install dependencies:**
   ```bash
   pip install Flask yt-dlp openai-whisper
   ```
2. **Set the OpenAI API key (only if needed):** If you use the OpenAI Whisper API directly (less common now that the `whisper` library provides local models), set your API key as an environment variable:
   ```bash
   export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
   ```
3. **Run the script:**
   ```bash
   python your_script_name.py
   ```
4. **Send a request:** Use `curl`, Postman, or similar to send a POST request to `http://127.0.0.1:5000/transcribe` with a JSON payload:
   ```bash
   curl -X POST -H "Content-Type: application/json" \
        -d '{"url": "YOUR_YOUTUBE_URL", "language": "pt"}' \
        http://127.0.0.1:5000/transcribe
   ```

**Explanation and Portuguese Considerations**

* **`yt-dlp`:** Downloads the audio from the YouTube video; it is what actually fetches the audio data.
* **`whisper`:** Performs the speech-to-text transcription. The `language` parameter is key to accurate results in Portuguese. The model size ("base" in the example) affects accuracy and resource usage: larger models are generally more accurate but require more memory and processing power. Experiment to find the best balance for your needs.
* **Language code:** Use `"pt"` for Portuguese; Whisper supports many languages.
* **Flask:** Provides the web server functionality to receive requests and send responses.
* **Error handling:** The `try...except` block catches potential errors during download or transcription. More robust error handling might log errors to a file.
* **Temporary files:** The audio file is saved temporarily and then deleted, which keeps the server from filling up with audio files. Consider `tempfile.NamedTemporaryFile` for more secure temporary-file handling.
* **API endpoint:** The `/transcribe` endpoint receives the YouTube URL and language code in a JSON payload.
* **JSON response:** The transcription is returned as JSON, making it easy for other applications to parse.

**Important Notes and Improvements**

* **Model size:** The `whisper.load_model()` line is critical. Choose the model size that matches your hardware and accuracy requirements: "tiny" is the fastest but least accurate; "large" is the most accurate but needs significant resources.
* **Resource usage:** Transcription can be resource-intensive. Consider a more robust server setup (e.g., a WSGI server such as Gunicorn or uWSGI) if you expect a high volume of requests.
* **Asynchronous processing:** For better performance, especially with longer videos, consider an asynchronous task queue (such as Celery or Redis Queue) to offload transcription to a background worker so the API does not block while transcription runs.
* **Rate limiting:** Implement rate limiting to prevent abuse of the API.
* **Security:** In production, apply security best practices such as input validation, authentication, and authorization.
* **Portuguese dialects:** To support specific Portuguese dialects (e.g., Brazilian Portuguese), you may need to fine-tune the Whisper model or use a speech-to-text engine trained on that dialect. Whisper generally handles dialects reasonably well, but fine-tuning can improve accuracy.
* **Subtitles/timestamps:** Whisper can also generate subtitles with timestamps; modify the code to extract the subtitle information from the Whisper result if you need it.
* **Google Cloud Speech-to-Text:** For very high accuracy, especially for specific domains or accents, consider Google Cloud Speech-to-Text. It is a paid service but often outperforms open-source alternatives; you would install the `google-cloud-speech` library and authenticate with Google Cloud. The `SpeechRecognition` library can also be used to interface with it.
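
Since this catalog entry describes an MCP server rather than a plain REST API, the sketch below shows one way the same transcription function could be exposed as an MCP tool using the official MCP Python SDK's `FastMCP` helper. The server name, tool name, and structure are illustrative assumptions, not taken from the project.

```python
# Hypothetical sketch: exposing transcribe_youtube_video() (defined above) as an
# MCP tool with the official MCP Python SDK (pip install mcp). Names are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("yt-transcript")

@mcp.tool()
def transcribe_video(url: str, language: str = "en") -> str:
    """Download a YouTube video's audio and return its Whisper transcription."""
    text = transcribe_youtube_video(url, language)
    if text is None:
        raise RuntimeError("Transcription failed")
    return text

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport, suitable for Claude Desktop
```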

AI Video Generator MCP Server

A Model Context Protocol server that generates videos from text prompts and/or images using AI models (Luma Ray2 Flash and Kling v1.6 Pro), with configurable parameters such as aspect ratio, resolution, and duration.

Remote MCP Server Authless

A deployable Model Context Protocol server on Cloudflare Workers that doesn't require authentication, allowing tools to be added and used from Cloudflare AI Playground or Claude Desktop.

mcp-jira-stdio

MCP server for Jira integration with stdio transport. Enables reading, writing, and managing Jira issues and projects directly from Claude Desktop. Supports issue creation, updates, comments, JQL search, and project management.
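
As an illustration of the operations listed (not this project's actual code), a JQL search, issue creation, and comment with the community `jira` Python package might look like the sketch below; the server URL, credentials, and project key are placeholders.

```python
# Illustrative sketch of the Jira operations such a server exposes,
# using the community `jira` package (pip install jira). Values are placeholders.
from jira import JIRA

jira = JIRA(server="https://your-domain.atlassian.net",
            basic_auth=("you@example.com", "API_TOKEN"))

# JQL search, as mentioned in the description
for issue in jira.search_issues("project = DEMO AND status = 'To Do'", maxResults=10):
    print(issue.key, issue.fields.summary)

# Create an issue
new_issue = jira.create_issue(fields={
    "project": {"key": "DEMO"},
    "summary": "Example issue created via the API",
    "issuetype": {"name": "Task"},
})

# Add a comment
jira.add_comment(new_issue, "Created as part of an MCP tool call.")
```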

MCP Background Task Server

A Model Context Protocol server that enables running and managing long-running background tasks (like development servers, builds) from within Claude Desktop or other MCP-compatible clients.
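
A minimal sketch of the underlying pattern, spawning and tracking long-running processes from MCP tool calls, is shown below. It assumes the MCP Python SDK's `FastMCP`; the tool names and in-memory task registry are illustrative, not taken from the project.

```python
# Illustrative sketch of running/tracking background tasks via MCP tools.
# Assumes the MCP Python SDK (pip install mcp); tool names are hypothetical.
import subprocess
import uuid

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("background-tasks")
tasks: dict[str, subprocess.Popen] = {}  # task id -> running process

@mcp.tool()
def start_task(command: str) -> str:
    """Start a long-running shell command (e.g. a dev server) and return a task id."""
    proc = subprocess.Popen(command, shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    task_id = str(uuid.uuid4())
    tasks[task_id] = proc
    return task_id

@mcp.tool()
def task_status(task_id: str) -> str:
    """Report whether a task is still running or has exited."""
    code = tasks[task_id].poll()
    return "running" if code is None else f"exited with code {code}"

@mcp.tool()
def stop_task(task_id: str) -> str:
    """Terminate a running task."""
    tasks[task_id].terminate()
    return "terminated"

if __name__ == "__main__":
    mcp.run()
```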

Algorand MCP Server

Enables interaction with the Algorand blockchain network including account management, payments, asset creation and transfers, along with general utility tools. Provides secure mnemonic encryption and supports both testnet and mainnet environments.
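
As a sketch of the account-management and mnemonic-encryption ideas mentioned above (not this project's code), account generation and mnemonic encryption with the official `py-algorand-sdk` and the `cryptography` package could look like this; key storage details are assumptions.

```python
# Illustrative sketch (not this project's code): generating an Algorand account
# and encrypting its mnemonic at rest, using py-algorand-sdk and cryptography.
from algosdk import account, mnemonic
from cryptography.fernet import Fernet

# Create an account and derive its 25-word mnemonic
private_key, address = account.generate_account()
phrase = mnemonic.from_private_key(private_key)
print("Address:", address)

# Encrypt the mnemonic before storing it (the Fernet key must itself be kept
# safely, e.g. in an environment variable or OS keychain)
fernet_key = Fernet.generate_key()
encrypted = Fernet(fernet_key).encrypt(phrase.encode())

# Later: decrypt and recover the signing key
recovered = mnemonic.to_private_key(Fernet(fernet_key).decrypt(encrypted).decode())
assert recovered == private_key
```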

iMail-mcp

Retrieves mail from iCloud.

MCP LLMS-TXT Documentation Server

An MCP server that provides tools for loading and searching documentation from any llms.txt source, giving users full control over context retrieval for LLMs in agents and IDE applications.

GoHighLevel MCP Server

Enables AI assistants to interact with GoHighLevel's complete API including contacts, opportunities, calendars, workflows, communications, and business management tools. Supports both Bearer token and OAuth2 authentication with automatic token management.

MRP Calculator MCP Server

Provides Material Requirements Planning (MRP) tools to calculate delivery schedules, determine order requirements, and perform MRP period calculations based on inventory levels, forecasts, and ordering constraints. (A sketch of the standard period-netting arithmetic follows below.)
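
To make the period calculation concrete, here is a small sketch of standard MRP netting logic: gross requirements are netted against projected on-hand inventory and scheduled receipts, and any shortfall is lot-sized into a planned order. The function and field names are illustrative, not the project's API.

```python
# Illustrative sketch of standard MRP period netting (not this project's code).
import math

def mrp_periods(gross_requirements, on_hand, scheduled_receipts, lot_size=1):
    """Per period: net requirement = max(0, gross - on hand - scheduled receipts);
    planned orders are rounded up to the lot size."""
    plan = []
    inventory = on_hand
    for period, gross in enumerate(gross_requirements, start=1):
        available = inventory + scheduled_receipts.get(period, 0)
        net = max(0, gross - available)
        planned_order = math.ceil(net / lot_size) * lot_size if net else 0
        inventory = available + planned_order - gross
        plan.append({"period": period, "gross": gross, "net": net,
                     "planned_order": planned_order, "ending_inventory": inventory})
    return plan

# Example: 4 periods of demand, 40 units on hand, a receipt of 20 in period 2, lot size 25
print(mrp_periods([30, 30, 30, 30], on_hand=40, scheduled_receipts={2: 20}, lot_size=25))
```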

AI Sticky Notes

A Python-based MCP server that allows users to create, read, and manage digital sticky notes with Claude integration for AI-powered note summarization.
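
A minimal sketch of what such a Python MCP server typically looks like, assuming the MCP Python SDK's `FastMCP`; the tool names and simple file-based storage here are illustrative, not the project's actual implementation.

```python
# Illustrative sticky-notes MCP server sketch (assumes pip install mcp).
import os

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sticky-notes")
NOTES_FILE = "notes.txt"  # hypothetical storage location

@mcp.tool()
def add_note(text: str) -> str:
    """Append a sticky note to the notes file."""
    with open(NOTES_FILE, "a") as f:
        f.write(text + "\n")
    return "Note saved."

@mcp.tool()
def read_notes() -> str:
    """Return all saved notes, or a placeholder if none exist."""
    if not os.path.exists(NOTES_FILE):
        return "No notes yet."
    with open(NOTES_FILE) as f:
        return f.read()

if __name__ == "__main__":
    mcp.run()
```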

Tigris MCP Server

ChillMCP

Enables AI agents to take breaks and manage stress levels through 8 different break activities like watching Netflix, coffee breaks, and bathroom breaks. Features automatic stress accumulation, boss alert systems, and natural language commands for work-life balance simulation.

MCP Log Analyzer

A Model Context Protocol server that analyzes various log types on Windows systems, allowing users to register, query, and analyze logs from different sources including Windows Event Logs, ETL files, and structured/unstructured text logs.

Perplexity MCP Server

A Model Context Protocol (MCP) server for the Perplexity API that unlocks Perplexity's search-augmented AI capabilities for LLM agents. Features robust error handling, secure input validation, and transparent reasoning via the showThinking parameter. Built with type safety, a modular architecture, and production-ready utilities.

Remote MCP Server

A Cloudflare Workers-based server that implements the Model Context Protocol (MCP), allowing AI assistants like Claude to access custom tools without authentication.

Health & Fitness Coach MCP

A comprehensive AI-powered fitness tracking application that enables AI tools to interact intelligently with user fitness data, providing personalized workout plans, nutrition tracking, and progress analysis through natural language.

Google Search MCP Server

An MCP server that provides Google Search functionality with automatic API key rotation and intelligent quota management, enabling natural language search queries with advanced filtering options.
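
The key-rotation idea described above can be sketched as follows; the quota-error handling and use of the Custom Search endpoint are assumptions for illustration, not the project's actual code.

```python
# Illustrative sketch of automatic API key rotation on quota errors
# (not this project's code; the 429 handling is an assumption).
from itertools import cycle

import requests

API_KEYS = ["KEY_1", "KEY_2", "KEY_3"]  # placeholder keys
_key_cycle = cycle(API_KEYS)

def google_search(query: str, cx: str) -> dict:
    """Try each key in turn, rotating to the next when a quota error (HTTP 429) is hit."""
    for _ in range(len(API_KEYS)):
        key = next(_key_cycle)
        resp = requests.get(
            "https://www.googleapis.com/customsearch/v1",
            params={"key": key, "cx": cx, "q": query},
            timeout=10,
        )
        if resp.status_code == 429:  # quota exhausted for this key; rotate
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError("All API keys have exhausted their quota.")
```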

NHL MCP Server

Provides access to live NHL game data, player and team statistics, standings, schedules, playoff brackets, and historical comparisons through natural language queries using the official NHL API.

Logseq MCP Tools

A Model Context Protocol server that enables AI agents to interact with local Logseq knowledge graphs, supporting operations such as creating and editing pages and blocks, searching content, and managing journal entries.

baidu-ai-search

NotionMCP

Enables AI assistants to search, read, summarize, and analyze sentiment of Notion pages and databases, turning your Notion workspace into an intelligent, queryable knowledge system.

HC3 MCP Server

Enables AI assistants to interact with Fibaro Home Center 3 smart home systems through natural language commands. Provides comprehensive device control, scene management, QuickApp development, and system monitoring capabilities via the HC3 REST API.

PCILeech MCP Server

Enables AI assistants to perform DMA-based memory operations through PCILeech hardware using natural language commands, supporting memory reading, writing, and multi-format visualization for debugging and security research.