Discover Awesome MCP Servers

Extend your agent with 24,070 capabilities via MCP servers.

MPC Docs Server

A simple Model Context Protocol server that lets you search and retrieve relevant documentation snippets from the official Langchain, Llama Index, and OpenAI documentation.

OParl MCP Server

Provides AI models with seamless access to OParl parliamentary data APIs through the Model Context Protocol. Enables natural language queries for parliamentary meetings, documents, organizations, representatives, and other government data across multiple OParl implementations.

Detox MCP

Integrates the Detox mobile testing framework with Claude, enabling test execution, configuration management, code generation, and device management for React Native E2E testing.

Expense Tracker MCP Server

An AI-powered financial management engine that enables budgeting, smart expense tracking, and affordability analytics via the Model Context Protocol. It allows AI assistants to interact with financial data through natural language for tasks like category detection, bulk expense ingestion, and budget impact predictions.

AlphaGenome MCP Server

Enables AI-powered genomic variant analysis including variant impact prediction, regulatory element discovery, and batch variant scoring. Currently operates in mock mode as a proof-of-concept awaiting the public release of Google DeepMind's AlphaGenome API.

MCP-123

A minimal Python package for easily setting up and running MCP servers and clients, allowing functions to be automatically exposed as tools that LLMs can use with just 2 lines of code.

Rust MCP Stdio Server Test

Unofficial SDK for MCP servers in Rust.

Midjourney Best Experience MCP Server

An MCP server that provides access to the Midjourney Best Experience API for generating and manipulating images. It enables users to execute prompts and perform actions like upscaling, variations, and zooming directly within MCP-compatible applications.

MCP Database Server

Mirror of

Travel MCP Server

A Model Context Protocol server that enables flight searching, booking, and payment processing. It also provides access to resources like user profiles, pricing rules, and company travel policies.

MLX Whisper MCP Server

Local MCP server for MLX Whisper transcription

FastMCP Server Template

A production-ready MCP server template that enables developers to quickly build and deploy MCP servers with dynamic tool/resource loading, YAML-based prompts, and seamless OpenShift deployment. Supports both local development with hot-reload and production HTTP deployment with optional JWT authentication.

GCP Sales Analytics MCP Server

Enables querying both Google Cloud SQL (PostgreSQL) and BigQuery public datasets through an AI agent that automatically routes questions to the appropriate data source for sales and e-commerce analytics.

MCP Server transport for Hono applications

Google Workspace MCP Server

Enables AI assistants to manage Google Calendar events and Gmail emails through natural language. Supports creating/listing calendar events with smart color coding, sending/reading emails, and advanced search capabilities across both services.

TEST MCP USDT MCP Server

Enables AI agents and LLMs to interact with the TEST MCP USDT API through standardized tools using the Model Context Protocol. It provides a containerized environment for executing USDT-related API operations and asynchronous tool handling.

Dependency Checker MCP Server

Enables security scanning for npm dependencies by checking manifest and lockfiles against the OSV.dev and Socket.dev vulnerability databases. It provides tools to detect vulnerabilities in specific packages and retrieve detailed technical reports for identified security issues.
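For context, OSV.dev exposes a public `POST /v1/query` endpoint that this kind of scanner can build on. The sketch below is not this server's actual code, just a minimal stdlib-only lookup for a single npm package against OSV.dev:

```python
import json
from urllib import request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def build_osv_query(name: str, version: str, ecosystem: str = "npm") -> dict:
    # Request-body shape for OSV.dev's /v1/query endpoint.
    return {"package": {"name": name, "ecosystem": ecosystem},
            "version": version}

def known_vulns(name: str, version: str) -> list:
    # POST the query and return the matching advisories; the response's
    # "vulns" key is absent when there are none.
    body = json.dumps(build_osv_query(name, version)).encode()
    req = request.Request(OSV_QUERY_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("vulns", [])
```

Socket.dev, by contrast, requires an API key; a full scanner like this server presumably layers lockfile parsing and report formatting on top of per-package lookups like this one.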

Hacker News

Parsing HTML content from a website like Hacker News and providing structured data in Portuguese requires a multi-step process. Here's a breakdown, along with considerations for translation:

**1. Data Extraction (Web Scraping):**

* **Libraries:** You'll need a library to fetch the HTML and parse it. Popular choices in Python are:
  * `requests`: for fetching the HTML content.
  * `Beautiful Soup 4`: for parsing the HTML and navigating the DOM (Document Object Model).
  * `lxml`: a faster XML/HTML processing library that can be used with Beautiful Soup.
* **Hacker News structure:** Hacker News has a relatively consistent HTML structure, but it is subject to change. Inspect the HTML source of each page (top, new, ask, show, jobs) to identify the elements that contain the data you want to extract. Key elements include:
  * `<tr>` (table rows): Hacker News uses tables extensively.
  * `<td>` (table data cells): data is organized within table cells.
  * `<a>` (anchor tags): links to the stories and comments.
  * `<span>` (span tags): often used for points, comments, and other metadata.
* **Extraction logic:** Your code will:
  1. Fetch the HTML content of the desired Hacker News page (e.g., `https://news.ycombinator.com/`).
  2. Parse the HTML with Beautiful Soup.
  3. Locate the relevant elements by tag, class, or ID.
  4. Extract the data from those elements (story title, URL, points, number of comments, author, age).
  5. Organize the extracted data into a structured format (e.g., a list of dictionaries).

**2. Data Structuring:**

* **Data model:** Define a data structure to represent each story. For example:

```python
class Story:
    def __init__(self, title, url, points, comments, author, age, story_type):
        self.title = title
        self.url = url
        self.points = points
        self.comments = comments
        self.author = author
        self.age = age
        self.story_type = story_type  # "top", "new", "ask", "show", "jobs"

    def __str__(self):
        return (f"Title: {self.title}, URL: {self.url}, Points: {self.points}, "
                f"Comments: {self.comments}, Author: {self.author}, "
                f"Age: {self.age}, Type: {self.story_type}")
```

* **Data cleaning:** Clean the extracted data. This might involve:
  * removing extra whitespace;
  * converting strings to numbers (e.g., points, comments);
  * handling missing data (e.g., stories without URLs).

**3. Translation to Portuguese:**

* **Translation libraries/APIs:** Use a translation library or API to translate the relevant text fields (e.g., story title, author, age). Popular options include:
  * `googletrans`: a free, easy-to-use Python wrapper for Google Translate. (Note: the free tier has limitations and may be unreliable for large-scale use.)
  * `DeepL API`: a paid API that offers high-quality translations.
  * `Microsoft Translator API`: another paid option.
  * `translate-toolkit`: a more complex but powerful library for localization and translation.
* **Translation logic:**
  1. Identify the text fields that need translation.
  2. Translate each field from English to Portuguese with the chosen library/API.
  3. Replace the original English text with the Portuguese translation in your data structure.

**4. Output:**

* **Format:** Choose a format for outputting the structured data. Common options include:
  * JSON: a widely used format for data exchange.
  * CSV: suitable for tabular data.
  * Python dictionaries/lists: if you're using the data within a Python application.
* **Display/storage:** Decide how to display or store the translated data: print it to the console, save it to a file, render it in a web application, or store it in a database.

**Example code (Python with Beautiful Soup and `googletrans`; for demonstration only, as `googletrans` can be unreliable):**

```python
import requests
from bs4 import BeautifulSoup
from googletrans import Translator  # pip install googletrans==4.0.0-rc1


class Story:
    def __init__(self, title, url, points, comments, author, age, story_type):
        self.title = title
        self.url = url
        self.points = points
        self.comments = comments
        self.author = author
        self.age = age
        self.story_type = story_type

    def __str__(self):
        return (f"Título: {self.title}, URL: {self.url}, Pontos: {self.points}, "
                f"Comentários: {self.comments}, Autor: {self.author}, "
                f"Idade: {self.age}, Tipo: {self.story_type}")


def scrape_hacker_news(url, story_type):
    try:
        response = requests.get(url)
        response.raise_for_status()  # Raise HTTPError for 4xx/5xx responses
        soup = BeautifulSoup(response.content, 'html.parser')
        stories = []
        rows = soup.find_all('tr', class_='athing')
        for row in rows:
            title_element = row.find('a', class_='storylink')
            if not title_element:
                continue  # Skip rows without a title link
            title = title_element.text.strip()
            story_url = title_element['href']

            # The next sibling row holds the details (points, author, comments)
            details_row = row.find_next_sibling('tr')
            if not details_row:
                continue
            score_element = details_row.find('span', class_='score')
            points = int(score_element.text.split()[0]) if score_element else 0
            age_element = details_row.find('span', class_='age')
            age = age_element.text.strip() if age_element else "Unknown"
            author_element = details_row.find('a', class_='hnuser')
            author = author_element.text.strip() if author_element else "Unknown"

            # The last link is usually the comments link
            comments_link = details_row.find_all('a')[-1]
            comments_text = comments_link.text.strip()
            comments = 0
            if 'comment' in comments_text:
                try:
                    # Handle the &nbsp; character between count and "comments"
                    comments = int(comments_text.split('\xa0')[0])
                except ValueError:
                    comments = 0

            stories.append(Story(title, story_url, points, comments,
                                 author, age, story_type))
        return stories
    except requests.exceptions.RequestException as e:
        print(f"Error fetching URL: {e}")
        return []
    except Exception as e:
        print(f"Error parsing HTML: {e}")
        return []


def translate_story(story, translator):
    try:
        story.title = translator.translate(story.title, dest='pt').text
        story.author = translator.translate(story.author, dest='pt').text
        story.age = translator.translate(story.age, dest='pt').text
        return story
    except Exception as e:
        print(f"Translation error: {e}")
        return story


if __name__ == "__main__":
    top_stories_url = "https://news.ycombinator.com/"
    new_stories_url = "https://news.ycombinator.com/newest"
    # ... other URLs for ask, show, jobs

    translator = Translator()
    top_stories = scrape_hacker_news(top_stories_url, "top")
    new_stories = scrape_hacker_news(new_stories_url, "new")

    for story in top_stories + new_stories:
        print(translate_story(story, translator))
```

**Important considerations:**

* **Website terms of service:** Always check the site's terms before scraping. Hacker News may have rules against scraping or rate limits; respect its robots.txt file.
* **HTML structure changes:** Websites change their HTML frequently. Your scraper will need updating whenever the Hacker News HTML changes; use robust selectors that are less likely to break.
* **Rate limiting:** Don't make too many requests in a short period, or you may get blocked. Implement delays between requests.
* **Error handling:** Implement robust error handling so exceptions don't crash your scraper.
* **Translation quality:** Machine translation is imperfect; quality depends on the complexity of the text and the capabilities of the translation service. Consider a professional translator for critical applications.
* **`googletrans` issues:** The `googletrans` library is known for reliability and rate-limiting problems; a paid translation API is usually better for production environments.
* **Character encoding:** Handle character encoding correctly, especially for non-ASCII characters; UTF-8 is generally a good choice.
* **Asynchronous requests:** For faster scraping, consider asynchronous requests with libraries like `aiohttp`.

This guide provides a solid foundation for building a Hacker News scraper and translator. Adapt the code to your needs and always respect the website's terms of service.

Cursor Agent MCP Server

Enables cost-effective repository analysis, code search, file editing, and task planning by wrapping the cursor-agent CLI through focused tools. Reduces token usage by offloading heavy thinking tasks from Claude to specialized operations with configurable output formats.

Toy MCP Server

A simple MCP server that provides two tools: one for counting specific letter occurrences within words and another for generating UUIDv7s.
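To give a sense of what those two tools compute (a plain-Python illustration, not the server's own code): letter counting is a case-insensitive substring count, and a UUIDv7 packs a 48-bit millisecond Unix timestamp ahead of version, variant, and random bits (RFC 9562):

```python
import os
import time

def count_letter(word: str, letter: str) -> int:
    # Case-insensitive count of a single letter's occurrences in a word.
    return word.lower().count(letter.lower())

def uuid7() -> str:
    # UUIDv7 per RFC 9562: 48-bit Unix-ms timestamp, 4-bit version (7),
    # 12 random bits, 2-bit variant (0b10), then 62 more random bits.
    ts = int(time.time() * 1000) & ((1 << 48) - 1)
    rand_a = int.from_bytes(os.urandom(2), "big") & 0xFFF
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)
    value = (ts << 80) | (0x7 << 76) | (rand_a << 64) | (0b10 << 62) | rand_b
    h = f"{value:032x}"
    return f"{h[:8]}-{h[8:12]}-{h[12:16]}-{h[16:20]}-{h[20:]}"
```

Because the timestamp occupies the most significant bits, successive UUIDv7s sort roughly in creation order, which is the format's main advantage over UUIDv4.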

MCP Server Demo

UI Flowchart Creator

Lets users create UI flowcharts, generating visualizations of user interfaces and interactions through an easy-to-use API on MCP-compatible systems.

Supabase Notes

A TypeScript-based MCP server for a simple NextJS notes system that supports creating and summarizing text notes using MCP concepts.

Radare2 MCP Server

MCP stdio server for radare2.

Tello Drone MCP Server

An MCP server for controlling a drone.

Security Context MCP Server

Provides instant access to authoritative security documentation from organizations like OWASP, NIST, and major cloud providers through natural language semantic search. It enables users to retrieve security best practices, frameworks, and vulnerability information directly from a locally cached knowledge base.

MCP Commute Assistant

A smart commute assistant that monitors travel routes via the Amap API and sends automated notifications to DingTalk. It enables users to schedule daily route checks and receive real-time traffic updates using a modular MCP-based architecture.

MCP Send to Feishu Server

Notion ReadOnly MCP Server

An optimized read-only server for AI assistants to efficiently query and retrieve Notion content, featuring parallel processing and a minimized toolset of 7 essential read-only operations.

Playwright MCP Server

Provides browser automation capabilities using Playwright, enabling users to navigate websites, extract content, take screenshots, and interact with web pages through natural language prompts.