Discover Awesome MCP Servers

Extend your agent with 14,529 capabilities via MCP servers.

Tokens MCP

MCP server for token metrics

TAPD MCP Server

MCP Inspector Server

A basic MCP server template that provides a foundation for building custom tools, resources, and prompts. Serves as a starting point for developers to create their own MCP server functionality.
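
To make the template idea concrete, here is a minimal sketch using the official MCP Python SDK's FastMCP API; the `hello`, `greeting`, and `review_prompt` names are illustrative, not taken from this repository.

```python
# Minimal MCP server template sketch (official Python SDK, FastMCP API).
# Illustrative only -- the tool/resource/prompt names are not from this repo.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def hello(name: str) -> str:
    """Return a greeting for the given name."""
    return f"Hello, {name}!"

@mcp.resource("greeting://{name}")
def greeting(name: str) -> str:
    """Expose the same greeting as a readable resource."""
    return f"Hello, {name}!"

@mcp.prompt()
def review_prompt(code: str) -> str:
    """Build a simple code-review prompt."""
    return f"Please review this code:\n\n{code}"

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```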

blender-mcp

MCP server for executing code in Blender

Hive Intelligence

The Ultimate Cryptocurrency MCP for AI Assistants: unified access to crypto, DeFi, and Web3 analytics.

MCP-123

A minimal Python package for easily setting up and running MCP servers and clients, allowing functions to be automatically exposed as tools that LLMs can use with just 2 lines of code.
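
The package's exact API is not shown here, so the following is a hypothetical sketch of what the advertised two-line setup could look like; `mcp123` and `run_server` are assumed names, not verified against the project.

```python
# Hypothetical sketch -- "mcp123.run_server" is an assumed name, not the
# package's verified API; it only illustrates the advertised two-line setup.
def word_count(text: str) -> int:
    """An ordinary function to expose to an LLM as a tool."""
    return len(text.split())

import mcp123                     # line 1: import the package (assumed name)
mcp123.run_server([word_count])   # line 2: expose the function as an MCP tool
```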

Rust MCP Stdio Server Test

Unofficial SDK for MCP servers in Rust

trash-cleaner-mcp-server

A desktop MCP service that scans for and removes junk files to keep your system clean and optimized.
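
For a sense of what such a scan involves, here is a minimal sketch; the junk-file patterns below are illustrative assumptions, not this project's actual rules.

```python
# Sketch of a junk-file scan such a service might perform -- the patterns
# are illustrative assumptions, not this project's actual cleanup rules.
from pathlib import Path

JUNK_PATTERNS = ["*.tmp", "*.log", "*~", ".DS_Store"]  # assumed patterns

def scan_junk(root: str) -> list[tuple[Path, int]]:
    """Return (path, size_in_bytes) for files matching junk patterns under root."""
    hits = []
    for pattern in JUNK_PATTERNS:
        for path in Path(root).rglob(pattern):
            if path.is_file():
                hits.append((path, path.stat().st_size))
    return hits

for path, size in scan_junk("/tmp"):
    print(f"{size:>10}  {path}")
```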

Playwright MCP

A Model Context Protocol server that provides browser automation capabilities using Playwright, enabling LLMs to interact with web pages through structured accessibility snapshots without requiring screenshots or visually-tuned models.

UI Flowchart Creator

Lets users create UI flowcharts, generating visualizations of user interfaces and interactions through an easy-to-use API on MCP-compatible systems.

Google Workspace MCP Server

Enables AI assistants to manage Google Calendar events and Gmail emails through natural language. Supports creating/listing calendar events with smart color coding, sending/reading emails, and advanced search capabilities across both services.

Tello Drone MCP Server

An MCP server for controlling a Tello drone
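
As a rough illustration (not this repo's code), here is how a control tool might look; the Tello SDK itself genuinely works by sending plain-text commands over UDP to 192.168.10.1:8889, but the tool name and structure below are assumptions.

```python
# Sketch of a Tello control tool -- illustrative, not this repo's code.
# The Tello SDK accepts plain-text commands over UDP at 192.168.10.1:8889.
import socket
from mcp.server.fastmcp import FastMCP

TELLO_ADDR = ("192.168.10.1", 8889)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 9000))   # the drone replies to the sending port
sock.settimeout(5.0)

mcp = FastMCP("tello")

@mcp.tool()
def send_command(command: str) -> str:
    """Send a raw Tello SDK command (e.g. 'command', 'takeoff', 'land')."""
    sock.sendto(command.encode("ascii"), TELLO_ADDR)
    try:
        reply, _ = sock.recvfrom(1024)
        return reply.decode("ascii", errors="replace")
    except socket.timeout:
        return "no response"

if __name__ == "__main__":
    mcp.run()
```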

Hacker News

Parses HTML content from Hacker News and provides structured story data (title, URL, points, comment count, author, age) for the top, new, ask, show, and jobs feeds.
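
For a feel of what such parsing involves, here is a minimal sketch using `requests` and Beautiful Soup; the selectors (`tr.athing`, `span.titleline`) match Hacker News's current markup and will break if the site's HTML changes.

```python
# Minimal sketch of scraping the HN front page -- selectors reflect the
# current markup (tr.athing, span.titleline) and are subject to change.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://news.ycombinator.com/", timeout=10)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

for row in soup.select("tr.athing"):
    link = row.select_one("span.titleline > a")
    subtext = row.find_next_sibling("tr")  # points/author live in the next row
    score = subtext.select_one("span.score") if subtext else None
    print({
        "title": link.get_text(strip=True),
        "url": link.get("href"),
        "points": int(score.get_text().split()[0]) if score else 0,
    })
```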

Cursor Agent MCP Server

Enables cost-effective repository analysis, code search, file editing, and task planning by wrapping the cursor-agent CLI through focused tools. Reduces token usage by offloading heavy thinking tasks from Claude to specialized operations with configurable output formats.

Toy MCP Server

A simple MCP server that provides two tools: one for counting specific letter occurrences within words and another for generating UUIDv7s.
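
Here is a sketch of how these two tools could be written with the official MCP Python SDK; this is not the repository's actual source, and `uuid7()` is hand-rolled from the RFC 9562 field layout because the standard library has no UUIDv7 helper.

```python
# Sketch of the two described tools -- not this repository's actual source.
import secrets
import time
import uuid

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("toy-demo")

def uuid7() -> uuid.UUID:
    """Build a UUIDv7 per RFC 9562: 48-bit ms timestamp + version/variant + randomness."""
    value = (time.time_ns() // 1_000_000) << 80   # unix_ts_ms, bits 127..80
    value |= 0x7 << 76                            # version 7, bits 79..76
    value |= secrets.randbits(12) << 64           # rand_a, bits 75..64
    value |= 0b10 << 62                           # RFC 4122 variant, bits 63..62
    value |= secrets.randbits(62)                 # rand_b, bits 61..0
    return uuid.UUID(int=value)

@mcp.tool()
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a letter within a word."""
    return word.lower().count(letter.lower())

@mcp.tool()
def generate_uuid7() -> str:
    """Return a freshly generated UUIDv7 string."""
    return str(uuid7())

if __name__ == "__main__":
    mcp.run()
```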

MCP Server Demo

MCP Send to Feishu Server

MCP Async Server

This server implements the Model Context Protocol (MCP) to handle asynchronous tasks with real-time status tracking, robust error handling, and automatic resource management.
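
A common shape for this is a submit/poll pair of tools; the sketch below shows that pattern, but the tool names and behavior are assumptions based on the description, not this project's API.

```python
# Sketch of the submit/poll pattern the description implies -- tool names
# and behavior are assumptions, not this project's actual API.
import asyncio
import uuid

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("async-demo")
tasks: dict[str, asyncio.Task] = {}

async def slow_job(seconds: float) -> str:
    await asyncio.sleep(seconds)
    return f"done after {seconds}s"

@mcp.tool()
async def start_task(seconds: float) -> str:
    """Kick off a background job and return its task id immediately."""
    task_id = str(uuid.uuid4())
    tasks[task_id] = asyncio.create_task(slow_job(seconds))
    return task_id

@mcp.tool()
async def task_status(task_id: str) -> str:
    """Report whether a task is still running, finished, or unknown."""
    task = tasks.get(task_id)
    if task is None:
        return "unknown task id"
    return task.result() if task.done() else "running"

if __name__ == "__main__":
    mcp.run()
```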

Facebook Ads MCP Server by CData

Diningcode MCP Server

MCP SBOM Server

A Model Context Protocol server that runs Trivy scans to generate Software Bills of Materials (SBOMs) in CycloneDX format.
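
Here is a minimal sketch of the underlying invocation; the `generate_sbom` tool name is illustrative, but `trivy fs --format cyclonedx` is a real Trivy command.

```python
# Sketch of wrapping a Trivy filesystem scan as an MCP tool -- the tool
# name is illustrative; the Trivy CLI flags are real.
import json
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sbom-demo")

@mcp.tool()
def generate_sbom(path: str) -> str:
    """Run a Trivy filesystem scan and return a CycloneDX SBOM as JSON."""
    result = subprocess.run(
        ["trivy", "fs", "--format", "cyclonedx", path],
        capture_output=True, text=True, check=True,
    )
    json.loads(result.stdout)  # sanity-check that Trivy produced valid JSON
    return result.stdout

if __name__ == "__main__":
    mcp.run()
```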

Isaac Sim MCP Extension and MCP Server

MCP extension and simulation server for Isaac Sim

mcp-git

Provides comprehensive Git operations as tools for AI assistants and applications. This server enables AI systems to interact with Git repositories: initializing repos, fetching changes, creating commits, viewing logs, checking status, and more.
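
One straightforward way to expose Git operations as tools is to shell out to the `git` CLI; the sketch below is illustrative, not mcp-git's actual implementation.

```python
# Sketch of Git operations as MCP tools via the git CLI -- illustrative,
# not mcp-git's actual implementation.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("git-demo")

def run_git(repo: str, *args: str) -> str:
    """Run a git subcommand in the given repository and return its stdout."""
    out = subprocess.run(
        ["git", "-C", repo, *args],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

@mcp.tool()
def git_status(repo: str) -> str:
    """Return machine-readable `git status` for the repository at `repo`."""
    return run_git(repo, "status", "--porcelain")

@mcp.tool()
def git_log(repo: str, max_count: int = 10) -> str:
    """Return the latest commits, one per line."""
    return run_git(repo, "log", f"--max-count={max_count}", "--oneline")

if __name__ == "__main__":
    mcp.run()
```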

Figma MCP Server

Enables AI-powered coding tools such as Cursor, Windsurf, and Cline to access Figma design files, allowing more accurate code generation directly from Figma designs.

MPC Docs Server

A simple Model Context Protocol server for searching and retrieving relevant documentation snippets from the official Langchain, Llama Index, and OpenAI docs.

Google Ad Manager MCP Server by CData

This read-only MCP Server allows you to connect to Google Ad Manager data from Claude Desktop through CData JDBC Drivers. Free (beta) read/write servers available at https://www.cdata.com/solutions/mcp

Secure Billing MCP Server

Enables secure interaction with billing systems through comprehensive PII/PCI redaction, field allowlisting, and multi-layer security controls. Protects sensitive financial data while providing LLMs safe access to account, subscription, and invoice information.
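
The allowlist-plus-redaction idea can be sketched in a few lines; the field names and regex patterns below are illustrative assumptions, not this server's actual rules.

```python
# Sketch of field allowlisting + PII/PCI redaction -- field names and
# patterns are illustrative assumptions, not this server's actual rules.
import re

ALLOWED_FIELDS = {"account_id", "plan", "status", "invoice_total"}  # assumed
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")    # crude card-number pattern
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(value: str) -> str:
    """Mask card numbers and email addresses in a string."""
    value = CARD_RE.sub("[REDACTED-PAN]", value)
    return EMAIL_RE.sub("[REDACTED-EMAIL]", value)

def sanitize(record: dict) -> dict:
    """Keep only allowlisted fields, then redact PII in string values."""
    return {
        k: redact(v) if isinstance(v, str) else v
        for k, v in record.items()
        if k in ALLOWED_FIELDS
    }

print(sanitize({
    "account_id": "A-1001",
    "plan": "pro",
    "owner_email": "jane@example.com",              # dropped: not allowlisted
    "status": "card 4111 1111 1111 1111 declined",  # kept, number redacted
}))
```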

mcp-server

Base MCP Server

Provides on-chain tools for Claude AI to interact with the Base blockchain and the Coinbase API, enabling wallet management, fund transfers, and smart contract deployment.

TickTick MCP Server

A comprehensive Model Context Protocol server providing complete TickTick task management API integration (112 operations) for Claude Code users, enabling seamless task creation, project management, habit tracking, and productivity features.