Discover Awesome MCP Servers

Extend your agent with 28,691 capabilities via MCP servers.
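As a rough illustration of what a "capability" means here: MCP clients invoke a server's tools over JSON-RPC 2.0 using the `tools/call` method. Below is a minimal sketch of that request envelope in Python; the tool name `search_icons` and its arguments are hypothetical placeholders, not a real server's API.

```python
import json

# A minimal JSON-RPC 2.0 request as used by MCP's "tools/call" method.
# Tool name and arguments are hypothetical, for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_icons",           # hypothetical tool name
        "arguments": {"query": "arrow"},  # tool-specific arguments
    },
}

payload = json.dumps(request)
print(payload)
```

The server replies with a JSON-RPC response whose `result` carries the tool's output; the transport (stdio or HTTP) is negotiated per server.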

mcp-server

OpenAPI x MCP

A CLI for creating MCP (Minecraft Coder Pack) servers.

Google Workspace MCP Server

Enables AI assistants to manage Google Calendar events and Gmail emails through natural language. Supports creating/listing calendar events with smart color coding, sending/reading emails, and advanced search capabilities across both services.

TEST MCP USDT MCP Server

Enables AI agents and LLMs to interact with the TEST MCP USDT API through standardized tools using the Model Context Protocol. It provides a containerized environment for executing USDT-related API operations and asynchronous tool handling.

Hacker News

Parsing HTML content from a website like Hacker News and providing structured data in Portuguese requires a multi-step process. Here's a breakdown, along with considerations for translation:

**1. Data Extraction (Web Scraping):**

* **Libraries:** You'll need a library to fetch the HTML and parse it. Popular choices in Python are:
  * `requests`: for fetching the HTML content.
  * `Beautiful Soup 4`: for parsing the HTML and navigating the DOM (Document Object Model).
  * `lxml`: a faster XML/HTML processing library that can be used as Beautiful Soup's parser.
* **Hacker News structure:** Hacker News has a relatively consistent HTML structure, but it's subject to change. Inspect the HTML source of each page (top, new, ask, show, jobs) to identify the elements that contain the data you want to extract. Key elements to look for include:
  * `<tr>` (table rows): Hacker News uses tables extensively.
  * `<td>` (table data cells): data is organized within table cells.
  * `<a>` (anchor tags): links to the stories and comments.
  * `<span>` (span tags): often used for points, comments, and other metadata.
* **Extraction logic:** You'll write code to:
  1. Fetch the HTML content of the desired Hacker News page (e.g., `https://news.ycombinator.com/`).
  2. Parse the HTML using Beautiful Soup.
  3. Locate the relevant elements based on their tags, classes, or IDs.
  4. Extract the data from those elements (e.g., story title, URL, points, number of comments, author, age).
  5. Organize the extracted data into a structured format (e.g., a list of dictionaries).

**2. Data Structuring:**

* **Data model:** Define a data structure to represent each story. For example:

```python
class Story:
    def __init__(self, title, url, points, comments, author, age, story_type):
        self.title = title
        self.url = url
        self.points = points
        self.comments = comments
        self.author = author
        self.age = age
        self.story_type = story_type  # "top", "new", "ask", "show", "jobs"

    def __str__(self):
        return (f"Title: {self.title}, URL: {self.url}, Points: {self.points}, "
                f"Comments: {self.comments}, Author: {self.author}, "
                f"Age: {self.age}, Type: {self.story_type}")
```

* **Data cleaning:** Clean the extracted data. This might involve:
  * removing extra whitespace;
  * converting strings to numbers (e.g., points, comments);
  * handling missing data (e.g., stories without URLs).

**3. Translation to Portuguese:**

* **Translation libraries/APIs:** Use a translation library or API to translate the relevant text fields (e.g., story title, author, age). Popular options include:
  * `googletrans`: a free, easy-to-use Python library for Google Translate. (Note: its unofficial free tier has limitations and may be unreliable for large-scale use.)
  * DeepL API: a paid API that offers high-quality translations.
  * Microsoft Translator API: another paid API option.
  * `translate-toolkit`: a more complex but powerful library for localization and translation.
* **Translation logic:**
  1. Identify the text fields that need to be translated.
  2. Use the chosen translation library/API to translate each field from English to Portuguese.
  3. Replace the original English text with the translated Portuguese text in your data structure.

**4. Output:**

* **Format:** Choose a format for outputting the structured data. Common options include:
  * JSON: a widely used format for data exchange.
  * CSV: suitable for tabular data.
  * Python dictionaries/lists: if you're using the data within a Python application.
* **Display/storage:** Decide how you want to display or store the translated data. This could involve:
  * printing it to the console;
  * saving it to a file;
  * displaying it in a web application;
  * storing it in a database.

**Example code (Python with Beautiful Soup and `googletrans`; for demonstration purposes only, as `googletrans` may have issues):**

```python
import requests
from bs4 import BeautifulSoup
from googletrans import Translator  # pip install googletrans==4.0.0-rc1


class Story:
    def __init__(self, title, url, points, comments, author, age, story_type):
        self.title = title
        self.url = url
        self.points = points
        self.comments = comments
        self.author = author
        self.age = age
        self.story_type = story_type

    def __str__(self):
        return (f"Título: {self.title}, URL: {self.url}, Pontos: {self.points}, "
                f"Comentários: {self.comments}, Autor: {self.author}, "
                f"Idade: {self.age}, Tipo: {self.story_type}")


def scrape_hacker_news(url, story_type):
    try:
        response = requests.get(url)
        response.raise_for_status()  # raise HTTPError for 4xx/5xx responses
        soup = BeautifulSoup(response.content, 'html.parser')
        stories = []
        rows = soup.find_all('tr', class_='athing')
        for row in rows:
            title_element = row.find('a', class_='storylink')
            if not title_element:
                continue  # skip rows without a title link
            title = title_element.text.strip()
            url = title_element['href']
            # The next row holds the story details (points, author, comments).
            details_row = row.find_next_sibling('tr')
            if not details_row:
                continue
            score_element = details_row.find('span', class_='score')
            points = int(score_element.text.split()[0]) if score_element else 0
            age_element = details_row.find('span', class_='age')
            age = age_element.text.strip() if age_element else "Unknown"
            author_element = details_row.find('a', class_='hnuser')
            author = author_element.text.strip() if author_element else "Unknown"
            comments_link = details_row.find_all('a')[-1]  # last link is usually the comments link
            comments_text = comments_link.text.strip()
            comments = 0
            if 'comment' in comments_text:
                try:
                    comments = int(comments_text.split('\xa0')[0])  # handles the &nbsp; character
                except ValueError:
                    comments = 0
            story = Story(title, url, points, comments, author, age, story_type)
            stories.append(story)
        return stories
    except requests.exceptions.RequestException as e:
        print(f"Error fetching URL: {e}")
        return []
    except Exception as e:
        print(f"Error parsing HTML: {e}")
        return []


def translate_story(story, translator):
    try:
        story.title = translator.translate(story.title, dest='pt').text
        story.author = translator.translate(story.author, dest='pt').text
        story.age = translator.translate(story.age, dest='pt').text
        return story
    except Exception as e:
        print(f"Translation error: {e}")
        return story


if __name__ == "__main__":
    top_stories_url = "https://news.ycombinator.com/"
    new_stories_url = "https://news.ycombinator.com/newest"
    # ... other URLs for ask, show, jobs

    translator = Translator()
    top_stories = scrape_hacker_news(top_stories_url, "top")
    new_stories = scrape_hacker_news(new_stories_url, "new")
    all_stories = top_stories + new_stories

    for story in all_stories:
        translated_story = translate_story(story, translator)
        print(translated_story)
```

**Important considerations:**

* **Website terms of service:** Always check the website's terms of service before scraping. Hacker News may have rules against scraping or rate limits; respect its robots.txt file.
* **HTML structure changes:** Websites change their HTML structure frequently. Your scraper will need to be updated whenever the Hacker News markup changes. Use robust selectors that are less likely to break.
* **Rate limiting:** Be mindful of rate limiting. Don't make too many requests in a short period of time, or you might get blocked. Implement delays between requests.
* **Error handling:** Implement robust error handling to catch exceptions and prevent your scraper from crashing.
* **Translation quality:** Machine translation is not perfect. Quality will depend on the complexity of the text and the capabilities of the translation service. Consider a professional translator for critical applications.
* **`googletrans` issues:** The `googletrans` library is known to have reliability and rate-limiting problems. A paid translation API is usually a better choice for production environments.
* **Character encoding:** Ensure you're handling character encoding correctly, especially with non-ASCII characters. UTF-8 is generally a good choice.
* **Asynchronous requests:** For faster scraping, consider asynchronous requests with a library like `aiohttp`.

This guide provides a solid foundation for building a Hacker News scraper and translator. Adapt the code and techniques to your specific needs, and always respect the website's terms of service. Good luck!

Cursor Agent MCP Server

Enables cost-effective repository analysis, code search, file editing, and task planning by wrapping the cursor-agent CLI through focused tools. Reduces token usage by offloading heavy thinking tasks from Claude to specialized operations with configurable output formats.

Toy MCP Server

A simple MCP server that provides two tools: one for counting specific letter occurrences within words and another for generating UUIDv7s.
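As a sketch of what those two tools compute (illustrative logic only, not the server's actual code): letter counting is a one-liner, and UUIDv7 can be assembled by hand per RFC 9562 on interpreters older than Python 3.14, where `uuid.uuid7()` joins the stdlib.

```python
import os
import time
import uuid


def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a word."""
    return word.lower().count(letter.lower())


def uuid7() -> uuid.UUID:
    """Build a UUIDv7 per RFC 9562: a 48-bit Unix-millisecond timestamp
    followed by random bits, with the version/variant fields set."""
    ms = time.time_ns() // 1_000_000
    b = bytearray(ms.to_bytes(6, "big") + os.urandom(10))
    b[6] = (b[6] & 0x0F) | 0x70  # version 7 in the high nibble of byte 6
    b[8] = (b[8] & 0x3F) | 0x80  # RFC 4122 variant in the top bits of byte 8
    return uuid.UUID(bytes=bytes(b))
```

Because the timestamp occupies the most significant bits, UUIDv7 values sort roughly by creation time, which is the usual reason to prefer them over UUIDv4.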

lordicon-mcp

An MCP server that allows AI agents to search, preview, and generate embed code for Lordicon animated icons. It provides tools for browsing icon families and generating ready-to-use snippets in HTML, React, and web component formats.

SynapBus

MCP-native agent-to-agent messaging hub for AI swarms. Agents communicate via channels and DMs through MCP protocol — zero SDKs needed. Includes web UI, semantic search, analytics dashboard. Single Go binary, local-first.

MCP Server Demo

OParl MCP Server

Provides AI models with seamless access to OParl parliamentary data APIs through the Model Context Protocol. Enables natural language queries for parliamentary meetings, documents, organizations, representatives, and other government data across multiple OParl implementations.

Expense Tracker MCP Server

An AI-powered financial management engine that enables budgeting, smart expense tracking, and affordability analytics via the Model Context Protocol. It allows AI assistants to interact with financial data through natural language for tasks like category detection, bulk expense ingestion, and budget impact predictions.
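Category detection could be as simple as keyword matching; here is a naive sketch under that assumption. The categories and keywords are invented for illustration, and the server presumably uses something richer.

```python
def detect_category(description: str) -> str:
    """Naive keyword-based expense category detection (illustrative only)."""
    keywords = {
        "groceries": ["supermarket", "grocery"],
        "transport": ["uber", "taxi", "fuel", "bus"],
        "dining": ["restaurant", "cafe", "coffee"],
    }
    text = description.lower()
    for category, words in keywords.items():
        if any(w in text for w in words):
            return category
    return "uncategorized"
```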

Volterra Knowledge Engine

A read-only Model Context Protocol server that exposes a semantic knowledge base to AI agents via 27 tools. It enables querying of documents and data integrated from sources like Notion, SharePoint, HubSpot, and Slack.

icogenie-mcp

AI-powered SVG icon generation MCP server. Generate production-ready SVG icons from text descriptions with customizable styles, sizes, and themes.

Python Codebase Analysis RAG System

An MCP server that analyzes Python codebases using AST, stores code elements in a vector database, and enables natural language queries about code structure and functionality using RAG with Google's Gemini models.
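The AST-analysis step can be sketched with the stdlib `ast` module; this is a minimal illustration of extracting code elements, not the server's implementation.

```python
import ast


def list_definitions(source: str) -> list:
    """Return (kind, name, line) for every function and class in the module."""
    tree = ast.parse(source)
    out = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            out.append(("function", node.name, node.lineno))
        elif isinstance(node, ast.ClassDef):
            out.append(("class", node.name, node.lineno))
    return out


code = "class Greeter:\n    def hello(self):\n        return 'hi'\n"
print(list_definitions(code))
```

Each extracted element (plus its docstring and source span) would then be embedded and stored in the vector database for retrieval.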

Daytona MCP Python Interpreter

A Model Context Protocol server that allows execution of Python code within Daytona workspaces, providing a secure and isolated environment for executing and managing Python scripts.

Brawl Stars MCP

Provides AI assistants with access to real-time Brawl Stars game data including player statistics, club information, brawler details, battle logs, and current events through the Brawl Stars API.

MCP Spotify AI Assistant

Enables Claude to control Spotify features including playback control, playlist management, search, and accessing user's listening history and preferences through the Spotify API.

MeiGen AI Image Generation MCP

Turn Claude Code / OpenClaw into your local Lovart. Local ComfyUI, 1,400+ prompt library, multi-direction parallel generation.

Neo4j MCP Chainlit

A proof of concept of a Neo4j MCP server with Chainlit (as the MCP host) and the Claude LLM API (Anthropic).

Feedbucket MCP Server

Enables AI assistants to manage website feedback from Feedbucket, including listing, filtering, commenting on, and resolving feedback items through natural language interactions.

Content Creator Mcp

An MCP server that implements a content-creator agent.

Pokémon MCP Server

Enables AI models to access comprehensive Pokémon data from PokéAPI and simulate battles between any two Pokémon with realistic mechanics including type effectiveness, stat-based damage calculations, and status effects.
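For context, "stat-based damage calculation" in the mainline games follows a well-known core formula. Here is a simplified sketch that ignores the random factor, critical hits, items, and abilities; this server's exact mechanics may differ.

```python
def damage(level: int, power: int, attack: int, defense: int,
           type_mult: float = 1.0, stab: float = 1.0) -> int:
    """Simplified core of the mainline Pokémon damage formula.

    type_mult: type effectiveness (0, 0.25, 0.5, 1, 2, or 4)
    stab: same-type attack bonus (1.5 if the move matches the user's type)
    """
    base = ((2 * level / 5 + 2) * power * attack / defense) / 50 + 2
    return int(base * stab * type_mult)
```

For example, a super-effective hit (`type_mult=2.0`) roughly doubles the damage of a neutral one, and an immunity (`type_mult=0`) zeroes it out.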

PostgreSQL MCP Server

A production-ready MCP server that enables safe, read-only SQL SELECT queries against PostgreSQL databases with built-in security validation. It features connection pooling, automatic row limits, and structured logging to ensure secure and reliable database interactions.
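The kind of security validation described (read-only enforcement plus automatic row limits) can be sketched as a pre-flight check on the SQL string. This is an illustrative example, not the server's actual validator.

```python
import re

READ_ONLY = re.compile(r"^\s*SELECT\b", re.IGNORECASE)
FORBIDDEN = re.compile(
    r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE|TRUNCATE|GRANT)\b",
    re.IGNORECASE,
)


def validate_query(sql: str, max_rows: int = 1000) -> str:
    """Reject non-SELECT statements and append a row limit if absent."""
    if ";" in sql.rstrip().rstrip(";"):
        raise ValueError("multiple statements are not allowed")
    if not READ_ONLY.match(sql):
        raise ValueError("only SELECT statements are allowed")
    if FORBIDDEN.search(sql):
        raise ValueError("write keyword detected")
    if not re.search(r"\bLIMIT\b", sql, re.IGNORECASE):
        sql = f"{sql.rstrip().rstrip(';')} LIMIT {max_rows}"
    return sql
```

A production validator would also use the database's own defenses (a read-only role, statement timeouts, parameterized queries) rather than relying on string checks alone.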

Prompt Cleaner MCP Server

Enables cleaning and sanitizing prompts through an LLM-powered tool that removes sensitive information, provides structured feedback with notes and risks, and normalizes prompt formatting. Supports configurable local or remote OpenAI-compatible APIs with automatic secret redaction.
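Automatic secret redaction of this sort is typically regex-based. Here is a minimal sketch; the patterns shown are common key shapes (OpenAI-style `sk-` tokens, AWS access key IDs, `password=`/`token=` pairs), not the server's actual rule set.

```python
import re

# (pattern, replacement) pairs; illustrative, not exhaustive.
PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED_API_KEY]"),   # OpenAI-style keys
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),      # AWS access key IDs
    (re.compile(r"(?i)(password|token)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]


def redact(text: str) -> str:
    """Replace anything matching a known secret pattern."""
    for pattern, repl in PATTERNS:
        text = pattern.sub(repl, text)
    return text
```

Pattern-based redaction inevitably misses novel secret formats, which is why tools like this usually pair it with entropy checks or an LLM review pass.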

AlphaGenome MCP Server

Enables AI-powered genomic variant analysis including variant impact prediction, regulatory element discovery, and batch variant scoring. Currently operates in mock mode as a proof-of-concept awaiting the public release of Google DeepMind's AlphaGenome API.

MCP Dataverse Server

Enables CRUD operations and schema exploration on Microsoft Dataverse databases using service principal authentication. It allows users to query records with OData filters, manage table entries, and retrieve metadata through a standardized MCP interface.
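A Dataverse Web API query combines OData options such as `$filter`, `$select`, and `$top`. Here is a sketch of building such a query string; the `accounts` entity set and column names are illustrative, not taken from this server.

```python
from urllib.parse import urlencode


def build_query(entity_set: str, filter_expr: str, select: list, top: int = 50) -> str:
    """Assemble a relative Dataverse Web API URL with OData query options."""
    params = {
        "$filter": filter_expr,   # e.g. "revenue gt 100000"
        "$select": ",".join(select),
        "$top": str(top),
    }
    return f"{entity_set}?{urlencode(params)}"


url = build_query("accounts", "revenue gt 100000", ["name", "revenue"])
print(url)
```

The resulting relative URL would be appended to the environment's API base (e.g. `https://<org>.api.crm.dynamics.com/api/data/v9.2/`) and sent with a service-principal bearer token.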

Midjourney Best Experience MCP Server

An MCP server that provides access to the Midjourney Best Experience API for generating and manipulating images. It enables users to execute prompts and perform actions like upscaling, variations, and zooming directly within MCP-compatible applications.

Travel MCP Server

A Model Context Protocol server that enables flight searching, booking, and payment processing. It also provides access to resources like user profiles, pricing rules, and company travel policies.

MLX Whisper MCP Server

A local MCP server for MLX Whisper transcription.