Discover Awesome MCP Servers

Extend your agent with 16,031 capabilities via MCP servers.

Figma MCP Server

A server implementation that enables interaction with Figma API through Model Context Protocol (MCP), supporting plugin and widget development with WebSocket communication capabilities.

Cyb MCP Server

An MCP server that enables AI agents to interact with the Cyber decentralized knowledge graph, allowing them to create cyberlinks between content and retrieve information from IPFS through the Cyber network.

ssp-mcp-server

all the tools

A test server for checking how clients handle a large number of tools.

OpenAPI x MCP

A CLI for creating MCP (Model Context Protocol) servers.

Building a CSS Tutor MCP Server

Shopify Dev MCP

A Model Context Protocol (MCP) server that interacts with Shopify Dev. It provides various tools for working with different Shopify APIs.

MCP-Creator-MCP

A meta-MCP server that helps users create new MCP servers through AI guidance, templates, and streamlined workflows, transforming ideas into production-ready implementations with minimal effort.

Mutation Clinical Trial Matching MCP

A Model Context Protocol server that enables Claude Desktop to search clinicaltrials.gov for matching clinical trials based on genetic mutations provided in natural language queries.
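
The lookup behind such a server typically bottoms out in a call to the clinicaltrials.gov search API. A minimal sketch of building that query, assuming the public v2 `studies` endpoint and its `query.term` parameter (the mutation string here is only an illustrative example, not from this project):

```python
from urllib.parse import urlencode

# Assumed public study-search endpoint (clinicaltrials.gov API v2)
API_BASE = "https://clinicaltrials.gov/api/v2/studies"

def build_trial_query(mutation: str, page_size: int = 10) -> str:
    """Build a search URL for trials matching a mutation from a natural-language query."""
    params = {"query.term": mutation, "pageSize": page_size}
    return f"{API_BASE}?{urlencode(params)}"

print(build_trial_query("EGFR L858R"))
```

An MCP tool wrapping this would fetch the URL and return the matching studies to Claude Desktop.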

Price Monitor MCP Server

Monitors product prices by comparing database reference prices with real-time G-Market prices and sends Slack notifications when prices drop.
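
The core comparison is simple to sketch; everything here (function name, threshold handling, message format) is illustrative, and the real server presumably pulls reference prices from its database and posts through the Slack API:

```python
def check_price_drop(reference: float, live: float, threshold_pct: float = 0.0):
    """Return an alert message if the live price fell below the reference, else None."""
    if reference <= 0:
        raise ValueError("reference price must be positive")
    drop_pct = (reference - live) / reference * 100
    if drop_pct > threshold_pct:
        return f"Price drop: {drop_pct:.1f}% (reference {reference} -> now {live})"
    return None

print(check_price_drop(10000, 8900))  # an 11% drop triggers an alert
```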

diff-mcp

Compare two texts or data structures (supports text, JSON, JSON5, YAML, TOML, XML, and HTML diffs). Powered by jsondiffpatch.

Context7 MCP

Fetches up-to-date, version-specific documentation and code examples from the source and adds them to your LLM prompts, helping eliminate outdated code generations and hallucinated APIs.

powergentic/azd-mcp-csharp

An AZD template for deploying a Model Context Protocol (MCP) server written in C# to Azure Container Apps using the SSE transport.

React Native Upgrader MCP

Streamlines React Native CLI project upgrades by providing automated tools to generate detailed diffs and migration guidance between any React Native versions. Uses rn-diff-purge to help developers seamlessly upgrade or downgrade their projects with step-by-step instructions.

Simple PostgreSQL MCP Server

A template project for building custom MCP servers that provides direct access to PostgreSQL databases, enabling SQL query execution and schema information retrieval through the Model Context Protocol.

MCP Server for WordPress

An MCP server implementation built on the WordPress REST API.

Google Workspace MCP Server

Enables AI assistants to manage Google Calendar events and Gmail emails through natural language. Supports creating/listing calendar events with smart color coding, sending/reading emails, and advanced search capabilities across both services.

Hacker News

Parsing HTML content from a website like Hacker News and providing structured data in Portuguese requires a multi-step process. Here's a breakdown, along with considerations for translation:

**1. Data Extraction (Web Scraping)**

* **Libraries:** You'll need a library to fetch the HTML and parse it. Popular choices in Python are:
  * `requests`: for fetching the HTML content.
  * `Beautiful Soup 4`: for parsing the HTML and navigating the DOM (Document Object Model).
  * `lxml`: a faster XML/HTML processing library that can be used as Beautiful Soup's parser.
* **Hacker News structure:** Hacker News has a relatively consistent HTML structure, but it's subject to change. Inspect the HTML source of each page (top, new, ask, show, jobs) to identify the elements that contain the data you want to extract. Key elements include:
  * `<tr>` (table rows): Hacker News uses tables extensively.
  * `<td>` (table data cells): data is organized within table cells.
  * `<a>` (anchor tags): links to the stories and comments.
  * `<span>` (span tags): often used for points, comments, and other metadata.
* **Extraction logic:** You'll write code to:
  1. Fetch the HTML content of the desired Hacker News page (e.g., `https://news.ycombinator.com/`).
  2. Parse the HTML with Beautiful Soup.
  3. Locate the relevant elements by tag, class, or ID.
  4. Extract the data (story title, URL, points, number of comments, author, age).
  5. Organize the extracted data into a structured format (e.g., a list of dictionaries).

**2. Data Structuring**

* **Data model:** Define a data structure to represent each story. For example:

```python
class Story:
    def __init__(self, title, url, points, comments, author, age, story_type):
        self.title = title
        self.url = url
        self.points = points
        self.comments = comments
        self.author = author
        self.age = age
        self.story_type = story_type  # "top", "new", "ask", "show", "jobs"

    def __str__(self):
        return (f"Title: {self.title}, URL: {self.url}, Points: {self.points}, "
                f"Comments: {self.comments}, Author: {self.author}, "
                f"Age: {self.age}, Type: {self.story_type}")
```

* **Data cleaning:** Clean the extracted data. This might involve:
  * removing extra whitespace;
  * converting strings to numbers (e.g., points, comments);
  * handling missing data (e.g., stories without URLs).

**3. Translation to Portuguese**

* **Translation libraries/APIs:** Use a translation library or API to translate the relevant text fields (e.g., story title, author, age). Popular options include:
  * `googletrans`: a free, easy-to-use Python wrapper for Google Translate. (Note: its free tier has limitations and may be unreliable for large-scale use.)
  * DeepL API: a paid API that offers high-quality translations.
  * Microsoft Translator API: another paid option.
  * `translate-toolkit`: a more complex but powerful library for localization and translation.
* **Translation logic:**
  1. Identify the text fields that need translation.
  2. Translate each field from English to Portuguese with the chosen library/API.
  3. Replace the original English text with the Portuguese translation in your data structure.

**4. Output**

* **Format:** JSON (widely used for data exchange), CSV (suitable for tabular data), or native Python dictionaries/lists if the data stays inside a Python application.
* **Display/storage:** print to the console, save to a file, display in a web application, or store in a database.

**Example code** (Python with Beautiful Soup and `googletrans`; for demonstration purposes only, since `googletrans` is known to have issues):

```python
import requests
from bs4 import BeautifulSoup
from googletrans import Translator  # pip install googletrans==4.0.0-rc1


class Story:
    def __init__(self, title, url, points, comments, author, age, story_type):
        self.title = title
        self.url = url
        self.points = points
        self.comments = comments
        self.author = author
        self.age = age
        self.story_type = story_type

    def __str__(self):
        return (f"Título: {self.title}, URL: {self.url}, Pontos: {self.points}, "
                f"Comentários: {self.comments}, Autor: {self.author}, "
                f"Idade: {self.age}, Tipo: {self.story_type}")


def scrape_hacker_news(url, story_type):
    try:
        response = requests.get(url)
        response.raise_for_status()  # raise HTTPError for 4xx/5xx responses
        soup = BeautifulSoup(response.content, 'html.parser')
        stories = []
        for row in soup.find_all('tr', class_='athing'):
            # HN now wraps titles in <span class="titleline"> (formerly <a class="storylink">)
            title_span = row.find('span', class_='titleline')
            title_element = title_span.find('a') if title_span else None
            if not title_element:
                continue  # skip rows without a title link
            title = title_element.text.strip()
            story_url = title_element['href']
            # The next row holds the story details (points, author, comments)
            details_row = row.find_next_sibling('tr')
            if not details_row:
                continue
            score_element = details_row.find('span', class_='score')
            points = int(score_element.text.split()[0]) if score_element else 0
            age_element = details_row.find('span', class_='age')
            age = age_element.text.strip() if age_element else "Unknown"
            author_element = details_row.find('a', class_='hnuser')
            author = author_element.text.strip() if author_element else "Unknown"
            comments_link = details_row.find_all('a')[-1]  # last link is usually the comments link
            comments_text = comments_link.text.strip()
            comments = 0
            if 'comment' in comments_text:
                try:
                    comments = int(comments_text.split('\xa0')[0])  # handles the &nbsp; character
                except ValueError:
                    comments = 0
            stories.append(Story(title, story_url, points, comments, author, age, story_type))
        return stories
    except requests.exceptions.RequestException as e:
        print(f"Error fetching URL: {e}")
        return []
    except Exception as e:
        print(f"Error parsing HTML: {e}")
        return []


def translate_story(story, translator):
    try:
        story.title = translator.translate(story.title, dest='pt').text
        story.author = translator.translate(story.author, dest='pt').text
        story.age = translator.translate(story.age, dest='pt').text
    except Exception as e:
        print(f"Translation error: {e}")
    return story


if __name__ == "__main__":
    top_stories_url = "https://news.ycombinator.com/"
    new_stories_url = "https://news.ycombinator.com/newest"
    # ... other URLs for ask, show, jobs
    translator = Translator()
    all_stories = (scrape_hacker_news(top_stories_url, "top")
                   + scrape_hacker_news(new_stories_url, "new"))
    for story in all_stories:
        print(translate_story(story, translator))
```

**Important considerations:**

* **Terms of service:** Always check the website's terms before scraping. Hacker News may have rules against scraping or rate limits; respect its robots.txt file.
* **HTML structure changes:** Websites change their HTML frequently. The scraper will need updating whenever the Hacker News markup changes; use robust selectors that are less likely to break.
* **Rate limiting:** Don't make too many requests in a short period, or you may get blocked. Implement delays between requests.
* **Error handling:** Catch exceptions so the scraper doesn't crash mid-run.
* **Translation quality:** Machine translation is imperfect; quality depends on the complexity of the text and the service used. Consider a professional translator for critical applications.
* **`googletrans` issues:** The library is known for reliability and rate-limit problems; a paid translation API is usually the better choice in production.
* **Character encoding:** Handle encoding correctly, especially with non-ASCII characters; UTF-8 is generally a good choice.
* **Asynchronous requests:** For faster scraping, consider asynchronous requests with libraries like `aiohttp`.

This guide provides a solid foundation for building a Hacker News scraper and translator. Adapt the code to your needs and always respect the website's terms of service.

Cursor Agent MCP Server

Enables cost-effective repository analysis, code search, file editing, and task planning by wrapping the cursor-agent CLI through focused tools. Reduces token usage by offloading heavy thinking tasks from Claude to specialized operations with configurable output formats.

Toy MCP Server

A simple MCP server that provides two tools: one for counting specific letter occurrences within words and another for generating UUIDv7s.
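
Both tools are small enough to sketch in plain Python (function names are illustrative; the actual server would expose them as MCP tools rather than bare functions). The UUIDv7 layout follows RFC 9562: a 48-bit millisecond timestamp, 4 version bits, 12 random bits, 2 variant bits, and 62 more random bits:

```python
import os
import time
import uuid

def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a word."""
    return word.lower().count(letter.lower())

def uuid7() -> uuid.UUID:
    """Generate a UUIDv7 per RFC 9562 (time-ordered, millisecond precision)."""
    ts_ms = time.time_ns() // 1_000_000                              # 48-bit Unix timestamp, ms
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF           # 12 random bits
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)  # 62 random bits
    value = (ts_ms << 80) | (0x7 << 76) | (rand_a << 64) | (0b10 << 62) | rand_b
    return uuid.UUID(int=value)

print(count_letter("strawberry", "r"))  # → 3
print(uuid7())
```

Because the timestamp occupies the most significant bits, successive UUIDv7s sort roughly in creation order.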

MCP Server Demo

MCP Async Server

This server implements the Model Context Protocol (MCP) to handle asynchronous tasks with real-time status tracking, robust error handling, and automatic resource management.

Facebook Ads MCP Server by CData

Diningcode MCP Server

MCP SBOM Server

A Model Context Protocol server that runs Trivy scans to generate Software Bills of Materials (SBOMs) in CycloneDX format.

Isaac Sim MCP Extension and MCP Server

An MCP extension and server for Isaac Sim.

Agentic MCP Client

An autonomous agent runner that executes tasks using MCP (Model Context Protocol) tools through the Anthropic Claude, AWS Bedrock, and OpenAI APIs. It allows AI agents to operate autonomously in cloud environments and interact with various systems securely.

Mcp Server Demo

simple-mcp-server

A simple MCP weather server written in TypeScript.

Statelessor MCP Server

Analyzes .NET and Java projects to detect stateful code patterns and provides remediation guidance for migrating to stateless architectures.