Discover Awesome MCP Servers
Extend your agent with 26,962 capabilities via MCP servers.
- All (26,962)
- Developer Tools (3,867)
- Search (1,714)
- Research & Data (1,557)
- AI Integration Systems (229)
- Cloud Platforms (219)
- Data & App Analysis (181)
- Database Interaction (177)
- Remote Shell Execution (165)
- Browser Automation (147)
- Databases (145)
- Communication (137)
- AI Content Generation (127)
- OS Automation (120)
- Programming Docs Access (109)
- Content Fetching (108)
- Note Taking (97)
- File Systems (96)
- Version Control (93)
- Finance (91)
- Knowledge & Memory (90)
- Monitoring (79)
- Security (71)
- Image & Video Processing (69)
- Digital Note Management (66)
- AI Memory Systems (62)
- Advanced AI Reasoning (59)
- Git Management Tools (58)
- Cloud Storage (51)
- Entertainment & Media (43)
- Virtualization (42)
- Location Services (35)
- Web Automation & Stealth (32)
- Media Content Processing (32)
- Calendar Management (26)
- Ecommerce & Retail (18)
- Speech Processing (18)
- Customer Data Platforms (16)
- Travel & Transportation (14)
- Education & Learning Tools (13)
- Home Automation & IoT (13)
- Web Search Integration (12)
- Health & Wellness (10)
- Customer Support (10)
- Marketing (9)
- Games & Gamification (8)
- Google Cloud Integrations (7)
- Art & Culture (4)
- Language Translation (3)
- Legal & Compliance (2)
Remote MCP Server on Cloudflare
Bootpay Developer Docs MCP Server
Enables AI coding tools to search and retrieve Bootpay payment and commerce developer documentation, including integration guides and customer service manuals. It facilitates tasks such as payment linking, billing key issuance, and webhook configuration through natural language queries.
Weather MCP Server
An MCP server integrated with the QWeather API that provides real-time weather forecasts and meteorological warnings for AI assistants. It enables users to query current conditions and disaster alerts for specific cities or coordinates.
RescueTime MCP Server
Provides access to RescueTime productivity data including daily summaries, activity tracking, productivity trends, category breakdowns, and hourly analysis to help understand and optimize time usage patterns.
MCP ContentEngineering
Enables direct access to raw Markdown content from files or directories without processing. Perfect for providing AI models with business rules, documentation, or knowledge bases exactly as written.
mcp-gmail
Enables interaction with Gmail through the Gmail API to read unread messages, retrieve email threads, and create draft replies programmatically. Supports OAuth authentication and automated email processing through natural language.
Vehicle Database MCP Server
Enables access to comprehensive vehicle information including VIN decoding, license plate OCR, vehicle history checks (theft, title, salvage records), market valuations, specifications, and warranty data for vehicles across North America and Europe.
Nowledge Mem
Enables Claude to add and search personal memories through the Nowledge Mem service. Allows users to store and retrieve contextual information across conversations.
Superpowers MCP Server
Provides access to the Superpowers skills library - expert-crafted workflows and best practices that guide AI assistants through proven techniques for coding tasks. Supports both community skills and custom personal skills.
UnityInfoMCP
A runtime inspection and automation toolkit that enables MCP clients to interact with live Unity game sessions through a dedicated bridge plugin. It allows users to browse scene hierarchies, inspect component fields, search text elements, and modify game object properties in real-time.
claude-mermaid
An MCP server for previewing Mermaid diagrams. https://github.com/veelenga/claude-mermaid/
Prefect MCP Server
Mirror of
HackerNews MCP Server
Enables AI assistants to access HackerNews content through structured search, front page retrieval, latest posts monitoring, detailed item fetching with comment trees, and user profile viewing via the Algolia API.
dart-query
An MCP server for Dart AI task management that enables bulk operations using DartQL selectors to minimize token usage and context rot. It provides tools for batch updates, task and document CRUD, and safe CSV imports with integrated dry-run capabilities.
Notion MCP Server
Enables AI assistants to interact with Notion workspaces through the Notion API. Supports searching, reading, creating pages, and querying databases with filters and sorting capabilities.
Docker MCP Server
Enables natural language interaction with Docker commands and operations. Supports container management, image operations, system information, and Docker Compose through conversational requests.
Ingrids Reisetjenester
Enables intelligent travel planning by combining weather forecasts and route calculations for destinations. Provides personalized travel recommendations based on weather conditions and supports trip planning with persistent conversation memory.
Task Manager MCP
Enables intelligent task management with status tracking, dependency resolution, and automatic next task discovery based on preconditions and priorities. Supports hierarchical task structures with subtasks and flexible JSON-based configuration.
Search Stock News MCP Server
Provides real-time stock news search capabilities via Tavily API, allowing MCP clients to retrieve filtered and customized stock news with various search parameters.
mcp-servers
A repository for Model Context Protocol servers.
Investment Memorandum Processor MCP Server
An MCP server that automates extraction of structured financial data from investment memorandums and generates standardized PowerPoint presentations through a RESTful API.
OpenAPITools SDK
Your API, now an AI tool. Build an MCP server in one minute.
Android Project MCP Server
A Model Context Protocol server that enables creating Android projects and running tests directly in Visual Studio Code through extensions such as Cline or Roo Code.
Gremlin Web Scraper MCP
A lightweight HTTP module that enables scraping visible text from any publicly accessible webpage, integrating directly with VS Code's MCP system.
Rag chatbot with a localhost MCP server
A walkthrough for building a Retrieval-Augmented Generation (RAG) HR chatbot that answers questions about workplace rules, with an MCP server handling document storage and content management.

**1. Understanding the Requirements**

- **Data source:** a comprehensive, well-structured source of workplace rules, such as:
  - HR policy documents (PDFs, Word documents, etc.)
  - Employee handbooks
  - Internal knowledge base articles
  - FAQs
  - Training materials
- **User interface:** how employees will interact with the chatbot (a web interface, Slack, Microsoft Teams, etc.)
- **Query types:** the kinds of questions employees will ask (e.g., "What is the policy on vacation time?", "Can I work remotely?", "What are the rules about using company equipment?")
- **Accuracy and reliability:** the chatbot must provide accurate, up-to-date information; this is crucial for HR compliance.
- **Scalability:** the chatbot should handle a large number of users and queries.
- **Security:** protect sensitive HR information.
- **Maintenance:** a process for keeping the chatbot's knowledge base updated.

**2. RAG Architecture Components**

- **Data ingestion and preprocessing:**
  - **Loading:** load the HR documents in a suitable format. Libraries like `PyPDF2` (for PDFs), `docx2txt` (for Word documents), or the dedicated document loaders in `Langchain` or `LlamaIndex` can help.
  - **Chunking:** divide the documents into smaller chunks of text for efficient retrieval; consider semantic chunking to keep related information together.
  - **Cleaning:** remove irrelevant material such as headers, footers, and boilerplate text.
- **Embedding model:** choose a suitable model (e.g., Sentence Transformers, OpenAI embeddings, Cohere embeddings). These models convert text into numerical vectors (embeddings) that capture its semantic meaning.
- **Vector database:** store the text chunks and their embeddings. Popular choices include:
  - **ChromaDB:** easy to set up and use, especially for smaller projects.
  - **Pinecone:** scalable and performant, suitable for larger datasets.
  - **Weaviate:** graph-based vector database with powerful querying capabilities.
  - **FAISS (Facebook AI Similarity Search):** a library for efficient similarity search, often used with other databases.
- **Retrieval:** when a user asks a question, the chatbot converts it into an embedding with the same model, then queries the vector database for the most semantically similar chunks.
- **Generation (LLM):** the retrieved chunks are passed to a large language model, such as:
  - **GPT-3.5/GPT-4 (OpenAI):** powerful, but requires an API key and incurs costs.
  - **Llama 2 (Meta):** open source; can run locally or on a server.
  - **Mistral AI models:** open source and performant.
  - **Gemini (Google):** another powerful option.

  The LLM uses the retrieved context to generate a relevant, informative answer to the user's question.

**3. MCP Server Integration**

How the MCP server can fit into this architecture:

- **Document storage:** store HR documents on the MCP server instead of in the chatbot's file system; the chatbot retrieves them during ingestion. This centralizes document management.
- **Content management:** if the MCP server has content management capabilities, use it to manage and update the HR rules; the chatbot syncs with it automatically so it always has the latest information.
- **Media integration:** if the rules include videos, audio, or other media, store those assets on the MCP server; the chatbot can link to them in its responses (e.g., an explanatory video attached to a rule).
- **Authentication and authorization:** use the MCP server's auth features to control access, so only authorized employees can use the chatbot and see sensitive HR information.
- **Logging and analytics:** log usage data (which questions employees ask, which answers are given) to improve performance and identify rules that need clarification.

**Example workflow with the MCP server:**

1. **HR updates a rule:** an HR administrator updates a rule on the MCP server.
2. **Chatbot syncs:** the chatbot periodically checks the MCP server for updates.
3. **Data ingestion:** when an update is detected, the chatbot retrieves the updated document, processes it, and refreshes its vector database.
4. **User asks a question:** an employee asks the chatbot about the updated rule.
5. **Retrieval and generation:** the chatbot retrieves the relevant chunks and uses the LLM to generate an answer.
6. **Response:** the answer is returned, potentially with links to relevant media assets on the MCP server.

**4. Implementation Steps (Simplified)**

1. **Choose your tools:** embedding model, vector database, LLM, and UI framework.
2. **Set up the MCP server:** ensure it is accessible and you have the necessary credentials.
3. **Implement data ingestion:** retrieve documents from the MCP server, chunk them, and create embeddings.
4. **Store embeddings** in the vector database.
5. **Implement retrieval** of relevant text chunks for user queries.
6. **Implement generation** of answers with an LLM.
7. **Build the UI.**
8. **Test and refine** the chatbot's performance.
9. **Deploy** to a production environment.

**5. Code Snippet Example (Conceptual, using Langchain and ChromaDB)**

```python
import os

from langchain.document_loaders import TextLoader  # or PDFLoader, etc.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# 1. Load documents from the MCP server (replace with your actual MCP API call).
def get_document_from_mcp(document_id):
    # Placeholder -- substitute a real call, e.g.:
    # response = requests.get(f"https://your-mcp-server/api/documents/{document_id}")
    # return response.text
    return "This is a placeholder document from the MCP server."

document_content = get_document_from_mcp("hr_policy_123")

# Write to a temporary file so TextLoader can read it.
with open("temp_document.txt", "w") as f:
    f.write(document_content)

loader = TextLoader("temp_document.txt")
documents = loader.load()

# 2. Chunk the documents.
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

# 3. Create embeddings (requires an OpenAI API key).
embeddings = OpenAIEmbeddings()

# 4. Store embeddings in ChromaDB.
db = Chroma.from_documents(texts, embeddings, persist_directory="hr_rules_db")
db.persist()

# 5. Create a retrieval chain.
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff",
                                 retriever=db.as_retriever())

# 6. Ask a question.
query = "What is the policy on vacation time?"
result = qa.run(query)
print(result)

# Clean up the temporary file.
os.remove("temp_document.txt")
```

**Important considerations:**

- **API keys:** if you use OpenAI or other paid services, manage API keys securely.
- **Error handling:** implement robust error handling for unexpected situations.
- **Regular updates:** establish a process for keeping the knowledge base accurate and current.
- **User feedback:** collect feedback to improve the chatbot and identify areas for enhancement.
- **Compliance:** ensure the chatbot complies with all relevant HR regulations and privacy laws.

Adapt the code and architecture to your specific needs and environment.
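The retrieval step above can be illustrated without any external services. This is a minimal sketch only: it uses toy bag-of-words term-frequency vectors and cosine similarity in place of learned embeddings (a real RAG system would call an embedding model and a vector database, as described above), and the `embed`, `cosine_similarity`, and `retrieve` helpers are hypothetical names, but the ranking logic is the same.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words term-frequency vector.
    # A real system would call an embedding model here instead.
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    # Cosine of the angle between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=1):
    # Rank stored chunks by similarity to the query "embedding"
    # and return the top-k -- the core of the RAG retrieval step.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine_similarity(q, embed(c)),
                    reverse=True)
    return ranked[:k]

chunks = [
    "Employees accrue 1.5 vacation days per month of service.",
    "Company laptops must not be used for personal business.",
    "Remote work requires written manager approval.",
]
print(retrieve("How many vacation days do I get?", chunks))
```

The query shares the terms "vacation" and "days" only with the first chunk, so that chunk ranks highest; swapping in real embeddings lets the same ranking work when the query and the rule share no exact words.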
ai-test-gen-poc
An AI-powered test generator that uses the Playwright MCP Server to produce TypeScript test scripts from BDD-style prompts.
Morphik MCP
Enables interaction with the Morphik multi-modal database system for document ingestion, retrieval, querying, and management. Supports text and file ingestion, semantic search with LLM-powered completions, and file system navigation with security controls.
Canvas MCP Server
Connects Canvas LMS to AI assistants, enabling users to list courses and retrieve assignment details through natural language. It features secure multi-institution support with encrypted token storage and a simplified setup process for students.
Jumpseller API MCP Server
An MCP Server that provides access to the Jumpseller e-commerce platform API, allowing users to interact with Jumpseller's functionality through natural language commands.
Schedulia MCP
A meeting scheduling assistant that enables users to view schedules, manage incoming requests, and send meeting invitations via the Schedulia API. It facilitates seamless coordination of meeting times and participant management through natural language commands.