Discover Awesome MCP Servers

Extend your agent with 24,162 capabilities via MCP servers.

MCP Simple Server

A simple server implementing the Model Context Protocol for document search.

OpenProject MCP Server

Enables AI assistants to interact with OpenProject installations for comprehensive project management, including creating projects and work packages, managing users and assignments, creating dependencies, and generating Gantt charts through natural language commands.

BlenderMCP

Connects Blender to Claude AI, enabling AI-assisted 3D modeling, scene creation, object manipulation, material control, and code execution directly in Blender through natural language prompts.

LumenX-MCP Legal Spend Intelligence Server

MCP server that enables intelligent analysis of legal spend data across multiple sources (LegalTracker, databases, CSV/Excel files), providing features like spend summaries, vendor performance analysis, and budget comparisons.

mcp-mysql-lens

An MCP server that connects to a MySQL database for read-only queries, with accurate query execution.

OpenWRT SSH MCP Server

Enables AI agents to manage OpenWRT routers remotely via SSH, supporting system monitoring, network management, OpenThread Border Router configuration, and package management through natural language commands.

MCP Seekr Server

Enables web search through Google and Wikipedia plus content extraction from any webpage via the Seekr API. Provides real-time search results with advanced filtering options and clean text extraction capabilities.

WordPress Code Review MCP Server

A lightweight, configurable server that fetches coding guidelines, security rules, and validation patterns from external sources to help development teams maintain code quality standards in WordPress projects.
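
Validation patterns fetched from such a server can be applied with a simple rule engine. A minimal sketch, assuming regex-style rules; the rule names and patterns below are hypothetical, not this server's actual rule set:

```python
import re

# Hypothetical rules of the kind such a server might serve; real rules
# would be fetched from the configured external source.
RULES = [
    {"name": "no-eval", "pattern": r"\beval\s*\(", "message": "Avoid eval()"},
    {"name": "escape-output", "pattern": r"echo\s+\$_(GET|POST)", "message": "Escape user input before output"},
]

def check_snippet(code):
    """Return a finding for each rule whose pattern matches the snippet."""
    return [
        {"rule": r["name"], "message": r["message"]}
        for r in RULES
        if re.search(r["pattern"], code)
    ]

findings = check_snippet("echo $_GET['q'];")
print([f["rule"] for f in findings])  # ['escape-output']
```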

PentestThinkingMCP

An AI-powered penetration testing reasoning engine that provides automated attack path planning, step-by-step guidance for CTFs/HTB challenges, and tool recommendations using Beam Search and MCTS algorithms.

Cold Email Assistant

Automates cold email outreach for job applications by parsing job postings, generating personalized emails using AI, and sending them or saving as drafts in Gmail with resume attachments.

Vercel MCP Server Template

A starter template for deploying Model Context Protocol (MCP) servers on Vercel using TypeScript and Vercel Functions. It includes example tools for rolling dice and checking weather to demonstrate tool integration patterns.

Directmedia MCP

Provides programmatic access to the Directmedia Publishing 'Digitale Bibliothek' collection, a 1990s German electronic book library containing 101 volumes of classic literature and philosophy with text extraction, search, and navigation capabilities.

Sample Model Context Protocol Demos

Here are some examples of how to use the Model Context Protocol with AWS, focusing on common use cases, with code snippets where appropriate. Keep in mind that the Model Context Protocol is a general framework, and its specific implementation depends on the particular AWS service you're interacting with and the libraries you're using.

**What is the Model Context Protocol?**

The Model Context Protocol (MCP) is a standardized way for machine learning models to access contextual information about their environment. This context can include:

* **Configuration:** Settings, parameters, and hyperparameters.
* **Data:** Input data, feature definitions, and data sources.
* **Metadata:** Information about the model itself (version, author, training data), the deployment environment, and the request.
* **Secrets:** API keys, database credentials, and other sensitive information.
* **Logging/Monitoring:** Mechanisms for recording model behavior and performance.

The goal is to make models more portable, reproducible, and easier to manage in production. Instead of hardcoding dependencies, models can rely on the context provided by the protocol.

**General Approach with AWS**

1. **Choose an MCP implementation:** There isn't a single, universally adopted MCP library. You may need to create your own or adapt an existing one. The key is to define a consistent interface for accessing context.
2. **Populate the context:** This is where you integrate with AWS services. Fetch configuration, secrets, data, etc., from AWS and store them in the context object.
3. **Pass the context to your model:** When you load or run your model, pass the context object as an argument.
4. **Access context within the model:** The model uses the MCP interface to retrieve the information it needs.

**Examples**

**1. Configuration Management with AWS AppConfig**

* **Use Case:** Dynamically update model parameters without redeploying.
* **AWS Services:** AWS AppConfig, AWS Lambda (for serving the model).
* **MCP Implementation (Conceptual):**

```python
class ModelContext:
    def __init__(self):
        self.config = {}

    def get_config(self, key):
        return self.config.get(key)

    def set_config(self, key, value):
        self.config[key] = value
```

* **Populating the Context (in Lambda):**

```python
import boto3
import json

def get_appconfig_data(application_id, environment_id, configuration_profile_id):
    client = boto3.client('appconfigdata')
    # Start a configuration session, then fetch the latest configuration
    session = client.start_configuration_session(
        ApplicationIdentifier=application_id,
        EnvironmentIdentifier=environment_id,
        ConfigurationProfileIdentifier=configuration_profile_id
    )
    response = client.get_latest_configuration(
        ConfigurationToken=session['InitialConfigurationToken']
    )
    return json.loads(response['Configuration'].read().decode('utf-8'))

def lambda_handler(event, context):
    # Replace with your AppConfig IDs
    application_id = 'your-application-id'
    environment_id = 'your-environment-id'
    configuration_profile_id = 'your-configuration-profile-id'

    model_context = ModelContext()
    config = get_appconfig_data(application_id, environment_id, configuration_profile_id)
    model_context.config = config  # Store the entire config in the context

    # Load and run your model, passing the context
    model = load_model(model_context)  # Assuming load_model takes the context
    prediction = model.predict(event['input_data'])

    return {'statusCode': 200, 'body': json.dumps(prediction)}
```

* **Accessing Context in the Model:**

```python
def load_model(model_context):
    # Access configuration from the context
    learning_rate = model_context.get_config('learning_rate')
    return MyModel(learning_rate=learning_rate)

class MyModel:
    def __init__(self, learning_rate):
        self.learning_rate = learning_rate

    def predict(self, data):
        # Model logic here
        return data * self.learning_rate
```

**2. Secret Management with AWS Secrets Manager**

* **Use Case:** Securely access database credentials or API keys.
* **AWS Services:** AWS Secrets Manager, AWS Lambda.
* **MCP Implementation (Conceptual):** (the same `ModelContext` class as above, extended with secret accessors)

```python
class ModelContext:
    def __init__(self):
        self.config = {}
        self.secrets = {}

    # ...get_config/set_config as above...

    def get_secret(self, secret_name):
        return self.secrets.get(secret_name)

    def set_secret(self, secret_name, secret_value):
        self.secrets[secret_name] = secret_value
```

* **Populating the Context (in Lambda):**

```python
import boto3
import json

def get_secret(secret_name):
    client = boto3.client('secretsmanager')
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response['SecretString'])

def lambda_handler(event, context):
    # Replace with your secret name
    secret_name = 'your-database-credentials'

    model_context = ModelContext()
    model_context.secrets = {secret_name: get_secret(secret_name)}

    model = load_model(model_context)
    prediction = model.predict(event['input_data'])
    return {'statusCode': 200, 'body': json.dumps(prediction)}
```

* **Accessing Context in the Model:**

```python
def load_model(model_context):
    # Access database credentials from the context
    db_credentials = model_context.get_secret('your-database-credentials')
    db_host = db_credentials['host']
    db_user = db_credentials['username']
    db_password = db_credentials['password']
    # Connect to the database using the credentials
    # ... (database connection code) ...
    return MyModel(db_host, db_user, db_password)

class MyModel:
    def __init__(self, db_host, db_user, db_password):
        self.db_host = db_host
        self.db_user = db_user
        self.db_password = db_password

    def predict(self, data):
        # Model logic here, potentially using the database connection
        return data * 2
```

**3. Data Access with Amazon S3**

* **Use Case:** Load model data or feature definitions from S3.
* **AWS Services:** Amazon S3, AWS Lambda.
* **MCP Implementation (Conceptual):** (adding `get_data`/`set_data` methods)

```python
class ModelContext:
    def __init__(self):
        self.config = {}
        self.secrets = {}
        self.data = {}

    # ...config/secret accessors as above...

    def get_data(self, data_name):
        return self.data.get(data_name)

    def set_data(self, data_name, data_value):
        self.data[data_name] = data_value
```

* **Populating the Context (in Lambda):**

```python
import boto3
import json

def get_s3_data(bucket_name, key):
    s3 = boto3.client('s3')
    response = s3.get_object(Bucket=bucket_name, Key=key)
    return response['Body'].read().decode('utf-8')

def lambda_handler(event, context):
    # Replace with your S3 bucket and key
    bucket_name = 'your-s3-bucket'
    feature_definitions_key = 'feature_definitions.json'

    model_context = ModelContext()
    feature_definitions = get_s3_data(bucket_name, feature_definitions_key)
    model_context.data['feature_definitions'] = json.loads(feature_definitions)

    model = load_model(model_context)
    prediction = model.predict(event['input_data'])
    return {'statusCode': 200, 'body': json.dumps(prediction)}
```

* **Accessing Context in the Model:**

```python
def load_model(model_context):
    # Access feature definitions from the context
    feature_definitions = model_context.get_data('feature_definitions')
    return MyModel(feature_definitions)

class MyModel:
    def __init__(self, feature_definitions):
        self.feature_definitions = feature_definitions

    def predict(self, data):
        # Model logic here, using the feature definitions
        processed_data = {}
        for feature_name, definition in self.feature_definitions.items():
            if feature_name in data:
                processed_data[feature_name] = data[feature_name] * definition['scaling_factor']
        return processed_data
```

**4. Model Metadata with AWS SageMaker Model Registry**

* **Use Case:** Access model version, training parameters, and other metadata stored in the SageMaker Model Registry.
* **AWS Services:** Amazon SageMaker, SageMaker Model Registry, AWS Lambda.
* **MCP Implementation (Conceptual):** (adding `get_metadata`/`set_metadata` methods)

```python
class ModelContext:
    def __init__(self):
        self.config = {}
        self.secrets = {}
        self.data = {}
        self.metadata = {}

    # ...config/secret/data accessors as above...

    def get_metadata(self, metadata_key):
        return self.metadata.get(metadata_key)

    def set_metadata(self, metadata_key, metadata_value):
        self.metadata[metadata_key] = metadata_value
```

* **Populating the Context (in Lambda):**

```python
import boto3
import json

def get_model_metadata(model_package_arn):
    sagemaker = boto3.client('sagemaker')
    # For a versioned package, pass the full model package ARN, e.g.
    # arn:aws:sagemaker:<region>:<account>:model-package/<group-name>/<version>
    return sagemaker.describe_model_package(ModelPackageName=model_package_arn)

def lambda_handler(event, context):
    # Replace with your model package ARN
    model_package_arn = 'arn:aws:sagemaker:us-east-1:123456789012:model-package/your-model-package-group/1'

    model_context = ModelContext()
    metadata = get_model_metadata(model_package_arn)
    model_context.metadata['model_info'] = metadata  # Store the metadata in the context

    model = load_model(model_context)
    prediction = model.predict(event['input_data'])
    return {'statusCode': 200, 'body': json.dumps(prediction)}
```

* **Accessing Context in the Model:**

```python
def load_model(model_context):
    # Access model metadata from the context
    model_info = model_context.get_metadata('model_info')
    model_group = model_info['ModelPackageGroupName']
    model_version = model_info['ModelPackageVersion']
    return MyModel(model_group, model_version)

class MyModel:
    def __init__(self, model_group, model_version):
        self.model_group = model_group
        self.model_version = model_version

    def predict(self, data):
        # Model logic here, potentially logging the model version
        print(f"Using model version: {self.model_version}")
        return data * 3
```

**Key Considerations:**

* **Error Handling:** Implement robust error handling when fetching data from AWS services. Handle cases where secrets are missing, configurations are invalid, or S3 objects are not found.
* **Caching:** Cache data retrieved from AWS services to improve performance and reduce costs. Consider using a caching library like `functools.lru_cache` or a dedicated caching service like Amazon ElastiCache. Be mindful of cache invalidation.
* **Security:** Use IAM roles with least-privilege access to grant your Lambda functions or other services only the permissions they need to access AWS resources.
* **Testing:** Write unit tests to verify that your MCP implementation is working correctly and that your model can access the context as expected.
* **Serialization:** If you need to pass the context between different processes or services, you'll need to serialize it. Consider using JSON or a more efficient serialization format like Protocol Buffers.
* **Framework Integration:** If you're using a machine learning framework like TensorFlow or PyTorch, look for ways to integrate the MCP into the framework's model loading and serving mechanisms.
Some frameworks may have built-in support for configuration management or secret management.
* **Customization:** The `ModelContext` class provided in these examples is a basic starting point. You'll likely need to customize it to meet the specific requirements of your models and your AWS environment.
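
The caching consideration above can be sketched with `functools.lru_cache`. In this sketch the decorated function is a stand-in for a real AWS lookup (AppConfig, Secrets Manager, etc.); the counter only exists to show that repeated lookups are served from cache:

```python
import functools

fetch_count = {"n": 0}

@functools.lru_cache(maxsize=32)
def get_config_cached(key):
    # Stand-in for a real AWS call; a production version would also need
    # an invalidation strategy (e.g. keying the cache by config version).
    fetch_count["n"] += 1
    return {"key": key, "value": f"value-for-{key}"}

a = get_config_cached("learning_rate")
b = get_config_cached("learning_rate")  # cache hit: no second "fetch"
print(fetch_count["n"], a is b)  # 1 True
```

Note that `lru_cache` returns the same cached object on every hit, so mutating the returned dict would corrupt the cache; return copies if callers may modify the result.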

Python Mcp Server Sample

Cab Service MCP Server

Enables cab booking and management through natural conversation, allowing users to book rides, cancel bookings, and view driver details. Works with Google Maps MCP to provide comprehensive travel planning with automatic route optimization and cab arrangements between destinations.

Toy MCP Server

A simple reference implementation demonstrating MCP server basics with two toy tools: generating random animals and simulating 20-sided die rolls.
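
The die-roll tool can be as small as a single function. A minimal sketch; the function name is illustrative, not necessarily the server's actual tool name:

```python
import random

def roll_d20(rng=None):
    """Simulate one roll of a 20-sided die.

    Pass a seeded random.Random for reproducible rolls in tests.
    """
    rng = rng or random.Random()
    return rng.randint(1, 20)

rolls = [roll_d20() for _ in range(5)]
print(all(1 <= r <= 20 for r in rolls))  # True
```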

kintone MCP Server (Python3)

Enables AI assistants to interact with kintone data by providing comprehensive tools for record CRUD operations, file management, and workflow status updates. It supports secure authentication and automatic pagination to handle large datasets efficiently through the Model Context Protocol.
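
The automatic pagination the description mentions typically follows an offset/limit loop. A generic sketch; the `fetch_page` callable here stands in for a real kintone records API call and is an assumption, not the server's actual interface:

```python
def fetch_all_records(fetch_page, page_size=100):
    """Collect every record by paging until a short (final) page is returned.

    fetch_page(offset, limit) -> list of records; a stand-in for a real
    kintone records API call.
    """
    records, offset = [], 0
    while True:
        page = fetch_page(offset, page_size)
        records.extend(page)
        if len(page) < page_size:
            return records
        offset += page_size

# Fake backend with 250 records to exercise the loop:
data = list(range(250))
get_page = lambda offset, limit: data[offset:offset + limit]
print(len(fetch_all_records(get_page)))  # 250
```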

MCP Adapter

Automatically converts OpenAPI specifications into Model Context Protocol applications, enabling HTTP APIs to be managed as MCP services. It features a dynamic architecture that monitors file systems or Kubernetes ConfigMaps to update MCP tools in real-time.
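
Conceptually, such a conversion maps each OpenAPI operation to a tool definition. A simplified sketch; the field names follow common MCP tool conventions (`name`, `description`, `inputSchema`) and are an assumption, not this adapter's exact output:

```python
def openapi_to_tools(spec):
    """Map each OpenAPI operation to an MCP-style tool definition."""
    tools = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            tools.append({
                # Prefer operationId; otherwise derive a name from method + path
                "name": op.get("operationId") or f"{method}_{path.strip('/').replace('/', '_')}",
                "description": op.get("summary", ""),
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        p["name"]: {"type": p.get("schema", {}).get("type", "string")}
                        for p in op.get("parameters", [])
                    },
                },
            })
    return tools

spec = {"paths": {"/pets/{id}": {"get": {
    "operationId": "getPet",
    "summary": "Fetch a pet by id",
    "parameters": [{"name": "id", "in": "path", "schema": {"type": "integer"}}],
}}}}
tools = openapi_to_tools(spec)
print(tools[0]["name"], tools[0]["inputSchema"]["properties"]["id"]["type"])  # getPet integer
```

A real adapter would also map request bodies, responses, and authentication, and would re-run this conversion when the watched spec file or ConfigMap changes.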

Perplexity MCP Server

Integrates Perplexity AI's search-enhanced language models with Claude Desktop, providing three tools with different complexity levels for quick fact-checking, technical analysis, and deep research.

rdb-mcp-server

TWSE MCP Server

An MCP server for the Taiwan Stock Exchange (TWSE).

OpenFeature MCP Server

Provides OpenFeature SDK installation guidance for various programming languages and enables feature flag evaluation through the OpenFeature Remote Evaluation Protocol (OFREP). Supports multiple AI clients and can connect to any OFREP-compatible feature flag service.

Weather MCP Server (天气 MCP 服务器)

A weather-query MCP server built with FastMCP.

Developer Research MCP Server

Provides structured web search capabilities optimized for technical and software development content via providers like OpenRouter. It enables AI agents to perform research and retrieve relevant technical data in a consistent, programmatic JSON format.

mcp-kubernetes-ro

Provides read-only access to Kubernetes clusters for AI assistants.

SuperCollider MCP Server

Enables AI assistants to generate and control real-time audio synthesis through natural language descriptions using SuperCollider. Features 10 built-in synth types, pattern sequencing, audio recording, and server lifecycle management for creating sounds from simple English descriptions.

GameMaker Documentation MCP Server

Provides programmatic access to GameMaker Language (GML) documentation through MCP tools for function lookup, documentation search, and comprehensive coding guidance. Includes built-in GameMaker documentation with no additional setup required.

Google Calendar MCP Server by CData

Sassy Fact Check Bot

Generates witty, citation-backed responses to health myths and misinformation with automatic tone adjustment for sensitive topics. Integrates with Instagram DMs to fact-check viral claims with sass and sources.

Databricks MCP Server App

Deploys the Databricks AI Dev Kit MCP server as a Databricks App, exposing over 80 tools for interacting with workspace services like SQL warehouses, Unity Catalog, and AI/BI dashboards. It enables users to manage and query Databricks resources via natural language in the AI Playground using a Streamable HTTP transport.