CloudNativePG MCP Server
Enables LLMs to manage PostgreSQL clusters in Kubernetes using the CloudNativePG operator. Supports creating, scaling, and monitoring clusters, and retrieving detailed status information through natural language commands.
An MCP (Model Context Protocol) server for managing PostgreSQL clusters using the CloudNativePG operator in Kubernetes.
Overview
This MCP server enables LLMs to interact with PostgreSQL clusters managed by the CloudNativePG operator. It provides high-level workflow tools for:
- 📋 Listing and discovering PostgreSQL clusters
- 🔍 Getting detailed cluster status and health information
- 🚀 Creating new PostgreSQL clusters with best practices
- 📈 Scaling clusters up or down
- 🔄 Managing backups and restores (TODO)
- 📊 Monitoring cluster health and logs (TODO)
Prerequisites
- Kubernetes cluster with the CloudNativePG operator installed:
  kubectl apply -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.22/releases/cnpg-1.22.0.yaml
- Python 3.9+ installed
- kubectl configured to access your cluster
- Appropriate RBAC permissions for the service account (see RBAC Setup below)
Installation
1. Clone or download this repository
2. Install Python dependencies:
   pip install -r requirements.txt
3. Verify Kubernetes connectivity:
   kubectl get nodes
RBAC Setup
The MCP server needs permissions to interact with CloudNativePG resources. The CloudNativePG helm chart automatically creates ClusterRoles (cnpg-cloudnative-pg-edit, cnpg-cloudnative-pg-view), so you only need to create a ServiceAccount and bind it to these existing roles:
# Apply the RBAC configuration (ServiceAccount + RoleBindings)
kubectl apply -f rbac.yaml
This creates:
- A cnpg-mcp-server ServiceAccount
- A ClusterRoleBinding to cnpg-cloudnative-pg-edit (for managing clusters)
- A ClusterRoleBinding to view (for reading pods, events, logs)
Verify the setup:
# Check the service account was created
kubectl get serviceaccount cnpg-mcp-server
# Verify permissions
kubectl auth can-i get clusters.postgresql.cnpg.io --as=system:serviceaccount:default:cnpg-mcp-server
kubectl auth can-i create clusters.postgresql.cnpg.io --as=system:serviceaccount:default:cnpg-mcp-server
For read-only access: Change cnpg-cloudnative-pg-edit to cnpg-cloudnative-pg-view in rbac.yaml
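If you need to reconstruct rbac.yaml, a minimal version matching the description above might look like this (the binding names and the default namespace are illustrative; adjust to your environment):

```yaml
# ServiceAccount for the MCP server
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cnpg-mcp-server
  namespace: default
---
# Bind to the operator-provided edit role (managing clusters)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cnpg-mcp-server-edit
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cnpg-cloudnative-pg-edit
subjects:
- kind: ServiceAccount
  name: cnpg-mcp-server
  namespace: default
---
# Bind to the built-in view role (reading pods, events, logs)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cnpg-mcp-server-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: cnpg-mcp-server
  namespace: default
```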
Configuration
Transport Modes
The server supports two transport modes (currently only stdio is implemented):
1. stdio Transport (Default)
Communication over stdin/stdout. Best for local development and Claude Desktop integration.
# Run with default stdio transport
python cnpg_mcp_server.py
# Or explicitly specify stdio
python cnpg_mcp_server.py --transport stdio
Characteristics:
- ✅ Simple setup, no network configuration
- ✅ Automatic process management
- ✅ Secure (no network exposure)
- ❌ Single client per server instance
- ❌ Client and server must be on same machine
Use cases: Claude Desktop, local CLI tools, personal development
2. HTTP/SSE Transport (Future)
HTTP server with Server-Sent Events for remote access. Best for team environments and production deployments.
# Will be available in future version
python cnpg_mcp_server.py --transport http --host 0.0.0.0 --port 3000
When implemented, will provide:
- ✅ Multiple clients can connect
- ✅ Remote access capability
- ✅ Independent server lifecycle
- ✅ Better for team/production use
- ⚠️ Requires authentication/TLS setup
Use cases: Team-shared server, production deployments, Kubernetes services
The codebase is structured to easily add HTTP transport when needed. See the run_http_transport() function for implementation guidelines.
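As a rough illustration of that structure, the transport selection itself can be a thin argparse dispatch on top of the transport-agnostic tools (a stdlib-only sketch; select_runner and run_stdio_transport are hypothetical names, only run_http_transport() is mentioned in this README):

```python
import argparse

def parse_args(argv=None):
    """Parse the transport flags documented in this README."""
    parser = argparse.ArgumentParser(description="CloudNativePG MCP server")
    parser.add_argument("--transport", choices=["stdio", "http"], default="stdio")
    parser.add_argument("--host", default="0.0.0.0")       # used by HTTP only
    parser.add_argument("--port", type=int, default=3000)  # used by HTTP only
    return parser.parse_args(argv)

def select_runner(args):
    """Map parsed args to a transport entry point; tool code never sees this layer."""
    if args.transport == "stdio":
        return "run_stdio_transport"
    return "run_http_transport"
```

Because the dispatch happens after all tools are registered, adding a transport never touches the tool functions themselves.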
Kubernetes Configuration
The server uses your kubeconfig for authentication:
- Local development: Uses ~/.kube/config
- In-cluster: Automatically uses service account tokens
You can also set the KUBECONFIG environment variable:
export KUBECONFIG=/path/to/your/kubeconfig
Running the Server
Command-Line Options
# View all available options
python cnpg_mcp_server.py --help
# Run with stdio transport (default)
python cnpg_mcp_server.py
# Explicitly specify transport mode
python cnpg_mcp_server.py --transport stdio
# Run with HTTP transport (when implemented)
python cnpg_mcp_server.py --transport http --host 0.0.0.0 --port 3000
Standalone Mode (for testing)
python cnpg_mcp_server.py
Note: The server runs as a long-running process waiting for MCP requests. In stdio mode, it won't exit until interrupted. This is expected behavior.
With Claude Desktop
Add to your Claude Desktop configuration (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
{
"mcpServers": {
"cloudnative-pg": {
"command": "python",
"args": ["/path/to/cnpg_mcp_server.py"],
"env": {
"KUBECONFIG": "/path/to/.kube/config"
}
}
}
}
With Docker/Kubernetes Deployment
For production deployments, you can containerize the server:
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY cnpg_mcp_server.py .
CMD ["python", "cnpg_mcp_server.py"]
Deploy as a Kubernetes service that can be accessed by your LLM application.
Available Tools
1. list_postgres_clusters
List all PostgreSQL clusters in the Kubernetes cluster.
Parameters:
- namespace (optional): Filter by namespace, or omit for all namespaces
- detail_level: "concise" (default) or "detailed"
Example:
List all PostgreSQL clusters in production namespace
2. get_cluster_status
Get detailed status for a specific cluster.
Parameters:
- namespace (required): Namespace of the cluster
- name (required): Name of the cluster
- detail_level: "concise" (default) or "detailed"
Example:
Get detailed status for the main-db cluster in production namespace
3. create_postgres_cluster
Create a new PostgreSQL cluster with high availability.
Parameters:
- namespace (required): Target namespace
- name (required): Cluster name
- instances (default: 3): Number of PostgreSQL instances
- storage_size (default: "10Gi"): Storage per instance
- postgres_version (default: "16"): PostgreSQL version
- storage_class (optional): Kubernetes storage class
Example:
Create a new PostgreSQL cluster named 'app-db' in the production namespace with 5 instances and 100Gi storage
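For reference, the manifest generated for that example would look roughly like this (field names follow the CloudNativePG v1 Cluster API; the exact manifest the tool produces may include further defaults):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-db
  namespace: production
spec:
  instances: 5
  imageName: ghcr.io/cloudnative-pg/postgresql:16  # derived from postgres_version
  storage:
    size: 100Gi
    # storageClass: set only when the storage_class parameter is given
```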
4. scale_postgres_cluster
Scale a cluster by changing the number of instances.
Parameters:
- namespace (required): Namespace of the cluster
- name (required): Cluster name
- instances (required): New number of instances (1-10)
Example:
Scale the app-db cluster in production to 5 instances
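Scaling ultimately comes down to patching spec.instances on the Cluster resource. A stdlib-only sketch of the merge-patch body the tool might send (build_scale_patch is a hypothetical helper; the 1-10 bound mirrors the parameter description above):

```python
import json

def build_scale_patch(instances: int) -> str:
    """Return a JSON merge-patch body for a new instance count (sketch)."""
    if not 1 <= instances <= 10:
        raise ValueError("instances must be between 1 and 10")
    return json.dumps({"spec": {"instances": instances}})

print(build_scale_patch(5))  # {"spec": {"instances": 5}}
```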
Architecture
Design Principles
This MCP server follows agent-centric design principles:
- Workflow-based tools: Each tool completes a meaningful workflow, not just a single API call
- Optimized for context: Responses are concise by default, with detailed mode available
- Actionable errors: Error messages suggest next steps
- Natural naming: Tool names reflect user intent, not just API endpoints
Transport Layer Architecture
The server is designed with transport-agnostic core logic, making it easy to add new transport modes without rewriting tool implementations:
┌─────────────────────────────────────────────┐
│ MCP Tool Layer │
│ (list_clusters, create_cluster, etc.) │
│ ↓ │
│ Core business logic is transport-agnostic │
└─────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────┐
│ Transport Layer │
│ ┌──────────────┐ ┌─────────────┐ │
│ │ stdio │ │ HTTP/SSE │ │
│ │ (current) │ │ (future) │ │
│ └──────────────┘ └─────────────┘ │
└─────────────────────────────────────────────┘
Why this matters:
- All tool functions (decorated with @mcp.tool()) work with any transport
- Adding HTTP transport only requires implementing run_http_transport()
- No changes needed to business logic when switching transports
- Can run both transports simultaneously if needed
To add HTTP/SSE transport later:
1. Uncomment the HTTP dependencies in requirements.txt
2. Install them: pip install "mcp[sse]" starlette uvicorn
3. Implement the run_http_transport() function (skeleton already provided)
4. Add authentication/authorization middleware
5. Configure TLS for production
Components
- Kubernetes Client: Uses the kubernetes Python client for API access
- CloudNativePG CRDs: Interacts with Custom Resource Definitions
- Async operations: All I/O is async for better performance
- Error handling: Comprehensive error formatting with suggestions
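For illustration, a helper in the spirit of the format_error_message referenced in the Development skeleton could look like this (the hint table and matching logic here are invented, not the server's actual implementation):

```python
def format_error_message(exc: Exception, context: str) -> str:
    """Turn a raw exception into an actionable message (sketch)."""
    hints = {
        "403": "Permission denied: check RBAC with 'kubectl auth can-i'.",
        "404": "Resource not found: verify the namespace and cluster name.",
    }
    text = str(exc)
    # Pick a suggestion based on the first matching status code, else a default
    suggestion = next(
        (hint for code, hint in hints.items() if code in text),
        "Check connectivity with 'kubectl cluster-info'.",
    )
    return f"Error while {context}: {text}\nSuggestion: {suggestion}"
```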
Development
Adding New Tools
To add a new tool:
1. Create a Pydantic model for input validation
2. Implement the tool function with the @mcp.tool() decorator
3. Add a comprehensive docstring following the format in existing tools
4. Implement error handling with actionable messages
5. Test thoroughly
Example skeleton:
class MyToolInput(BaseModel):
    """Input for my_tool."""
    param1: str = Field(..., description="Description with examples")

@mcp.tool()
async def my_tool(param1: str) -> str:
    """
    Tool description.

    Detailed explanation of what this tool does and when to use it.

    Args:
        param1: Parameter description with usage guidance

    Returns:
        Description of return value format

    Examples:
        - Example usage 1
        - Example usage 2

    Error Handling:
        - Common error scenarios and how to resolve them
    """
    try:
        # Implementation
        result = await some_async_operation(param1)
        return format_response(result)
    except Exception as e:
        return format_error_message(e, "context description")
Testing
Run syntax check:
python -m py_compile cnpg_mcp_server.py
Test with a real Kubernetes cluster:
# In one terminal (use tmux to keep it running)
python cnpg_mcp_server.py
# In another terminal, test with MCP client or Claude Desktop
TODO: Upcoming Features
- [ ] Delete cluster tool
- [ ] Backup management (list, create, restore)
- [ ] Log retrieval from pods
- [ ] SQL query execution (with safety guardrails)
- [ ] Database and user management
- [ ] Connection information retrieval
- [ ] Monitoring and metrics integration
- [ ] Certificate and secret management
Troubleshooting
"Permission denied" errors
Ensure your service account has the necessary RBAC permissions. Check:
kubectl auth can-i get clusters.postgresql.cnpg.io --as=system:serviceaccount:default:cnpg-mcp-server
"Connection refused" or "Cluster unreachable"
Verify kubectl connectivity:
kubectl cluster-info
kubectl get nodes
"No module named 'mcp'"
Install dependencies:
pip install -r requirements.txt
Server hangs
This is expected behavior - the server waits for MCP requests over stdio. Run in background or use process manager.
Security Considerations
- RBAC: Apply principle of least privilege - only grant necessary permissions
- Secrets: Never log or expose database credentials
- Input validation: All inputs are validated with Pydantic models
- Namespace isolation: Consider restricting to specific namespaces
- Audit logging: Enable Kubernetes audit logs for compliance
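For example, namespace isolation can be implemented by replacing the ClusterRoleBinding from the RBAC Setup section with a namespaced RoleBinding, which grants the edit role in a single namespace only (a sketch; the production namespace is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cnpg-mcp-server-edit
  namespace: production   # permissions apply only in this namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cnpg-cloudnative-pg-edit
subjects:
- kind: ServiceAccount
  name: cnpg-mcp-server
  namespace: default
```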
License
[Your License Here]
Contributing
Contributions are welcome! Please:
- Follow the existing code style
- Add comprehensive docstrings
- Include error handling
- Test with real Kubernetes clusters
- Update README with new features