Uptrace MCP Server
Model Context Protocol (MCP) server for the Uptrace observability platform. Provides tools for querying traces, spans, and errors through Claude Desktop or other MCP clients.
Features
- Query error spans - Get detailed error information with traces and stack traces
- Query spans - Filter and search spans using Uptrace Query Language (UQL)
- Trace visualization - Get full trace trees with all related spans
- Aggregations - Group and aggregate spans by services, operations, etc.
- Query logs - Search and filter logs by severity, service, and custom UQL queries
- Query metrics - Query metrics using PromQL-compatible syntax
- Service discovery - List all services reporting telemetry data
- Query syntax documentation - Get comprehensive UQL syntax reference
Installation
Prerequisites
- Python 3.10 or higher
- Poetry (recommended) or pip
- Uptrace instance (self-hosted or cloud)
Using Poetry (recommended)
cd uptrace-mcp
poetry install
Using pip
pip install -e .
Configuration
Create a .env file in the project root or set environment variables:
UPTRACE_URL=https://uptrace.xxx
UPTRACE_PROJECT_ID=3
UPTRACE_API_TOKEN=your_token_here
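The MCP server reads these values from the environment (or the .env file). If you later use the Python client directly (see the Python Client API section below), you can pull the same values from the environment; a minimal sketch:
import os
from uptrace_mcp.client import UptraceClient

# Build a client from the same environment variables the MCP server uses.
client = UptraceClient(
    base_url=os.environ["UPTRACE_URL"],
    project_id=os.environ["UPTRACE_PROJECT_ID"],
    api_token=os.environ["UPTRACE_API_TOKEN"],
)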
Getting your Uptrace API token
- Log in to your Uptrace instance
- Go to your user profile
- Navigate to "Auth Tokens" section
- Create a new token with read access
Note: User auth tokens do not work with Single Sign-On (SSO). If using SSO, create a separate user account with API access.
Usage
As MCP Server
Cursor IDE
Detailed setup guide: See CURSOR_SETUP.md for comprehensive instructions.
To add this MCP server to Cursor:
- Open Cursor Settings (Cmd+, on macOS or Ctrl+, on Windows/Linux)
- Search for "MCP" or navigate to Features → Model Context Protocol
- Click Edit Config or open the MCP configuration file directly
The configuration file location:
- macOS: ~/Library/Application Support/Cursor/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json
- Windows: %APPDATA%\Cursor\User\globalStorage\saoudrizwan.claude-dev\settings\cline_mcp_settings.json
- Linux: ~/.config/Cursor/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json
Quick setup: You can use the example configuration file cursor-mcp-config.json.example as a template. Copy it to your Cursor MCP settings file and update the paths and credentials.
Important: The cwd parameter specifies the working directory where the command will be executed. This must be the root directory of your uptrace-mcp project (where pyproject.toml is located).
Add the following configuration (replace the paths with your actual project paths):
{
"mcpServers": {
"uptrace": {
"command": "/path/to/uptrace-mcp/.venv/bin/poetry",
"args": ["run", "uptrace-mcp"],
"cwd": "/path/to/uptrace-mcp",
"env": {
"UPTRACE_URL": "https://uptrace.xxx",
"UPTRACE_PROJECT_ID": "3",
"UPTRACE_API_TOKEN": "your_token_here"
}
}
}
}
Configuration parameters:
- command - Full path to the Poetry executable (or Python interpreter)
- args - Arguments passed to the command (["run", "uptrace-mcp"] for Poetry)
- cwd - Working directory; must be the project root directory (where pyproject.toml is located)
- env - Environment variables for the server
Note: If you're using Poetry, make sure to use the full path to the Poetry executable from your virtual environment (.venv/bin/poetry) or the system Poetry installation. Alternatively, you can use the Python interpreter directly:
{
"mcpServers": {
"uptrace": {
"command": "/Users/dimonb/work/pet/uptrace-mcp/.venv/bin/python",
"args": ["-m", "uptrace_mcp.server"],
"cwd": "/Users/dimonb/work/pet/uptrace-mcp",
"env": {
"UPTRACE_URL": "https://uptrace.xxx",
"UPTRACE_PROJECT_ID": "3",
"UPTRACE_API_TOKEN": "your_token_here"
}
}
}
}
After saving the configuration, restart Cursor. The Uptrace tools will be available in the MCP tools panel.
Claude Desktop
Add to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
{
"mcpServers": {
"uptrace": {
"command": "poetry",
"args": ["run", "uptrace-mcp"],
"cwd": "/Users/your-username/work/pet/uptrace-mcp",
"env": {
"UPTRACE_URL": "https://uptrace.xxx",
"UPTRACE_PROJECT_ID": "3",
"UPTRACE_API_TOKEN": "your_token_here"
}
}
}
}
Restart Claude Desktop and the Uptrace tools will be available.
Running Directly
# Using poetry
poetry run uptrace-mcp
# Or if installed with pip
uptrace-mcp
Available Tools
Spans & Traces
uptrace_search_spans
Search spans with custom filters using UQL. Use where _status_code = "error" to find error spans.
Parameters:
- time_gte (required): Start time in ISO format (YYYY-MM-DDTHH:MM:SSZ)
- time_lt (required): End time in ISO format (YYYY-MM-DDTHH:MM:SSZ)
- query (optional): UQL query string
- limit (optional): Maximum spans to return (default: 100)
Examples:
Search spans where service_name = "aktar" and http_status_code = 404
from 2025-12-08T09:00:00Z to 2025-12-08T10:00:00Z
Find error spans: where _status_code = "error"
from 2025-12-08T09:00:00Z to 2025-12-08T10:00:00Z
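For reference, the same search can be run directly with the Python client (see the Python Client API section below). A minimal sketch, using the placeholder connection values from Configuration and the time range from the example above:
from datetime import datetime
from uptrace_mcp.client import UptraceClient

# Placeholder connection values; see Configuration above.
client = UptraceClient(base_url="https://uptrace.xxx", project_id="3", api_token="your_token")

# Same filter as the example above: 404 responses from the "aktar" service.
spans = client.get_spans(
    time_gte=datetime(2025, 12, 8, 9, 0, 0),
    time_lt=datetime(2025, 12, 8, 10, 0, 0),
    query='where service_name = "aktar" and http_status_code = 404',
    limit=100,
)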
uptrace_get_trace
Get all spans for a specific trace ID.
Parameters:
- trace_id (required): Trace ID to retrieve
Example:
Get trace with ID 301015e15d95f1ea12af767ebf0ffcca
uptrace_search_groups
Search and aggregate spans by groups.
Parameters:
- time_gte (required): Start time in ISO format
- time_lt (required): End time in ISO format
- query (required): UQL query with grouping
- limit (optional): Maximum groups to return (default: 100)
Example:
Group spans by service_name and count errors
from 2025-12-08T09:00:00Z to 2025-12-08T10:00:00Z
query: "where _status_code = 'error' | group by service_name | count()"
uptrace_search_services
Search for services that have reported spans.
Parameters:
- hours (optional): Number of hours to look back (default: 24)
Example:
Search for all services from the last 48 hours
Logs
uptrace_search_logs
Search logs by text, severity, service name, or custom UQL query.
Parameters:
- hours (optional): Number of hours to look back (default: 3)
- search_text (optional): Text to search for in log messages (case-insensitive)
- severity (optional): Filter by log severity (DEBUG, INFO, WARN, ERROR, FATAL)
- service_name (optional): Filter by service name
- query (optional): Additional UQL query string for advanced filtering
- limit (optional): Maximum number of logs to return (default: 100)
Examples:
Search logs containing "error" from the last 3 hours
Search ERROR level logs from service "aktar" in the last 6 hours
Documentation
uptrace_get_query_syntax
Get UQL (Uptrace Query Language) syntax documentation. Returns operators, functions, examples, and common patterns for querying spans, logs, and metrics.
Parameters:
- None
Example:
Get UQL query syntax documentation
Logs
The client provides methods for querying logs (logs are represented as spans with _system = "log:all"):
- query_logs() - Query logs with filters by severity, service name, and custom UQL queries
- get_error_logs() - Get error logs (ERROR and FATAL severity levels)
Example usage:
from datetime import datetime, timedelta
from uptrace_mcp.client import UptraceClient
client = UptraceClient(
base_url="https://uptrace.xxx",
project_id="3",
api_token="your_token"
)
# Get error logs from the last hour
time_lt = datetime.utcnow()
time_gte = time_lt - timedelta(hours=1)
logs = client.get_error_logs(time_gte=time_gte, time_lt=time_lt, limit=100)
# Query logs with custom filters
logs = client.query_logs(
time_gte=time_gte,
time_lt=time_lt,
severity="ERROR",
service_name="my-service",
query='where log_message contains "database"',
limit=50
)
Metrics
The client provides methods for querying metrics using PromQL-compatible syntax:
- query_metrics() - Query metrics with PromQL-compatible format
- query_metrics_groups() - Query and aggregate metrics by groups
Example usage:
# Query metrics
result = client.query_metrics(
time_gte=datetime.utcnow() - timedelta(hours=1),
time_lt=datetime.utcnow(),
metrics=["system_cpu_utilization as $cpu"],
query=["avg($cpu) as cpu_avg"]
)
# Query metrics with grouping
result = client.query_metrics_groups(
time_gte=datetime.utcnow() - timedelta(hours=1),
time_lt=datetime.utcnow(),
metrics=["uptrace_tracing_spans as $spans"],
query=["sum($spans) as total_spans"],
group_by=["service_name"]
)
Additional Span Methods
The client also provides additional convenience methods for working with spans:
- get_span_by_id() - Get a specific span by its ID
- get_spans_by_parent() - Get child spans by parent span ID
- get_spans_by_system() - Filter spans by system type (http, db, rpc, etc.)
- get_slow_spans() - Get spans exceeding a duration threshold
- get_query_syntax() - Get comprehensive UQL syntax documentation
Example:
# Get query syntax documentation
syntax = client.get_query_syntax()
print(syntax["operators"])
print(syntax["aggregation_functions"])
print(syntax["examples"])
UQL Query Examples
Uptrace uses a SQL-like query language (UQL). You can get comprehensive syntax documentation using client.get_query_syntax(). Here are some examples:
Filter by status
where _status_code = "error"
Filter by service and time
where service_name = "aktar" and _dur_ms > 1000
HTTP errors
where _system = "httpserver" and http_status_code >= 400
Group and aggregate
group by service_name | count() | avg(_dur_ms)
Complex query
where _status_code = "error" and service_name in ("aktar", "gravipay")
| group by service_name, _name
| select service_name, _name, count(), p99(_dur_ms)
Log queries
where _system = "log:all" and log_severity in ("ERROR", "FATAL")
| group by service_name
| select service_name, count()
Metrics queries
metrics:
- system_cpu_utilization as $cpu
query:
- avg($cpu) as cpu_avg
- sum($cpu) by (service_name) as cpu_by_service
Python Client API
The MCP server uses the UptraceClient class internally. You can also use it directly in your Python code:
from datetime import datetime, timedelta
from uptrace_mcp.client import UptraceClient
client = UptraceClient(
base_url="https://uptrace.xxx",
project_id="3",
api_token="your_token"
)
# Query spans
spans = client.get_spans(
time_gte=datetime.utcnow() - timedelta(hours=1),
time_lt=datetime.utcnow(),
query='where _status_code = "error"',
limit=100
)
# Query logs
logs = client.query_logs(
time_gte=datetime.utcnow() - timedelta(hours=1),
time_lt=datetime.utcnow(),
severity="ERROR",
limit=50
)
# Query metrics
metrics = client.query_metrics(
time_gte=datetime.utcnow() - timedelta(hours=1),
time_lt=datetime.utcnow(),
metrics=["uptrace_tracing_spans as $spans"],
query=["sum($spans) as total"]
)
# Get query syntax documentation
syntax = client.get_query_syntax()
See examples/query_errors.py for more examples.
Development
Running tests
poetry run pytest
Code formatting
poetry run black src/
poetry run ruff check src/
Type checking
poetry run mypy src/
Architecture
uptrace-mcp/
├── src/
│   └── uptrace_mcp/
│       ├── __init__.py
│       ├── server.py       # MCP server with tool handlers
│       ├── client.py       # Uptrace API client
│       └── models.py       # Pydantic data models
├── tests/                  # Test suite
├── pyproject.toml          # Poetry configuration
└── README.md
Troubleshooting
MCP Server Not Found in Cursor
If you see "No server info found" error in Cursor:
- Verify the configuration file path - Make sure you're editing the correct MCP settings file:
  - macOS: ~/Library/Application Support/Cursor/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json
  - Windows: %APPDATA%\Cursor\User\globalStorage\saoudrizwan.claude-dev\settings\cline_mcp_settings.json
  - Linux: ~/.config/Cursor/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json
- Check the cwd parameter - This is critical! The cwd must point to the project root directory (where pyproject.toml is located): "cwd": "/full/path/to/uptrace-mcp". Common error: "Poetry could not find a pyproject.toml file" means cwd is wrong.
- Check file permissions - Ensure the configuration file is valid JSON and readable
- Verify Poetry/Python path - Test the command manually:
  cd /path/to/uptrace-mcp
  .venv/bin/poetry run uptrace-mcp --help
- Check environment variables - Make sure all required variables are set in the env section: UPTRACE_URL, UPTRACE_PROJECT_ID, UPTRACE_API_TOKEN
- Restart Cursor - After making changes, completely restart Cursor (not just reload)
- Check Cursor logs - Look for error messages in Cursor's developer console or logs
Connection Issues
If you get connection errors:
- Verify UPTRACE_URL is correct and includes the protocol (https://)
- Check that UPTRACE_PROJECT_ID is a valid number
- Ensure UPTRACE_API_TOKEN is valid and not expired
Permission Errors
If you get 403 Forbidden errors:
- Verify the token has access to the specified project
- Check if SSO is enabled (requires separate API user account)
No Data Returned
If queries return no data:
- Check the time range is correct (use UTC timezone)
- Verify spans exist in that time period via Uptrace UI
- Try a broader query without filters first
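For example, a broad check with the Python client (wide time range, no UQL filter, small limit) can confirm whether any spans are reachable at all. A minimal sketch, assuming query may be omitted just as it is optional for the search tool; connection values are placeholders:
from datetime import datetime, timedelta
from uptrace_mcp.client import UptraceClient

client = UptraceClient(base_url="https://uptrace.xxx", project_id="3", api_token="your_token")

# Last 24 hours (UTC), no filter, small limit - just checking that anything comes back.
spans = client.get_spans(
    time_gte=datetime.utcnow() - timedelta(hours=24),
    time_lt=datetime.utcnow(),
    limit=10,
)
print(spans)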
Testing the Server Manually
- Check configuration:
  cd /path/to/uptrace-mcp
  python check_config.py
- Test server startup:
  export UPTRACE_URL="https://uptrace.xxx"
  export UPTRACE_PROJECT_ID="3"
  export UPTRACE_API_TOKEN="your_token"
  .venv/bin/poetry run uptrace-mcp
  The server should start without errors. Press Ctrl+C to stop it.
- Verify MCP protocol: The server communicates via stdio, so you won't see output when run directly. If it starts without errors, it's working correctly. A scripted handshake check is sketched below.
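If you want a scripted check, the sketch below spawns the server and performs a minimal MCP handshake over stdio (newline-delimited JSON-RPC). It is an illustration only: the Poetry path and project path are placeholders, and the UPTRACE_* variables from the previous step must already be exported so the child process inherits them.
import json
import subprocess

# Spawn the server the same way an MCP client would (paths are placeholders).
proc = subprocess.Popen(
    [".venv/bin/poetry", "run", "uptrace-mcp"],
    cwd="/path/to/uptrace-mcp",
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

def send(msg):
    # MCP stdio transport: one JSON-RPC message per line.
    proc.stdin.write(json.dumps(msg) + "\n")
    proc.stdin.flush()

send({"jsonrpc": "2.0", "id": 1, "method": "initialize",
      "params": {"protocolVersion": "2024-11-05", "capabilities": {},
                 "clientInfo": {"name": "manual-test", "version": "0.0.1"}}})
print(proc.stdout.readline())   # initialize response
send({"jsonrpc": "2.0", "method": "notifications/initialized"})
send({"jsonrpc": "2.0", "id": 2, "method": "tools/list"})
print(proc.stdout.readline())   # should list the uptrace_* tools
proc.terminate()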
API Documentation
For more information about the Uptrace API and UQL syntax, see the official Uptrace documentation.
License
MIT
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.