Linux Diagnostics MCP Server
This MCP server provides read-only Linux system diagnostics tools for inspecting system information, processes, and log snapshots. It enables AI models to analyze Linux system health, troubleshoot issues, and review security through workflow prompts and HTTP transport with API key authentication.
Linux Diagnostics MCP Server - Lecture Demo
A Python/Linux adaptation of the original MCPDemo teaching repository. This repo now reaches Milestone 7 parity for the public teaching flow: compact system inspection, Linux process drill-down, log snapshots as resources, workflow prompts, authenticated MCP over HTTP on /mcp, explicit elicitation before process termination, sampling-assisted Linux diagnostics, and allowed-root proc/sys snapshots.
What This Demo Shows
This lecture demo now includes:
- ✅ Tools: Linux diagnostics tools for `get_system_info`, `get_process_list`, `get_process_by_id`, `get_process_by_name`, and elicitation-gated `kill_process`
- ✅ Resources: paged `syslog://snapshot/...` log snapshot resources
- ✅ Prompts: MCP workflow prompts for error analysis, CPU investigation, security review, and health diagnosis
- ✅ HTTP transport: streamable MCP over `http://127.0.0.1:5000/mcp`
- ✅ API key auth: `X-API-Key` header or `?apiKey=secure-mcp-key`
- ✅ AI Chat Client: a Python Azure OpenAI client that launches the local HTTP server, lets the model call MCP tools, prompts, and resources, and handles local form elicitation in the terminal
- ✅ Python 3.12 implementation with the official MCP Python SDK
- ✅ Multiple testing methods
- ✅ Milestone 5 elicitation for `kill_process`
- ✅ Milestone 6 sampling-assisted Linux diagnostics
- ✅ Milestone 7 roots for read-only `/proc` and `/sys` snapshots
Quick Start
1. Install
Server-only install:
```sh
python3 -m pip install --user --break-system-packages -e .
```
Install the lecture chat client extras:
```sh
python3 -m pip install --user --break-system-packages -e '.[llm]'
```
2. Quick Smoke Test (No LLM)
```sh
python3 scripts/smoke_test.py
```
This script:
- Starts the local HTTP MCP server
- Verifies `401 Unauthorized` without an API key
- Performs the MCP initialize handshake on `/mcp`
- Confirms `mcp-session-id` flow works across requests
- Discovers tools, prompts, and resource templates
- Exercises the system, process, log snapshot, proc snapshot, and sampling-assisted diagnostics flows
- Verifies `kill_process` fails safely when the client does not advertise elicitation support
- Verifies the lecture chat client fails safely when Azure OpenAI settings are missing
3. Run the Server Manually
```sh
python3 -m mcp_linux_diag_server
```
The server listens on:
- endpoint: `http://127.0.0.1:5000/mcp`
- demo API key: `secure-mcp-key`
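The first request the smoke test sends against this endpoint is a JSON-RPC `initialize` call. A minimal standard-library sketch of building that request is shown below; the endpoint and demo key come from the defaults above, while the `protocolVersion` string and the client name are illustrative assumptions, not values pinned by this repo:

```python
import json
import urllib.request

MCP_URL = "http://127.0.0.1:5000/mcp"
API_KEY = "secure-mcp-key"  # demo key from this README

def build_initialize_request() -> urllib.request.Request:
    """Build the JSON-RPC initialize call sent before any tool use."""
    body = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # assumption: match your SDK's version
            "capabilities": {},
            "clientInfo": {"name": "readme-sketch", "version": "0.1"},
        },
    }
    return urllib.request.Request(
        MCP_URL,
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            # Streamable HTTP responses may arrive as plain JSON or as SSE.
            "Accept": "application/json, text/event-stream",
            "X-API-Key": API_KEY,
        },
        method="POST",
    )
```

Sending the same request without the `X-API-Key` header is what the smoke test uses to confirm the `401 Unauthorized` path.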
4. Test with MCP Inspector or VS Code MCP config
Start the server in one terminal, then connect using the HTTP endpoint above.
This repo includes `.vscode/mcp.json` with the required header:
```json
{
  "servers": {
    "linux-diag-demo": {
      "url": "http://127.0.0.1:5000/mcp",
      "headers": {
        "X-API-Key": "secure-mcp-key"
      }
    }
  }
}
```
If your inspector accepts a URL directly, this query-string form also works:
```
http://127.0.0.1:5000/mcp?apiKey=secure-mcp-key
```
5. Use the Lecture Chat Client
Copy the sample environment file and fill in your local Azure OpenAI settings:
```sh
cp .env.example .env.local
$EDITOR .env.local
python3 -m mcp_linux_diag_server.client --prompt "Summarize this machine."
```
To mirror the original .NET credential flow more closely, set:
```sh
MCP_DEMO_AZURE_OPENAI_USE_DEFAULT_CREDENTIAL=true
```
and omit the API key.
Run interactive chat:
```sh
python3 -m mcp_linux_diag_server.client
```
Or run a single prompt:
```sh
python3 -m mcp_linux_diag_server.client --prompt "What is the system information?"
```
The Tools
System Information
- `get_system_info` - Returns a compact Linux or WSL system snapshot
  - Host name
  - Current user
  - Linux distribution description
  - Kernel release
  - Architecture
  - Logical CPU count
  - Python runtime
  - Current working directory
  - Uptime
  - Load averages
  - Memory summary
  - WSL detection flag
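Most of these fields can be approximated with the standard library alone. The sketch below is illustrative, not the server's actual schema or implementation (which lives in `system_info.py`); the dictionary keys are assumptions:

```python
import os
import platform

def sketch_system_info() -> dict:
    """Collect a compact system snapshot roughly matching the fields above."""
    info = {
        "host_name": platform.node(),
        "kernel_release": platform.release(),
        "architecture": platform.machine(),
        "logical_cpus": os.cpu_count(),
        "python_runtime": platform.python_version(),
        "working_directory": os.getcwd(),
        # WSL kernels advertise "microsoft" in their release string.
        "is_wsl": "microsoft" in platform.release().lower(),
    }
    # Load averages are POSIX-only; guard for portability.
    if hasattr(os, "getloadavg"):
        info["load_averages"] = os.getloadavg()
    return info
```

Uptime, distribution description, and the memory summary would come from `/proc/uptime`, `/etc/os-release`, and `/proc/meminfo` on a real Linux host.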
Process Inspection
- `get_process_list` - Returns a lightweight list of running processes with names and PIDs
- `get_process_by_id` - Returns detailed Linux process information for one PID
- `get_process_by_name` - Returns paged detailed process information for a process name
  - Defaults to `page_number=1`
  - Defaults to `page_size=5`
  - Keeps the list-first, detail-second teaching flow from the original demo
- `kill_process` - Terminates a Linux process only after explicit elicitation
  - If `process_id` is omitted, the server samples the top CPU consumers and asks the client to choose one
  - The server always requires the typed confirmation phrase `CONFIRM PID {pid}`
  - The lecture client handles these prompts locally in the terminal when stdin/stdout are interactive
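The typed confirmation gate can be sketched as an exact-match check. This is illustrative only; the server's actual validation lives in `processes.py` and may differ in detail:

```python
def confirmation_matches(typed: str, pid: int) -> bool:
    """True only when the user typed the exact phrase CONFIRM PID {pid}."""
    return typed.strip() == f"CONFIRM PID {pid}"

# A kill request proceeds only on an exact match:
assert confirmation_matches("CONFIRM PID 4242", 4242)
assert not confirmation_matches("confirm pid 4242", 4242)  # case matters
assert not confirmation_matches("CONFIRM PID 4243", 4242)  # wrong PID
```

Requiring the PID inside the phrase means a stale or copy-pasted confirmation cannot terminate a different process than the one elicited.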
- `troubleshoot_linux_diagnostics` - Uses sampling to convert a natural-language Linux diagnostics question into a validated `/proc` or `/sys` read
  - The server validates the sampled path and field against an allowlist before reading anything
  - Python adaptation: the sampled query is a single safe `PATH` or `PATH | grep FIELD` line instead of WQL
  - The server then samples again to summarize the observation back to the user
- `create_proc_snapshot` - Creates an immutable read-only snapshot from an allowed `/proc` or `/sys` path and returns resource URIs
  - File snapshots page line-by-line content
  - Directory snapshots page deterministic child metadata without following symlinks
  - Enforces explicit allowed roots before reading anything
- `request_proc_access` - Uses elicitation to request read-only access to an additional `/proc` or `/sys` root
  - Adds the approved root to the server's in-memory allowlist
  - Lets the model ask for access proactively before a blocked snapshot attempt
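Enforcing allowed roots before any read can be sketched as a lexical path check. This is an illustrative sketch, not the server's actual implementation; a production check should additionally resolve symlinks before comparing, which this purely lexical version does not do:

```python
import posixpath

ALLOWED_ROOTS = ("/proc", "/sys")  # grows at runtime via request_proc_access

def is_path_allowed(candidate: str, roots=ALLOWED_ROOTS) -> bool:
    """Collapse '.' and '..' segments, then require an allowed ancestor."""
    normalized = posixpath.normpath(candidate)
    # The trailing '/' guard stops prefix tricks like '/process' matching '/proc'.
    return any(normalized == r or normalized.startswith(r + "/") for r in roots)

assert is_path_allowed("/proc/meminfo")
assert not is_path_allowed("/proc/../etc/passwd")  # traversal is normalized away
assert not is_path_allowed("/process")             # prefix trick rejected
```

Approving a root through `request_proc_access` would then simply append it to the in-memory allowlist passed as `roots`.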
Log Snapshots
- `create_log_snapshot` - Creates an immutable snapshot from a common Linux log file and returns resource URIs
  - Supports `system`, `security`, `kernel`, and `package` log groups
  - Optional `filter_text` narrows the snapshot to matching lines
  - Returns a base resource URI plus a paginated resource template
Resources
- `syslog://snapshot/{snapshot_id}` - Reads a stored Linux log snapshot with default pagination
- `syslog://snapshot/{snapshot_id}?limit={limit}&offset={offset}` - Reads a specific page from a stored snapshot
- `proc://snapshot/{snapshot_id}` - Reads a stored proc/sys snapshot with default pagination
- `proc://snapshot/{snapshot_id}?limit={limit}&offset={offset}` - Reads a specific page from a stored proc/sys snapshot
Every resource read returns:
- snapshot metadata
- captured entries
- pagination metadata (`total_count`, `returned_count`, `limit`, `offset`, `has_more`, `next_offset`)
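The pagination metadata can be sketched as a pure function over the stored entries. Field names follow the list above; the default `limit` of 50 is an assumption for illustration:

```python
def paginate(entries: list, limit: int = 50, offset: int = 0) -> dict:
    """Return one page plus the metadata fields every resource read includes."""
    page = entries[offset : offset + limit]
    has_more = offset + len(page) < len(entries)
    return {
        "entries": page,
        "total_count": len(entries),
        "returned_count": len(page),
        "limit": limit,
        "offset": offset,
        "has_more": has_more,
        # next_offset lets the client walk pages without computing arithmetic itself.
        "next_offset": offset + len(page) if has_more else None,
    }
```

With 120 entries, `paginate(entries, limit=50, offset=100)` returns the final 20 entries with `has_more` false and `next_offset` of `None`.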
Prompts
- `AnalyzeRecentApplicationErrors` - Error-focused log analysis workflow
- `ExplainHighCpu` - Correlate CPU-heavy processes with Linux logs
- `DetectSecurityAnomalies` - Review suspicious processes plus auth/security log evidence
- `DiagnoseSystemHealth` - End-to-end system health workflow
- `TroubleshootLinuxComponent` - Focused deep-dive workflow that steers the agent toward `troubleshoot_linux_diagnostics`
Projects
src/mcp_linux_diag_server/server.py
The authenticated HTTP MCP server exposing the Milestone 1-7 diagnostics tools, resources, and workflow prompts.
src/mcp_linux_diag_server/client.py
The lecture chat client that:
- launches the local HTTP server
- connects over streamable HTTP with the demo API key
- exposes MCP prompt/resource APIs as helper tools for the model
- fulfills MCP form elicitation in the local terminal when the model triggers `kill_process`
- fulfills MCP sampling requests so the server can synthesize safe Linux diagnostics queries and summaries
- teaches the model to request proc/sys access before snapshotting blocked paths
- executes tool-calling turns
Testing Methods
| Method | Visual | Interactive | LLM | Best For |
|---|---|---|---|---|
| `python3 scripts/smoke_test.py` | ❌ No | ❌ No | ❌ No | quick verification of M1-M7 server behavior |
| MCP Inspector / `.vscode/mcp.json` | ✅ Yes | ✅ Yes | ❌ No | development, debugging, teaching |
| `python3 -m mcp_linux_diag_server.client` | ❌ No | ✅ Yes | ✅ Yes | lecture demo flow |
For the Milestone 1 validation checklist that still underpins the base lecture flow, see M1_VALIDATION_GUIDE.md.
Project Structure
```
MCPPythonDemo/
├── README.md
├── LICENSE.txt
├── pyproject.toml
├── .env.example
├── .vscode/
│   └── mcp.json
├── scripts/
│   └── smoke_test.py
├── src/
│   └── mcp_linux_diag_server/
│       ├── __main__.py
│       ├── client.py
│       ├── http_config.py
│       ├── server.py
│       └── tools/
│           ├── log_snapshots.py
│           ├── proc_snapshots.py
│           ├── processes.py
│           └── system_info.py
└── tests/
    ├── http_harness.py
    ├── test_client.py
    ├── test_m1_smoke.py
    ├── test_m2_smoke.py
    ├── test_m3_smoke.py
    ├── test_m4_http.py
    ├── test_log_snapshots.py
    ├── test_processes.py
    └── test_system_info.py
```
Requirements
- Python 3.12+
- `mcp[cli]`
- Azure OpenAI only if you want to run the lecture chat client
Milestones
✅ Milestone 1 - Minimal diagnostics tool over stdio plus lecture chat client
✅ Milestone 2 - Process inspection
✅ Milestone 3 - Log snapshot resources and prompts
✅ Milestone 4 - HTTP transport and security
✅ Milestone 5 - Elicitation-backed kill_process
✅ Milestone 6 - Sampling-assisted Linux diagnostics
✅ Milestone 7 - Roots and proc/sys snapshots
License
MIT. See LICENSE.txt.