MCP System Monitor Server
A cross-platform MCP (Model Context Protocol) server that provides comprehensive real-time system monitoring capabilities for LLMs. Built with FastMCP for easy integration with Claude Desktop and other MCP-compatible clients.
Features
System Monitoring
Basic System Monitoring:
- CPU Monitoring: Real-time usage, per-core statistics, frequency, temperature, detailed processor information (model, vendor, architecture, cache sizes)
- GPU Monitoring: Multi-vendor GPU support (NVIDIA with full metrics, Apple Silicon with comprehensive support including unified memory and core count, AMD/Intel with basic info)
- Memory Monitoring: RAM and swap usage, availability statistics
- Disk Monitoring: Space usage, filesystem information for all mounted drives
- Network Statistics: Interface-level traffic and error counters
- Process Monitoring: Top processes by CPU/memory usage
- System Information: OS details, hostname, uptime, architecture
Phase 1 Performance Monitoring:
- I/O Performance: Detailed disk I/O metrics, read/write rates, per-disk statistics, busy time analysis
- System Load: Load averages (1m, 5m, 15m), context switches, interrupts, running/blocked processes
- Enhanced Memory: Detailed memory statistics including buffers, cache, active/inactive memory, page faults, swap activity
- Enhanced Network: Network performance metrics with transfer rates, errors, drops, interface speed and MTU
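Most of these metrics come from psutil (see Acknowledgments). The following minimal sketch is only an illustration of how the Phase 1 data points map to psutil calls, not the project's actual collector code:

```python
import psutil

# System load: 1/5/15-minute load averages plus context switches and interrupts
load_1, load_5, load_15 = psutil.getloadavg()
cpu_stats = psutil.cpu_stats()  # ctx_switches, interrupts, soft_interrupts, syscalls

# I/O performance: cumulative per-disk read/write counters (rates = delta / interval)
disk_io = psutil.disk_io_counters(perdisk=True)

# Enhanced memory: buffers/cache are exposed on Linux; available fields vary by OS
mem = psutil.virtual_memory()
swap = psutil.swap_memory()

# Enhanced network: per-interface traffic/error counters, link speed and MTU
net_io = psutil.net_io_counters(pernic=True)
net_if = psutil.net_if_stats()  # isup, duplex, speed, mtu

print(f"load: {load_1:.2f} {load_5:.2f} {load_15:.2f}, ctx switches: {cpu_stats.ctx_switches}")
```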
MCP Tools Available
Basic System Monitoring (9 tools):
- get_current_datetime: Get the current local datetime in ISO format
- get_cpu_info: Get current CPU usage and statistics
- get_gpu_info: Get GPU information for all detected GPUs
- get_memory_info: Get RAM and swap usage
- get_disk_info: Get disk usage for all mounted drives
- get_system_snapshot: Get complete system state in one call
- monitor_cpu_usage: Monitor CPU usage over a specified duration
- get_top_processes: Get top processes by CPU or memory usage
- get_network_stats: Get network interface statistics
Phase 1 Performance Monitoring (6 tools):
- get_io_performance: Get detailed I/O performance metrics and rates
- get_system_load: Get system load averages and process statistics
- get_enhanced_memory_info: Get detailed memory statistics with caches/buffers
- get_enhanced_network_stats: Get enhanced network performance metrics
- get_performance_snapshot: Get complete performance monitoring snapshot
- monitor_io_performance: Monitor I/O performance over a specified duration with trend analysis
MCP Resources
Basic System Resources (3 resources):
- system://live/cpu: Live CPU usage data
- system://live/memory: Live memory usage data
- system://config: System configuration and hardware information
Phase 1 Performance Resources (3 resources):
- system://performance/io: Live I/O performance data
- system://performance/load: Live system load data
- system://performance/network: Live network performance data
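Tools and resources like those above are exposed through FastMCP decorators (see the Architecture section). The sketch below is a simplified, hypothetical example of that pattern, not the project's actual source:

```python
from mcp.server.fastmcp import FastMCP
import psutil

mcp = FastMCP("System Monitor")

@mcp.tool()
async def get_cpu_info() -> dict:
    """Get current CPU usage and statistics."""
    return {
        "usage_percent": psutil.cpu_percent(interval=0.1),
        "per_core": psutil.cpu_percent(interval=0.1, percpu=True),
        "core_count": psutil.cpu_count(logical=True),
    }

@mcp.resource("system://live/cpu")
async def live_cpu() -> str:
    """Live CPU usage data as a plain-text resource."""
    return f"CPU usage: {psutil.cpu_percent(interval=0.1)}%"

if __name__ == "__main__":
    mcp.run()
```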
GPU Support Details
NVIDIA GPUs:
- Full metrics: usage percentage, memory (used/total), temperature, power consumption
- Supports multiple NVIDIA GPUs
- Requires NVIDIA drivers and NVML libraries
Apple Silicon GPUs:
- Comprehensive support for M1, M2, and M3 chips
- GPU core count detection
- Unified memory reporting (shares system RAM)
- Metal API support detection
- Temperature monitoring (when available)
AMD/Intel GPUs:
- Basic detection and identification
- Limited metrics depending on platform and drivers
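For NVIDIA cards, metrics of this kind are read through NVML via the nvidia-ml-py bindings (see Acknowledgments). A minimal sketch of that approach, independent of the server's actual collector code:

```python
import pynvml  # installed via the nvidia-ml-py package

try:
    pynvml.nvmlInit()
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # .gpu / .memory in percent
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)         # .used / .total in bytes
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # reported in milliwatts
        print(f"{name}: {util.gpu}% GPU, {mem.used // 2**20}/{mem.total // 2**20} MiB, "
              f"{temp}°C, {power_w:.1f} W")
    pynvml.nvmlShutdown()
except pynvml.NVMLError:
    # No NVIDIA driver or NVML library available -- the server degrades gracefully in this case
    pass
```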
Requirements
- Python 3.10+
- Windows, macOS, or Linux
- GPU (optional): NVIDIA GPUs for full metrics; Apple Silicon GPUs fully supported on macOS
Installation
From GitHub
1. Clone the repository:

   git clone https://github.com/huhabla/mcp-system-monitor.git
   cd mcp-system-monitor

2. Install dependencies using uv (recommended):

   uv pip install -e .

   Or using pip:

   pip install -e .
Optional Dependencies
For Windows-specific features:
pip install mcp-system-monitor[win32]
Usage
Development Mode
Test the server with the MCP Inspector:
uv run mcp dev mcp_system_monitor_server.py
Claude Desktop Integration
Install the server in Claude Desktop:
uv run mcp install mcp_system_monitor_server.py --name "System Monitor"
Direct Execution
Run the server directly:
python mcp_system_monitor_server.py
MCP Servers JSON Config
Modify the following JSON template to set the path to the MCP server in your MCP client for Windows:
{
"mcpServers": {
"mpc-system-monitor": {
"command": "cmd",
"args": [
"/c",
"C:/Users/Sören Gebbert/Documents/GitHub/mcp-system-monitor/start_mpc_system_monitor.bat"
]
}
}
}
Modify the following JSON template to set the path to the MCP server in your MCP client for macOS:
{
"mcpServers": {
"mpc-system-monitor": {
"command": "/bin/zsh",
"args": [
"/Users/holistech/Documents/GitHub/mcp-system-monitor/start_mcp_system_monitor.sh"
]
}
}
}
Example Tool Usage
Once connected to Claude Desktop or another MCP client, you can use natural language to interact with the system monitor:
Basic System Monitoring:
- "Show me the current CPU usage"
- "What's my GPU temperature?"
- "How many GPU cores does my Apple M1 Max have?"
- "Show me GPU memory usage and whether it's unified memory"
- "How much disk space is available?"
- "Monitor CPU usage for the next 10 seconds"
- "Show me the top 5 processes by memory usage"
- "Get a complete system snapshot"
Phase 1 Performance Monitoring:
- "Show me detailed I/O performance metrics"
- "What's the current system load average?"
- "Monitor I/O performance for the next 30 seconds"
- "Show me enhanced memory statistics with cache information"
- "Get detailed network performance metrics"
- "Give me a complete performance snapshot"
Architecture
The server uses a modular collector-based architecture:
- BaseCollector: Abstract base class providing caching and async data collection
- Specialized Collectors: CPU, GPU, Memory, Disk, Network, Process, and System collectors
- Phase 1 Performance Collectors: IOPerformance, SystemLoad, EnhancedMemory, and EnhancedNetwork collectors
- Pydantic Models: Type-safe data models for all system information
- FastMCP Integration: Simple decorators for exposing tools and resources
Caching Strategy
All collectors implement intelligent caching to:
- Reduce system overhead from frequent polling
- Provide consistent data within time windows
- Allow configurable cache expiration
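A minimal sketch of this collector/caching pattern (class and field names here are illustrative assumptions, not the project's actual implementation):

```python
import asyncio
import time
from abc import ABC, abstractmethod
from typing import Any

class BaseCollector(ABC):
    """Abstract collector with simple time-based caching."""

    def __init__(self, cache_ttl: float = 2.0):
        self._cache: Any = None
        self._cache_time: float = 0.0
        self._cache_ttl = cache_ttl
        self._lock = asyncio.Lock()

    @abstractmethod
    async def collect(self) -> Any:
        """Gather fresh data from the system."""

    async def get_cached_data(self, max_age: float | None = None) -> Any:
        """Return cached data if it is younger than max_age, otherwise re-collect."""
        ttl = self._cache_ttl if max_age is None else max_age
        async with self._lock:
            if self._cache is None or time.monotonic() - self._cache_time > ttl:
                self._cache = await self.collect()
                self._cache_time = time.monotonic()
            return self._cache
```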
Testing
Comprehensive Test Suite
The project includes a comprehensive test suite with 100% coverage of all MCP tools, resources, and collectors:
Test Organization:
- test_mcp_system_monitor_server.py - Original basic collector tests
- test_mcp_system_monitor_server_comprehensive.py - Comprehensive MCP tools/resources tests
- test_mcp_server_integration.py - Integration tests for MCP server protocol compliance
- test_architecture_agnostic.py - Cross-platform tests focusing on data contracts
- conftest.py - Test configuration, fixtures, and mocking utilities
Running Tests
Run all tests:
pytest
Run tests by category:
pytest -m unit # Fast unit tests only
pytest -m integration # Integration tests only
pytest -m agnostic # Architecture/OS agnostic tests
pytest -m "not slow" # Exclude slow tests
pytest -m "unit and not slow" # Fast unit tests for CI
Run specific test suites:
pytest tests/test_mcp_system_monitor_server_comprehensive.py # All MCP endpoints
pytest tests/test_mcp_server_integration.py # Integration tests
pytest tests/test_architecture_agnostic.py # Cross-platform tests
Run with coverage:
pytest --cov=mcp_system_monitor_server --cov-report=html
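As an illustration of the architecture-agnostic, data-contract style of test, here is a minimal sketch (the collector class name, field names, and async test support via pytest-asyncio are assumptions based on the descriptions above):

```python
import pytest

# Collector and field names below are assumptions for illustration only
from mcp_system_monitor_server import CPUCollector

@pytest.mark.asyncio
@pytest.mark.agnostic
async def test_cpu_data_contract():
    data = await CPUCollector().get_cached_data()
    # Assert on the data contract (ranges and types), not on platform-specific values
    assert 0.0 <= data.usage_percent <= 100.0
    assert data.core_count >= 1
```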
Test Coverage
Complete Coverage:
- 15 MCP Tools (9 basic + 6 Phase 1 performance)
- 6 MCP Resources (3 basic + 3 Phase 1 performance)
- 11 Collectors (7 basic + 4 Phase 1 performance)
- Cross-platform compatibility testing
- Performance benchmarking and stress testing
- Error handling and edge case validation
Performance Benchmarks:
- System snapshot collection: < 5 seconds
- Individual tool calls: < 1 second each
- Concurrent operations: 20 parallel calls < 10 seconds
Platform Support
| Feature | Windows | macOS | Linux |
|---|---|---|---|
| CPU Monitoring | ✅ | ✅ | ✅ |
| GPU Monitoring (NVIDIA) | ✅ | ✅ | ✅ |
| GPU Monitoring (AMD) | ⚠️ | ❌ | ⚠️ |
| GPU Monitoring (Intel) | ⚠️ | ❌ | ⚠️ |
| GPU Monitoring (Apple) | ❌ | ✅ | ❌ |
| Memory Monitoring | ✅ | ✅ | ✅ |
| Disk Monitoring | ✅ | ✅ | ✅ |
| Network Statistics | ✅ | ✅ | ✅ |
| Process Monitoring | ✅ | ✅ | ✅ |
| CPU Temperature | ⚠️ | ⚠️ | ✅ |
⚠️ = Limited support, depends on hardware/drivers
Troubleshooting
GPU Monitoring Not Working
NVIDIA GPUs:
- Ensure NVIDIA drivers are installed
- Check if the nvidia-smi command works
- The server will gracefully handle missing GPU libraries
Apple Silicon GPUs:
- Supported on macOS with M1, M2, and M3 chips
- Provides comprehensive information including unified memory and GPU core count
- Uses the system_profiler command (available by default on macOS)
Permission Errors
- Some system information may require elevated privileges
- The server handles permission errors gracefully and skips inaccessible resources
High CPU Usage
- Adjust the monitoring frequency by modifying collector update intervals
- Use cached data methods to reduce system calls
- Default cache expiration is 2 seconds for most collectors
- Consider increasing the max_age parameter in get_cached_data() calls for less frequent updates
Performance Considerations
- The server uses intelligent caching to minimize system calls
- Each collector maintains its own cache with configurable expiration
- Continuous monitoring tools (like monitor_cpu_usage) bypass caching for real-time data
- For high-frequency polling, consider using the resource endpoints, which leverage caching
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- Built with FastMCP
- System monitoring via psutil
- NVIDIA GPU support via nvidia-ml-py