# DAZ Command MCP Server

A Model Context Protocol (MCP) server that provides session-based command execution with intelligent LLM-powered summarization.
## Features

- **Session Management**: Create, open, and manage isolated command execution sessions
- **Command Execution**: Run shell commands with timeout controls and working directory management
- **File Operations**: Read and write text files with comprehensive error handling
- **LLM Summarization**: Automatic session progress tracking using structured LLM responses
- **Event Logging**: Complete audit trail of all operations within sessions
- **Thread-Safe**: Robust concurrent operation with proper synchronization
## Installation

### Prerequisites

- Python 3.8+
- `fastmcp` library
- `dazllm` library for LLM integration

### Quick Setup

1. Clone this repository:

   ```bash
   git clone https://github.com/yourusername/daz-command-mcp.git
   cd daz-command-mcp
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Configure your LLM model in the script (default: `lm-studio:openai/gpt-oss-20b`)
## Usage

### Starting the Server

```bash
python main.py
```

### Available Tools
#### Session Management

- `daz_sessions_list()` - List all sessions and identify the active one
- `daz_session_create(name, description)` - Create and activate a new session
- `daz_session_open(session_id)` - Open and activate an existing session
- `daz_session_current()` - Get details of the currently active session
- `daz_session_close()` - Close the current session
- `daz_session_rename(old_name, new_name)` - Rename an existing session
- `daz_session_delete(session_name)` - Delete a session by moving it to `deleted_sessions`
#### Command & File Operations

All command and file operations require an active session and context parameters:

- `daz_command_cd(directory, current_task, summary_of_what_we_just_did, summary_of_what_we_about_to_do)` - Change working directory
- `daz_command_read(file_path, current_task, summary_of_what_we_just_did, summary_of_what_we_about_to_do)` - Read a text file
- `daz_command_write(file_path, content, current_task, summary_of_what_we_just_did, summary_of_what_we_about_to_do)` - Write a text file
- `daz_command_run(command, current_task, summary_of_what_we_just_did, summary_of_what_we_about_to_do, timeout=60)` - Execute shell commands
#### Learning & Instructions

- `daz_add_learnings(learning_info)` - Add important discoveries and context to the session
- `daz_instructions_read()` - Read current session instructions
- `daz_instructions_add(instruction)` - Add a new instruction to the session
- `daz_instructions_replace(instructions)` - Replace all instructions with a new list
- `daz_record_user_request(user_request)` - Record a user request at the start of multi-step tasks
### Example Workflow

```python
# Create a new session
daz_session_create("Setup Project", "Setting up a new Python project with dependencies")

# Navigate to project directory
daz_command_cd("/path/to/project",
               "Setting up Python project",
               "Created new session for project setup",
               "Navigate to project root directory")

# Run commands
daz_command_run("pip install -r requirements.txt",
                "Setting up Python project",
                "Navigated to project directory",
                "Install project dependencies")

# Read configuration
daz_command_read("config.json",
                 "Setting up Python project",
                 "Installed dependencies successfully",
                 "Review current configuration settings")

# Write new file
daz_command_write("setup.py", "...",
                  "Setting up Python project",
                  "Reviewed configuration file",
                  "Create package setup file")
```
## Architecture

### Session Storage

Sessions are stored as JSON files in the `sessions/` directory with the following structure:

```json
{
  "id": "unique-session-id",
  "name": "Session Name",
  "description": "Detailed description",
  "created_at": 1692123456.789,
  "updated_at": 1692123456.789,
  "summary": "LLM-generated summary",
  "progress": "Current progress status",
  "current_directory": "/current/working/dir",
  "events_count": 42
}
```
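Because each session is a standalone JSON file with the schema above, session metadata can be enumerated directly from disk. A minimal sketch, assuming the default `sessions/` directory; the `list_sessions` helper is illustrative and not part of the server's API:

```python
import json
from pathlib import Path


def list_sessions(sessions_dir="sessions"):
    """Load every session JSON file and return them newest-first."""
    sessions = []
    for path in Path(sessions_dir).glob("*.json"):
        with open(path, encoding="utf-8") as f:
            sessions.append(json.load(f))
    # Sort by the updated_at timestamp from the schema above
    return sorted(sessions, key=lambda s: s.get("updated_at", 0), reverse=True)
```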
### Event Logging

Every operation is logged with comprehensive details in `event_log.jsonl`:

```json
{
  "timestamp": 1692123456.789,
  "type": "run|read|write|cd|user_request|learning",
  "current_task": "The task being worked on",
  "summary_of_what_we_just_did": "What was just completed",
  "summary_of_what_we_about_to_do": "What's planned next",
  "inputs": {...},
  "outputs": {...},
  "duration": 0.123
}
```
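Since the log is JSON Lines (one event object per line), the audit trail can be replayed with a few lines of code. A sketch, assuming the field names shown above; both helpers are illustrative:

```python
import json


def read_events(log_path="event_log.jsonl"):
    """Parse a JSON Lines event log, skipping blank or malformed lines."""
    events = []
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                events.append(json.loads(line))
            except json.JSONDecodeError:
                continue  # tolerate a partially written last line
    return events


def total_run_time(events):
    """Sum the recorded duration of all shell-command ("run") events."""
    return sum(e.get("duration", 0.0) for e in events if e.get("type") == "run")
```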
### LLM Integration

The server uses asynchronous LLM processing to maintain session summaries:

- **Background Processing**: Summarization runs in a separate thread
- **Fault Tolerance**: LLM failures don't affect MCP operations
- **Structured Output**: Uses Pydantic models for reliable parsing
- **Configurable Model**: Easy to switch between different LLM providers
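The structured-output step amounts to validating the LLM's raw JSON reply against a Pydantic schema. A sketch with hypothetical field names; the server's actual models live in `src/models.py` and may differ:

```python
from pydantic import BaseModel


class SessionSummary(BaseModel):
    """Hypothetical schema for the structured reply expected from the LLM."""
    summary: str
    progress: str


def parse_llm_reply(raw_json):
    """Validate the LLM's JSON; raises pydantic.ValidationError on bad output,
    which the background worker can catch without disturbing MCP operations."""
    return SessionSummary.model_validate_json(raw_json)
```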
## Configuration

### LLM Model

Edit the `LLM_MODEL_NAME` constant in `src/models.py`:

```python
LLM_MODEL_NAME = "your-model-name"
```

### Session Directory

Sessions are stored in `./sessions/` by default. This can be changed via the `SESSIONS_DIR` constant in `src/models.py`.
## Error Handling

- **Graceful Degradation**: Operations continue even if LLM summarization fails
- **Comprehensive Logging**: All errors are logged to stderr
- **Input Validation**: Robust parameter checking and sanitization
- **File Safety**: Atomic file operations prevent corruption
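"Atomic file operations" typically means writing to a temporary file in the same directory and then renaming it over the target, so a crash mid-write never leaves a half-written session file. A sketch of the pattern, not the server's actual code:

```python
import os
import tempfile


def atomic_write(path, content):
    """Write content to a temp file beside the target, then atomically
    replace the target so readers never observe a partial write."""
    directory = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(content)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before the rename
        os.replace(tmp_path, path)  # atomic on both POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)  # clean up the temp file on failure
        raise
```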
## Integration

This MCP server integrates with Claude Desktop and other MCP-compatible clients. Add it to your MCP configuration:

```json
{
  "mcpServers": {
    "daz-command": {
      "command": "python",
      "args": ["/path/to/main.py"]
    }
  }
}
```
## Project Structure

```
daz-command-mcp/
├── README.md                # This file
├── main.py                  # Entry point
├── requirements.txt         # Dependencies
├── images/                  # Documentation images
├── sessions/                # Session storage (auto-created)
└── src/                     # Source code
    ├── __init__.py
    ├── command_executor.py  # Command execution logic
    ├── history_manager.py   # Session history management
    ├── mcp_tools.py         # MCP tool definitions
    ├── models.py            # Data models and constants
    ├── session_manager.py   # Session lifecycle management
    ├── summary_generator.py # LLM summary generation
    ├── summary_worker.py    # Background summarization worker
    ├── utils.py             # Utility functions
    └── tests/               # Unit tests
        ├── test_add_learnings.py
        ├── test_initialization_fix.py
        ├── test_llm_system_integration.py
        ├── test_new_parameter_system.py
        └── test_summary_generation.py
```
## Testing

Run the comprehensive test suite:

```bash
# Run all tests
python -m pytest src/tests/

# Run a specific test
python -m pytest src/tests/test_summary_generation.py -v

# Run with coverage
python -m pytest src/tests/ --cov=src
```
## Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes
4. Add tests if applicable
5. Commit your changes (`git commit -m 'Add amazing feature'`)
6. Push to the branch (`git push origin feature/amazing-feature`)
7. Submit a pull request
## License

[Add your license here]
## Dependencies

- `fastmcp`: MCP server framework
- `dazllm`: LLM integration library
## Support

For issues and questions, please open an issue on GitHub or contact [your contact information].

Built with ❤️ for the Model Context Protocol ecosystem