Log Analyzer MCP Server
100% Local | FAISS-Powered | No Cloud APIs | 30-150x Faster
A Model Context Protocol (MCP) server for intelligent log analysis with semantic search, error detection, and pattern clustering. Runs entirely locally using sentence-transformers and FAISS.
Features
- Semantic Search - Find logs by meaning, not just keywords
- FAISS Vector Search - 30-150x faster than traditional search
- Smart Error Detection - Automatic error pattern clustering
- Intelligent Caching - Lightning-fast re-indexing
- 100% Local - No cloud APIs, no costs, privacy-first
- Hybrid Retrieval - Combines semantic + lexical matching
Quick Start (Production)
Using uvx (Recommended)
# Install uv
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
Claude Desktop Config:
{
"mcpServers": {
"log-analyzer": {
"command": "uvx",
"args": [
"--from",
"git+https://github.com/suriya-ML/log-checker-mcp.git",
"log-analyzer-mcp"
]
}
}
}
Config Location: C:\Users\YOUR-USERNAME\AppData\Roaming\Claude\claude_desktop_config.json
Restart Claude Desktop and you're done!
Manual Installation
1. Clone the Repository
git clone https://github.com/suriya-ML/log-checker-mcp.git
cd log-checker-mcp
2. Install Dependencies
pip install -r requirements.txt
3. Configure Environment Variables
Create a .env file in the project root:
cp .env.example .env
Edit .env and add your AWS credentials:
AWS_ACCESS_KEY_ID=your_access_key_here
AWS_SECRET_ACCESS_KEY=your_secret_key_here
AWS_REGION=us-east-2
Usage
Running the Server Locally
python server.py
Configuring with Claude Desktop
Add to your Claude Desktop configuration file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"log-analyzer": {
"command": "python",
"args": ["/absolute/path/to/log-analyzer-mcp/server.py"],
"env": {
"AWS_ACCESS_KEY_ID": "your_key",
"AWS_SECRET_ACCESS_KEY": "your_secret",
"AWS_REGION": "us-east-2"
}
}
}
}
Available Tools
1. fetch_local_logs
Fetch and chunk log files from a local directory.
Parameters:
- input_folder (optional): Path to folder containing log files (default: ./logs)
- chunk_size (optional): Size of each chunk in characters (default: 4096)
- overlap (optional): Overlap between chunks in characters (default: 1024)
Example:
Use fetch_local_logs to process logs from /path/to/logs with chunk_size 5000
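The overlap between consecutive chunks is what preserves context across chunk boundaries. The project's actual chunking code lives in `chunking_utils.py` and isn't reproduced here; the sketch below is a minimal, hypothetical illustration of character-based chunking with overlap, using the documented defaults:

```python
def chunk_text(text: str, chunk_size: int = 4096, overlap: int = 1024) -> list[str]:
    """Split text into overlapping chunks so log context spans boundaries.

    Each chunk starts (chunk_size - overlap) characters after the previous
    one, so the last `overlap` characters of a chunk repeat at the start
    of the next.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```

With the defaults, a 10,000-character log file yields three chunks, each sharing 1,024 characters with its neighbor.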
2. store_chunks_as_vectors
Vectorize log chunks with AWS Bedrock embeddings and intelligent caching.
Parameters:
- use_cache (optional): Whether to use the embedding cache (default: true)
- clear_cache (optional): Clear the cache before starting (default: false)
Features:
- Extracts timeframes, class names, method names, error types
- Parallel processing for fast vectorization
- Persistent caching to avoid re-embedding
Example:
Use store_chunks_as_vectors to vectorize the logs
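The persistent cache works because embeddings are deterministic for a given chunk and model: a content hash can serve as the cache key, so unchanged chunks are never re-sent to Bedrock. This is a hedged sketch of that idea, not the server's actual cache implementation; `embed_fn`, `cache_key`, and `embed_with_cache` are illustrative names:

```python
import hashlib

def cache_key(chunk: str, model_id: str) -> str:
    # Key on both the chunk content and the model, so switching
    # embedding models invalidates old entries.
    return hashlib.sha256(f"{model_id}:{chunk}".encode()).hexdigest()

def embed_with_cache(chunks, embed_fn, model_id, cache):
    """Return one embedding per chunk, calling embed_fn only on cache misses."""
    results = []
    for chunk in chunks:
        key = cache_key(chunk, model_id)
        if key not in cache:
            cache[key] = embed_fn(chunk)  # e.g. a Bedrock embedding call
        results.append(cache[key])
    return results
```

On a second run over mostly unchanged logs, almost every lookup is a hit, which is where the fast re-indexing comes from.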
3. query_SFlogs
Query vectorized logs with semantic search and comprehensive analysis.
Parameters:
query(required): Natural language query
Features:
- Hybrid semantic + lexical search
- Automatic error clustering and deduplication
- Severity ranking and frequency analysis
- Metadata extraction (timeframes, classes, methods)
- AI-powered summarization
Examples:
Query logs: "What NullPointerExceptions occurred?"
Query logs: "Summarize all errors"
Query logs: "Show timeout issues in UserHandler"
Configuration
Environment Variables
| Variable | Description | Default |
|---|---|---|
| AWS_ACCESS_KEY_ID | AWS access key | Required |
| AWS_SECRET_ACCESS_KEY | AWS secret key | Required |
| AWS_REGION | AWS region | us-east-2 |
| AWS_CONNECT_TIMEOUT | Connection timeout (seconds) | 60 |
| AWS_READ_TIMEOUT | Read timeout (seconds) | 300 |
| BEDROCK_EMBED_MODEL_ID | Embedding model | amazon.titan-embed-text-v2:0 |
| BEDROCK_NOVA_MODEL_ID | Analysis model | amazon.nova-premier-v1:0 |
| LOG_FOLDER | Default log folder | ./logs |
| DEFAULT_CHUNK_SIZE | Default chunk size | 4096 |
| DEFAULT_OVERLAP | Default overlap | 1024 |
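A typical pattern for consuming these variables is to read them once at startup with their documented defaults. `config.py` handles this in the project; the sketch below is an assumed, simplified version showing only the non-secret settings:

```python
import os

def get_config() -> dict:
    """Read optional settings from the environment, falling back to
    the defaults documented in the table above."""
    return {
        "region": os.environ.get("AWS_REGION", "us-east-2"),
        "chunk_size": int(os.environ.get("DEFAULT_CHUNK_SIZE", "4096")),
        "overlap": int(os.environ.get("DEFAULT_OVERLAP", "1024")),
    }
```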
Architecture
log-analyzer-mcp/
├── server.py               # Main MCP server implementation
├── config.py               # Configuration management
├── utils/                  # Utility modules
│   ├── logging_utils.py    # Logging configuration
│   ├── file_utils.py       # File operations
│   ├── bedrock_utils.py    # AWS Bedrock integration
│   ├── chunking_utils.py   # Text chunking
│   └── error_extraction.py # Error pattern extraction
├── logs/                   # Log storage (created automatically)
├── requirements.txt        # Python dependencies
├── .env.example            # Environment template
└── README.md               # This file
How It Works
1. Log Processing Pipeline
Raw Logs → Chunking → Metadata Extraction → Vectorization → Storage
- Chunking: Split logs into overlapping chunks for better context preservation
- Metadata Extraction: Extract timeframes, class names, methods, error types
- Vectorization: Generate embeddings using AWS Bedrock
- Caching: Store embeddings for fast re-processing
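The metadata-extraction step can be pictured as a set of regular expressions run over each chunk. The project's real patterns live in `error_extraction.py` and are not shown here; the patterns below are illustrative assumptions for ISO-style timestamps, Java-style class names, and exception types:

```python
import re

def extract_metadata(chunk: str) -> dict:
    """Pull timeframes, class names, and error types out of a log chunk.

    Patterns are illustrative: a real extractor would handle more
    timestamp formats and language conventions.
    """
    return {
        "timestamps": re.findall(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}", chunk),
        "classes": re.findall(r"\b(?:[a-z]\w*\.)+[A-Z]\w*", chunk),
        "errors": re.findall(r"\b\w+(?:Exception|Error)\b", chunk),
    }
```

This structured metadata travels alongside each embedding, so queries can later be filtered or ranked by timeframe, class, or error type without re-reading the raw logs.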
2. Query Pipeline
Query → Embedding → Hybrid Search → Error Clustering → AI Analysis → Results
- Hybrid Search: Combine semantic similarity with lexical matching
- Error Clustering: Group similar errors using fingerprinting
- Ranking: Sort by severity and frequency
- AI Analysis: Generate comprehensive summaries with AWS Bedrock
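Fingerprinting works by normalizing away the volatile parts of an error line (timestamps, counters, memory addresses) so that repeated occurrences of the same underlying error collapse into one cluster with a frequency count. The server's actual fingerprinting rules aren't documented here; this is a minimal sketch of the idea:

```python
import re
from collections import Counter

def fingerprint(line: str) -> str:
    """Normalize volatile tokens so identical errors share one fingerprint."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)  # memory addresses
    line = re.sub(r"\d+", "<NUM>", line)             # ids, counts, durations
    return line.strip()

def cluster_errors(lines) -> Counter:
    """Group error lines by fingerprint; counts give each cluster's frequency."""
    return Counter(fingerprint(l) for l in lines)
```

Frequency counts from the clusters then feed the severity/frequency ranking described above.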
Performance
- Parallel Processing: Up to 5 concurrent embedding requests
- Intelligent Caching: 70-90% cache hit rate on repeated processing
- Adaptive Retrieval: Dynamic top-k based on query type
- Token Optimization: Smart budget management for AI analysis
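Bounded parallelism like "up to 5 concurrent embedding requests" maps naturally onto a thread pool with a fixed worker count. This sketch is an assumption about the approach, not the server's code; `embed_fn` stands in for whatever per-chunk embedding call is used:

```python
from concurrent.futures import ThreadPoolExecutor

def embed_parallel(chunks, embed_fn, max_workers=5):
    """Embed chunks with at most `max_workers` in-flight requests.

    Executor.map preserves input order, so results line up with chunks.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(embed_fn, chunks))
```

Capping the worker count keeps the server under typical provider rate limits while still overlapping network latency across requests.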
Troubleshooting
Common Issues
"No vector JSON found"
- Run
store_chunks_as_vectorsfirst to vectorize your logs
"Bedrock authentication failed"
- Verify your AWS credentials in
.env - Ensure your AWS account has Bedrock access enabled
"No chunks found"
- Check that log files exist in the configured folder
- Verify file extensions (.log, .txt) are correct
Logging
Logs are written to stderr for MCP compatibility. To debug:
python server.py 2> debug.log
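Keeping stdout clean matters because MCP servers speak JSON-RPC over stdout; any stray print would corrupt the protocol stream. A minimal sketch of stderr-only logging setup (the project's version is in `logging_utils.py` and may differ):

```python
import logging
import sys

def setup_logging(level=logging.INFO):
    """Send all log records to stderr, leaving stdout free for MCP traffic."""
    handler = logging.StreamHandler(sys.stderr)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
    )
    root = logging.getLogger()
    root.handlers = [handler]
    root.setLevel(level)
```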
Contributing
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
License
MIT License - see LICENSE file for details
Support
For issues and questions:
- GitHub Issues: Create an issue
- Documentation: Wiki
Roadmap
- [ ] Support for additional embedding models
- [ ] Real-time log streaming
- [ ] Web UI for visualization
- [ ] Multi-language support
- [ ] Enhanced error pattern detection
- [ ] Integration with monitoring tools
Acknowledgments
Built with:
- Model Context Protocol - MCP specification
- AWS Bedrock - AI/ML capabilities
- Anthropic Claude - AI analysis