GitHub Analytics MCP Server
Enables querying and analysis of public GitHub repositories for statistics, contributor data, and commit history. It provides both a RESTful API and an MCP interface for seamless integration with AI agents.
GitHub Analytics MCP Server: Architecture Reference Project
Query, analyze, and visualize any public GitHub repository from the command line, browser, or AI agent.
Overview
GitHub Analytics MCP Server is a production-ready microservice that turns the GitHub API into a simple, self-hosted analytics endpoint. Point it at any public repository and instantly get structured data on stars, forks, contributors, commit history, and language distribution.
It exposes two interfaces: a RESTful API (FastAPI with auto-generated Swagger docs) for direct HTTP access, and a Model Context Protocol (MCP) server that lets AI agents like Claude Desktop query GitHub data as a native tool.
The entire stack (API gateway, MCP server, container orchestration, infrastructure provisioning, and CI/CD) is included and deployable with a single command.
This project also serves as an architecture reference implementation: every layer is accompanied by design-decision documentation explaining why it is structured this way, not just what it does.
Features
- Query any public GitHub repository by owner/name
- Repository statistics: stars, forks, issues, watchers
- Contributor analysis: top contributors with commit counts
- Commit history: recent commits with author and message details
- RESTful API with auto-generated OpenAPI/Swagger docs
- MCP protocol support for AI agent integration (Claude Desktop, etc.)
- Production-ready with Docker multi-stage builds and Docker Compose
- Kubernetes deployment with Deployments, Services, Ingress, and HPA
- Auto-scaling: Horizontal Pod Autoscaler (2-5 replicas, 70% CPU target)
- Full CI/CD pipeline: lint, test, build, and deploy via GitHub Actions
- Infrastructure as Code: Terraform provisions the entire K8s stack
Why This Project?
| Concern | This Project | Traditional Approach |
|---|---|---|
| Setup | `docker-compose up` or `make k8s-deploy` | Manual server provisioning |
| Scalability | Auto-scaling with K8s HPA (2-5 replicas) | Manual capacity planning |
| Infrastructure | `terraform apply` (one command) | Multiple manual steps |
| High Availability | Multi-replica with health checks | Complex setup required |
| Monitoring | Liveness & readiness probes built in | Separate monitoring stack |
| Deployment | Automated CI/CD on every push | Manual release process |
| Portability | Runs anywhere Docker/K8s runs | Environment-dependent |
| API Docs | Auto-generated OpenAPI (Swagger UI) | Manual documentation |
This is not just a tool; it is a reference implementation designed for studying architecture patterns. Every layer includes design-decision documentation explaining the reasoning behind its structure.
Architecture
```mermaid
graph TB
    subgraph "User Interface"
        A[Web Browser / CLI]
    end
    subgraph "API Layer"
        B[FastAPI Gateway<br/>Port 8080]
        C[MCP Server<br/>stdio mode]
    end
    subgraph "Container Orchestration"
        D[Kubernetes Cluster]
        E[Docker Containers]
        F[Auto-scaling HPA]
    end
    subgraph "External Services"
        G[GitHub API]
    end
    subgraph "Infrastructure"
        H[Terraform IaC]
        I[CI/CD Pipeline]
    end
    A -->|HTTP/REST| B
    A -->|MCP Protocol| C
    B -->|GitHub Token| G
    C -->|GitHub Token| G
    B -.->|Deployed in| D
    C -.->|Deployed in| D
    D -->|Manages| E
    D -->|Auto-scales| F
    H -.->|Provisions| D
    I -.->|Deploys to| D
```
Design Philosophy
One domain, two interfaces, shared core
GitHubClient is the single business-logic layer. The MCP server and FastAPI gateway are both thin adapters: they translate between their respective protocols and the shared core. Neither contains business logic, and neither duplicates the other.
Why two interfaces: MCP serves AI agents over stdio; REST serves humans and programs over HTTP. Two protocols, two adapters, zero duplicated logic.
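The pattern can be sketched in a few lines. This is an illustrative reduction, not the project's actual code; the real core lives in `src/github_client.py` and the adapters in `src/server.py` and `api/`, and the names below are simplified.

```python
# Minimal sketch of "one core, two adapters". The real GitHubClient calls
# the GitHub API via PyGithub; here it returns a stub so the shape is clear.

class GitHubClient:
    """Single business-logic layer shared by both interfaces."""

    def repo_stats(self, owner: str, name: str) -> dict:
        # Real implementation: fetch and normalize data from the GitHub API.
        return {"repository": f"{owner}/{name}", "stars": 0}


# REST adapter: translates an HTTP request into a core call and returns JSON.
def rest_handler(client: GitHubClient, owner: str, name: str) -> dict:
    return client.repo_stats(owner, name)


# MCP adapter: translates a tool invocation into the same core call and
# renders the result as text for the agent.
def mcp_tool(client: GitHubClient, arguments: dict) -> str:
    stats = client.repo_stats(arguments["owner"], arguments["name"])
    return f"{stats['repository']}: {stats['stars']} stars"
```

Both adapters stay protocol-only: adding a new endpoint or tool means adding one core method and two thin wrappers.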
Error handling strategy
Custom exception hierarchy (RepositoryNotFoundError, AuthenticationError, RateLimitError) translates GitHub HTTP status codes into semantic domain errors. The MCP server converts these into user-friendly text messages; the FastAPI gateway converts them into the corresponding HTTP status codes (404/401/429/502). Callers never need to know how the GitHub API works internally.
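A sketch of this translation layer, assuming the class names from the README; the exact signatures and the 403-vs-429 handling in the repository may differ:

```python
# Illustrative error-translation layer: GitHub status codes become semantic
# domain errors, which each interface then renders in its own vocabulary.

class GitHubClientError(Exception):
    """Base class for semantic domain errors."""

class RepositoryNotFoundError(GitHubClientError): ...
class AuthenticationError(GitHubClientError): ...
class RateLimitError(GitHubClientError): ...

# GitHub HTTP status -> domain error (raised inside GitHubClient).
_STATUS_TO_ERROR = {
    404: RepositoryNotFoundError,
    401: AuthenticationError,
    403: RateLimitError,  # GitHub commonly signals rate limiting with 403
}

def raise_for_status(status_code: int, message: str) -> None:
    """Translate a raw GitHub status code into a domain exception."""
    exc = _STATUS_TO_ERROR.get(status_code)
    if exc is not None:
        raise exc(message)
    if status_code >= 400:
        raise GitHubClientError(message)  # anything else: upstream failure

# Domain error -> HTTP status returned by the FastAPI gateway.
ERROR_TO_HTTP = {
    RepositoryNotFoundError: 404,
    AuthenticationError: 401,
    RateLimitError: 429,
    GitHubClientError: 502,
}
```

The MCP server catches the same exceptions and formats them as plain-text messages instead of status codes.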
Infrastructure: three layers for three use cases
- Docker Compose: local development. One command (`docker-compose up`) starts everything.
- Kubernetes manifests (`k8s/`): direct `kubectl apply`. Good for learning K8s and quick testing.
- Terraform (`terraform/`): state management, drift detection, multi-environment support. For production.
All three coexist intentionally. Each serves a different stage of the deployment lifecycle.
Why these numbers
- HPA 2-5 replicas: 2 guarantees availability (one pod can fail without downtime); 5 is a cost ceiling.
- 70% CPU threshold: leaves 30% buffer so existing pods absorb traffic spikes while new pods start (10-30s scheduling window).
- Resource limits (100m/500m CPU, 128Mi/256Mi memory): FastAPI + uvicorn idles at ~30m CPU / ~50MB RAM. Limits prevent a runaway process from starving other pods.
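These numbers map directly onto the autoscaler manifest. The fragment below is an illustrative sketch consistent with the figures above; the authoritative manifest is `k8s/hpa-api.yaml`, and the metadata names here are assumptions.

```yaml
# Sketch of an autoscaling/v2 HPA matching the numbers above
# (2-5 replicas, 70% average CPU utilization).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: github-analytics-api   # assumed name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: github-analytics-api # assumed name
  minReplicas: 2               # one pod can fail without downtime
  maxReplicas: 5               # cost ceiling
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # 30% headroom while new pods schedule
```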
Deliberate omissions
- No database: this is a stateless proxy. Every request fetches fresh data from GitHub. Adding a DB would obscure the core architecture pattern.
- Redis is optional: available via `docker-compose --profile with-cache up` to demonstrate Docker Compose profiles, but not wired into the application.
- No auth middleware: authentication is orthogonal to the architecture being demonstrated. Including it would distract from the layered design.
Architecture Documentation
For deeper dives into specific decisions:
- ARCHITECTURE.md โ full architecture overview with layer diagram
- Architecture Decision Records (ADRs):
Quick Start
Option 1: Docker Compose (Fastest)
```bash
# 1. Clone and configure
git clone https://github.com/Pyroxyl/github-analytics-mcp.git
cd github-analytics-mcp
cp .env.example .env
# Edit .env and add your GITHUB_TOKEN

# 2. Start services
docker-compose up -d

# 3. Test the API
curl http://localhost:8080/health
curl http://localhost:8080/api/v1/repo/facebook/react/stats | jq
```
Option 2: Kubernetes (Production)
```bash
# 1. Build and deploy
make build
make k8s-deploy

# 2. Access the API (LoadBalancer on port 80)
curl http://localhost/health
curl http://localhost/api/v1/repo/facebook/react/stats | jq
```
Option 3: Terraform (Full IaC)
```bash
cd terraform
cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars
terraform init
terraform plan
terraform apply
```
Usage Examples
Repository Statistics
```bash
curl "http://localhost/api/v1/repo/facebook/react/stats" | jq
```

```json
{
  "repository": "facebook/react",
  "stars": 242591,
  "forks": 50472,
  "open_issues": 1138,
  "watchers": 6690,
  "description": "The library for web and native user interfaces.",
  "language": "JavaScript"
}
```
Recent Commits
```bash
curl "http://localhost/api/v1/repo/anthropics/anthropic-sdk-python/commits?limit=3" | jq
```
Top Contributors
```bash
curl "http://localhost/api/v1/repo/kubernetes/kubernetes/contributors?top_n=5" | jq
```
Language Distribution
```bash
curl "http://localhost/api/v1/repo/microsoft/vscode/languages" | jq
```

```json
{
  "repository": "microsoft/vscode",
  "languages": {
    "TypeScript": 95.54,
    "CSS": 1.49,
    "JavaScript": 1.0,
    "Rust": 0.61
  }
}
```
Compare Projects
```bash
# Compare stars across projects
curl -s "http://localhost/api/v1/repo/facebook/react/stats" | jq '.stars'
curl -s "http://localhost/api/v1/repo/vuejs/vue/stats" | jq '.stars'
```
Interactive API Documentation
Live API Docs: http://localhost/docs (or http://localhost:8080/docs for Docker Compose)
FastAPI auto-generates an interactive Swagger UI where you can:
- Browse all available endpoints
- Test APIs directly in your browser with "Try it out"
- View request/response schemas
- See example values for all parameters
- Execute real API calls and see live responses
MCP Client Configuration
Add to your MCP client configuration (e.g., Claude Desktop):
```json
{
  "mcpServers": {
    "github-analytics": {
      "command": "python",
      "args": ["-m", "src.server"],
      "cwd": "/path/to/github-analytics-mcp",
      "env": {
        "GITHUB_TOKEN": "your_token_here"
      }
    }
  }
}
```
Or using Docker:
```json
{
  "mcpServers": {
    "github-analytics": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "--env-file", ".env", "github-analytics-mcp"],
      "cwd": "/path/to/github-analytics-mcp"
    }
  }
}
```
Tech Stack
| Layer | Technology |
|---|---|
| Backend | Python 3.11+, FastAPI, PyGithub |
| Protocol | Model Context Protocol (MCP) |
| Containerization | Docker (multi-stage builds), Docker Compose |
| Orchestration | Kubernetes: Deployments, Services, HPA, Ingress |
| Infrastructure | Terraform |
| CI/CD | GitHub Actions (lint → test → build → deploy) |
DevOps Highlights
- Multi-stage Docker builds for minimal image size
- Kubernetes auto-scaling (2-5 replicas based on CPU)
- Liveness & readiness probes for self-healing
- Rolling updates with zero downtime
- Automated lint, test, build, and deploy pipeline
Project Structure
```
github-analytics-mcp/
├── src/                     # MCP Server
│   ├── server.py            # MCP protocol entry point
│   ├── github_client.py     # GitHub API client wrapper
│   └── tools/               # MCP tool implementations
│       ├── repo_stats.py    # get_repo_stats
│       ├── commits.py       # list_recent_commits
│       ├── contributors.py  # analyze_contributors
│       └── languages.py     # get_language_breakdown
├── api/                     # FastAPI Gateway
│   ├── main.py              # App entry point
│   ├── routes.py            # API route definitions
│   ├── models.py            # Pydantic models
│   └── dependencies.py      # Dependency injection
├── k8s/                     # Kubernetes manifests
│   ├── namespace.yaml
│   ├── configmap.yaml
│   ├── secret.yaml
│   ├── deployment-api.yaml  # API gateway (2 replicas)
│   ├── deployment-mcp.yaml  # MCP server
│   ├── service-api.yaml     # LoadBalancer service
│   ├── hpa-api.yaml         # Horizontal Pod Autoscaler
│   ├── ingress.yaml
│   └── deploy.sh            # Deployment script
├── terraform/               # Infrastructure as Code
│   ├── main.tf
│   ├── kubernetes.tf
│   ├── providers.tf
│   ├── variables.tf
│   └── outputs.tf
├── .github/workflows/       # CI/CD pipelines
│   ├── ci.yml               # Lint & test
│   ├── docker-build.yml     # Build & push image
│   └── cd.yml               # Deploy to K8s
├── tests/                   # Unit tests
├── Dockerfile               # Multi-stage container build
├── docker-compose.yml       # Local multi-service setup
├── Makefile                 # Convenience commands
├── requirements.txt
└── .env.example             # Environment template
```
Development
Prerequisites
- Python 3.11+
- Docker & Docker Compose
- kubectl (for Kubernetes deployment)
- Terraform (for IaC deployment)
- GitHub Personal Access Token
Local Development
```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Run the MCP server
python -m src.server

# Run the API gateway
uvicorn api.main:app --reload --port 8080

# Run tests
pytest tests/
```
Make Commands
| Command | Description |
|---|---|
| `make build` | Build Docker image |
| `make run` | Start with Docker Compose |
| `make stop` | Stop all containers |
| `make logs` | View container logs |
| `make k8s-deploy` | Deploy to Kubernetes |
| `make k8s-status` | Check K8s pod/service status |
| `make clean` | Remove containers and images |
| `make help` | Show all available commands |
CI/CD Pipeline
```
Push/PR → [CI] Lint + Test → [Docker Build] → ghcr.io → [CD] → Kubernetes
```
- CI: runs `ruff` lint and `pytest` on every push/PR (Python 3.11 & 3.12)
- Docker Build: builds and pushes images to GitHub Container Registry
- CD: deploys to Kubernetes via Terraform after a successful build
See .github/workflows/README.md for details.
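For orientation, a condensed sketch of what the CI stage looks like; this is an assumed approximation of the workflow structure, not the repository's actual `ci.yml`, and step details there take precedence.

```yaml
# Hypothetical condensed ci.yml: lint and test across both Python versions.
name: ci
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.11", "3.12"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -r requirements.txt ruff pytest
      - run: ruff check .
      - run: pytest tests/
```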
Production Deployment
High Availability
- 2+ API gateway replicas with rolling updates
- Automatic pod restart on failure via liveness probes
- Readiness probes prevent traffic to unhealthy pods
Auto-Scaling
- HPA scales from 2 to 5 replicas
- Target: 70% CPU utilization
- Handles traffic spikes automatically
Security
- GitHub tokens stored as Kubernetes Secrets
- No credentials in source code or git history
- Ingress-ready for TLS termination
Use Cases
- Project Evaluation: quickly assess GitHub projects before adopting them
- Trend Research: analyze language trends across popular repositories
- AI Integration: enable AI agents to access GitHub data via MCP
- Metrics Dashboards: build custom dashboards with real-time GitHub stats
- Open Source Research: study contributor patterns and project health
Roadmap
- [ ] Redis caching layer for API responses
- [ ] Prometheus metrics & Grafana dashboards
- [ ] Rate limiting & API key authentication
- [ ] Additional endpoints (pull requests, releases, workflows)
- [ ] Multi-cloud examples (AWS EKS, GCP GKE, Azure AKS)
Contributing
See CONTRIBUTING.md for development workflow and guidelines.
License
This project is licensed under the MIT License; see the LICENSE file for details.
Acknowledgments
Built with Model Context Protocol by Anthropic, FastAPI, and PyGithub.
If you find this project useful, please star it on GitHub!