Telegram MCP Server
A Model Context Protocol (MCP) server that enables AI assistants (like Kilo Code) to ask you questions via Telegram and wait for your responses. This creates a "human-in-the-loop" workflow where the AI can request decisions, approvals, or specific input during long-running tasks.
Features
- 🤖 Interactive AI Workflow: AI can pause and ask you questions via Telegram
- 📱 Button Support: Present multiple-choice options as clickable buttons
- ⏱️ Long Polling: Waits up to 2 minutes for your response
- 🔒 Secure: Uses environment variables for credentials
- 🎯 Simple Integration: Works with any MCP-compatible AI assistant
- 🐳 Docker Support: Run natively or in a container
Prerequisites
- Python 3.10+ (for native installation)
- OR Docker and Docker Compose (for containerized installation)
- A Telegram account
- A Telegram Bot Token (from @BotFather)
Installation Methods
You can run this MCP server in two ways:
- Native Python Installation - Run directly on your system
- Docker Installation - Run in a container (recommended for production)
Native Installation
1. Create a Telegram Bot
- Open Telegram and search for @BotFather
- Send /newbot and follow the prompts
- Name your bot (e.g., "MyDevBot")
- Copy the HTTP API Token provided
- Important: Send /start to your new bot so it can message you
2. Find Your Telegram User ID (Optional)
- Search for @userinfobot on Telegram
- Send it any message
- Copy your User ID
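If you prefer not to message a third-party bot, you can also read your ID out of your own bot's getUpdates response (GET https://api.telegram.org/bot<TOKEN>/getUpdates) after sending the bot any message. A minimal sketch of that parsing, run here against a trimmed sample payload rather than a live call (the extract_user_ids helper is illustrative, not part of the project):

```python
import json

def extract_user_ids(updates_json: str) -> list[int]:
    """Pull the sender IDs out of a Telegram getUpdates response body."""
    data = json.loads(updates_json)
    ids = []
    for update in data.get("result", []):
        sender = update.get("message", {}).get("from", {})
        if "id" in sender:
            ids.append(sender["id"])
    return ids

# Trimmed sample of what getUpdates returns after you message your bot:
sample = (
    '{"ok": true, "result": [{"update_id": 1,'
    ' "message": {"from": {"id": 123456789, "is_bot": false}, "text": "/start"}}]}'
)
print(extract_user_ids(sample))  # [123456789]
```

The first ID printed is the value to put in TELEGRAM_USER_ID.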
3. Clone and Setup
# Clone the repository
git clone https://github.com/yourusername/telegram-mcp-server.git
cd telegram-mcp-server
# Create virtual environment
python3 -m venv venv
# Activate virtual environment
source venv/bin/activate # On Linux/Mac
# OR
venv\Scripts\activate # On Windows
# Install dependencies
pip install -r requirements.txt
4. Configuration
Create a .env file from the sample:
cp .env.sample .env
Edit .env and add your credentials:
TELEGRAM_BOT_TOKEN=your_bot_token_here
TELEGRAM_USER_ID=your_user_id_here # Optional - bot will auto-detect if not set
⚠️ Security Note: The .env file is gitignored to protect your credentials.
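The server loads these values with python-dotenv at startup. As a rough illustration of what that loading does, here is a minimal stdlib-only sketch (the parse_env helper is hypothetical and simplified; the real library handles quoting and more):

```python
def parse_env(text: str) -> dict:
    """Parse KEY=VALUE lines the way a minimal .env loader would."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines, comments, and lines without an '='
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Drop any trailing inline comment from the value
        env[key.strip()] = value.split("#")[0].strip()
    return env

sample = "TELEGRAM_BOT_TOKEN=abc123\nTELEGRAM_USER_ID=123456789  # optional\n"
print(parse_env(sample))  # {'TELEGRAM_BOT_TOKEN': 'abc123', 'TELEGRAM_USER_ID': '123456789'}
```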
5. Test the Server Locally
# Activate virtual environment
source venv/bin/activate
# Run the server
python telegram_server.py
You should see:
╭──────────────────────────────────────╮
│          FastMCP  2.14.2             │
│  🖥️  Server: Telegram Human Loop      │
╰──────────────────────────────────────╯
Press Ctrl+C to stop the test.
Docker Installation
1. Configure Environment
cp .env.sample .env
# Edit .env with your credentials
2. Build and Run
# Build the Docker image
docker-compose build
# Start the container in detached mode
docker-compose up -d
3. View Logs
docker-compose logs -f telegram-mcp
4. Stop the Server
docker-compose down
Docker Commands Reference
Building:
# Build the image
docker-compose build
# Build without cache (force rebuild)
docker-compose build --no-cache
Running:
# Start in detached mode (background)
docker-compose up -d
# Start in foreground (see logs directly)
docker-compose up
# Start and rebuild if needed
docker-compose up -d --build
Monitoring:
# View logs
docker-compose logs -f telegram-mcp
# View last 100 lines of logs
docker-compose logs --tail=100 telegram-mcp
# Check container status
docker-compose ps
# Execute commands inside the container
docker-compose exec telegram-mcp python --version
Stopping and Cleaning:
# Stop the container
docker-compose stop
# Stop and remove containers
docker-compose down
# Stop, remove containers, and remove volumes
docker-compose down -v
# Remove all (containers, networks, images)
docker-compose down --rmi all
Restart:
# Restart the container
docker-compose restart
# Rebuild and restart from scratch
docker-compose down
docker-compose build --no-cache
docker-compose up -d
Development:
# Open a shell in the running container
docker-compose exec telegram-mcp /bin/bash
# Test the server locally (without Docker)
source venv/bin/activate && python telegram_server.py
Connecting to MCP Clients
For Kilo Code (or similar MCP clients)
Method 1: Using the Startup Script (Recommended)
1. Open your MCP settings file:
   - Location: ~/.config/Code/User/globalStorage/rooveterinaryinc.roo-cline/settings/mcpSettings.json
   - Or use your client's UI: Settings → MCP Servers → Edit Configuration
2. Add this configuration (replace /absolute/path/to/telegram-mcp-server with your actual installation path):
{
"mcpServers": {
"telegram": {
"command": "bash",
"args": ["/absolute/path/to/telegram-mcp-server/run.sh"]
}
}
}
Example: If you cloned to /home/user/projects/telegram-mcp-server, use:
{
"mcpServers": {
"telegram": {
"command": "bash",
"args": ["/home/user/projects/telegram-mcp-server/run.sh"]
}
}
}
3. Restart your MCP client or reload the MCP servers
Note: An example configuration is provided in mcp-config.example.json for reference.
Method 2: Direct Python Execution
{
"mcpServers": {
"telegram": {
"command": "/absolute/path/to/telegram-mcp-server/venv/bin/python",
"args": ["/absolute/path/to/telegram-mcp-server/telegram_server.py"]
}
}
}
Method 3: Using Docker
Option A: Docker Compose Exec
{
"mcpServers": {
"telegram": {
"command": "docker-compose",
"args": [
"-f",
"/absolute/path/to/telegram-mcp-server/docker-compose.yml",
"exec",
"-T",
"telegram-mcp",
"python",
"telegram_server.py"
],
"cwd": "/absolute/path/to/telegram-mcp-server"
}
}
}
Option B: Docker Run
{
"mcpServers": {
"telegram": {
"command": "docker",
"args": [
"run",
"--rm",
"-i",
"--env-file",
"/absolute/path/to/telegram-mcp-server/.env",
"telegram-mcp-server",
"python",
"telegram_server.py"
]
}
}
}
Available Tools
Once connected, your AI assistant will have access to these tools:
- ask_human(question, options, wait, timeout_seconds, allow_custom) - Send questions to Telegram
  - wait=True (default): Blocks until you respond or timeout
  - wait=False: Sends the question and returns immediately (non-blocking mode)
  - options: Optional list of button choices
  - allow_custom=True (default): Adds a "Custom answer" button when options are provided
  - Works in both blocking and non-blocking modes
- get_telegram_response(mark_as_read) - Retrieve your latest Telegram message
  - Use after ask_human(wait=False) to get your response
  - mark_as_read=True (default): Won't retrieve the same message twice
- send_telegram_notification(message) - Send one-way status updates
  - For progress reports, completion notifications, or status updates
  - Doesn't expect a response
  - Perfect for keeping you informed during long-running tasks
- list_telegram_messages(limit) - View recent conversation history
  - Shows the last 5-20 messages for context
  - Useful for checking whether you've already replied
Usage Examples
Example 1: Simple Question
You say to your AI:
"I'm going to grab coffee. If you need to know which database to use, ask me via Telegram."
AI calls:
ask_human(question="Should I use PostgreSQL or MongoDB for this project?")
What happens:
- Your phone buzzes with a Telegram message
- You reply: "PostgreSQL"
- AI receives "PostgreSQL" and continues
Example 2: Multiple Choice with Buttons
AI calls:
ask_human(
question="How should I structure the authentication?",
options=["JWT", "Session Cookies", "OAuth2", "Skip for now"]
)
What happens:
- You receive a Telegram message with 4 clickable buttons
- You tap "JWT"
- AI receives "JWT" instantly
Example 3: Approval Workflow
You say to your AI:
"Refactor the entire codebase, but ask me before making any breaking changes."
AI calls:
ask_human(
question="I want to rename `getUserData()` to `fetchUser()`. This will break 23 files. Proceed?",
options=["Yes, proceed", "No, skip this", "Show me the files first"]
)
Example 4: Non-Blocking Mode for Complex Questions
AI asks without waiting:
ask_human(
question="Please review this 500-line refactor and provide detailed feedback",
wait=False
)
What happens:
- You receive the question on Telegram
- You take your time to review (no timeout)
- When ready, you tell the AI: "I've answered on Telegram"
- AI retrieves your answer with get_telegram_response()
Example 5: Progress Notifications
AI sends status updates:
send_telegram_notification("🚀 Starting database migration...")
send_telegram_notification("✅ Step 1/5 complete: Schema created")
send_telegram_notification("🎉 Migration complete! All 1,247 records migrated successfully.")
How It Works
1. AI calls ask_human() with a question and optional button choices
2. Server sends a Telegram message to your configured chat
3. A long-polling loop checks Telegram every 2 seconds for your response
4. You respond via text or button click
5. Server returns your answer to the AI
6. AI continues with your input
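The polling loop in step 3 can be sketched as follows. This is a simplified synchronous version (the real server is async and talks to the Telegram API); the fetch_reply callable is a hypothetical stand-in for one getUpdates check:

```python
import time

def wait_for_answer(fetch_reply, timeout_seconds=120, poll_interval=2.0):
    """Poll until fetch_reply() returns a non-None answer or the timeout lapses."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        answer = fetch_reply()
        if answer is not None:
            return answer
        time.sleep(poll_interval)  # 2 seconds in the real server
    return "Timeout: User did not respond in time..."

# Stub that "answers" on the third poll, as if you replied on Telegram:
replies = iter([None, None, "PostgreSQL"])
print(wait_for_answer(lambda: next(replies), timeout_seconds=5, poll_interval=0.01))
# PostgreSQL
```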
Timeout Behavior
- Default timeout: 120 seconds (2 minutes)
- Configurable via the timeout_seconds parameter in ask_human()
- If you don't respond in time, the AI receives: "Timeout: User did not respond in time..."
Non-Blocking Mode
For complex questions that require extended thinking time, use non-blocking mode to avoid timeouts:
1. AI asks without waiting: ask_human("Please review this architecture and provide feedback", wait=False)
2. You receive the question and can take as long as needed to think and respond
3. When ready, tell the AI you've replied: "I've answered on Telegram"
4. AI retrieves your answer: answer = get_telegram_response()
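The mark_as_read behavior of get_telegram_response() can be sketched with a small in-memory store. This is an illustrative model of the semantics described above (the MessageStore class is hypothetical, not the server's implementation):

```python
class MessageStore:
    """Sketch of get_telegram_response's mark_as_read semantics."""

    def __init__(self):
        self._messages = []
        self._last_read = -1  # index of the last message handed out

    def receive(self, text: str):
        """A message arrives from Telegram."""
        self._messages.append(text)

    def latest(self, mark_as_read: bool = True):
        """Return the newest unread message, or None if nothing new arrived."""
        idx = len(self._messages) - 1
        if idx <= self._last_read:
            return None
        msg = self._messages[idx]
        if mark_as_read:
            self._last_read = idx  # won't hand out the same message twice
        return msg

store = MessageStore()
store.receive("Looks good, ship it")
print(store.latest())  # Looks good, ship it
print(store.latest())  # None  (already read)
```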
Custom Answers with Buttons
When you provide button options, the AI automatically adds a "✏️ Custom answer" button (unless allow_custom=False). This lets you:
- Click a button for quick selection
- OR type your own custom response
Example:
ask_human(
"Which database?",
options=["PostgreSQL", "MongoDB", "MySQL"],
wait=False
)
You'll see 4 buttons:
- PostgreSQL
- MongoDB
- MySQL
- ✏️ Custom answer (type below)
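Buttons like these are expressed as Telegram inline_keyboard markup (one row of buttons per option). A sketch of how the markup could be built; the build_keyboard helper and the "__custom__" callback sentinel are assumptions for illustration, not the server's actual code:

```python
def build_keyboard(options, allow_custom=True):
    """Build Telegram InlineKeyboardMarkup: one button per row."""
    rows = [[{"text": opt, "callback_data": opt}] for opt in options]
    if allow_custom:
        # Extra row that invites a free-text reply instead of a button tap
        rows.append([{"text": "✏️ Custom answer (type below)",
                      "callback_data": "__custom__"}])
    return {"inline_keyboard": rows}

markup = build_keyboard(["PostgreSQL", "MongoDB", "MySQL"])
print(len(markup["inline_keyboard"]))  # 4  (3 options + custom-answer row)
```

The returned dict is what would be passed as reply_markup to sendMessage.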
Best Practices
Use Non-Blocking Mode (wait=False) when:
- Questions require code review or deep analysis
- You might be away from your device
- The decision requires research or consultation
- You need more than 2 minutes to respond
Use Blocking Mode (wait=True) when:
- Questions are simple yes/no decisions
- You're actively working with the AI
- Quick responses are expected
- Using button options for multiple choice
Use Notifications (send_telegram_notification) for:
- Progress updates during long-running tasks
- Task completion notifications
- Error or warning alerts
- Status updates that don't require a response
- Keeping yourself informed while away from the computer
Project Structure
telegram-mcp-server/
├── telegram_server.py        # Main MCP server code
├── .env                      # Your credentials (gitignored)
├── .env.sample               # Template for environment variables
├── .gitignore                # Protects secrets
├── requirements.txt          # Python dependencies
├── run.sh                    # Startup script (executable)
├── mcp-config.example.json   # Example MCP configuration (customize for your setup)
├── Dockerfile                # Docker image definition
├── docker-compose.yml        # Docker Compose configuration
├── .dockerignore             # Docker build exclusions
└── README.md                 # This file
Troubleshooting
"Command 'python' not found"
Solution: Use python3 instead, or install the symlink:
sudo apt install python-is-python3
The run.sh script already uses the correct Python from the virtual environment.
"Error: Could not find a Telegram Chat ID"
Solution: Make sure you've sent /start to your bot at least once.
"Permission denied: ./run.sh"
Solution: Make the script executable:
chmod +x run.sh
"Timeout: User did not respond in time"
Solutions:
- Respond faster (within 2 minutes)
- Increase the timeout in the ask_human() call
- Use non-blocking mode (wait=False) for complex questions
MCP Server Not Showing in Client
Solutions:
- Check the MCP settings file path is correct
- Verify the absolute path in your configuration
- Restart your MCP client completely
- Check client logs for connection errors
Docker Container Won't Start
Check logs:
docker-compose logs telegram-mcp
Environment Variables Not Loading
Ensure .env file exists and is properly formatted:
cat .env
Permission Issues with Docker
If you encounter permission issues, ensure the .env file is readable:
chmod 644 .env
Advanced Configuration
Changing the Default Timeout
You can customize the timeout per question:
# Short timeout for quick questions
ask_human("Proceed?", options=["Yes", "No"], timeout_seconds=30)
# Longer timeout for thoughtful questions
ask_human("Which approach?", timeout_seconds=300) # 5 minutes
Restricting to Specific User
The server uses TELEGRAM_USER_ID from .env. Only messages from this user ID will be accepted.
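The check described above amounts to comparing each incoming update's sender ID against the configured value. An illustrative sketch (the is_authorized helper is hypothetical, not the server's actual code):

```python
def is_authorized(update: dict, allowed_user_id=None) -> bool:
    """Accept a Telegram update only if it comes from the configured user.

    When no TELEGRAM_USER_ID is configured, the server auto-detects the
    chat instead, so here we allow any sender in that case.
    """
    if allowed_user_id is None:
        return True
    sender = update.get("message", {}).get("from", {})
    return sender.get("id") == allowed_user_id

update = {"message": {"from": {"id": 123456789}, "text": "yes"}}
print(is_authorized(update, 123456789))  # True
print(is_authorized(update, 987654321))  # False  (someone else's message)
```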
Using Multiple Bots
Create separate directories with different .env files and register each as a different MCP server:
{
"mcpServers": {
"telegram-work": {
"command": "bash",
"args": ["/path/to/work-bot/run.sh"]
},
"telegram-personal": {
"command": "bash",
"args": ["/path/to/personal-bot/run.sh"]
}
}
}
Security Best Practices
- ✅ Never commit .env - Already in .gitignore
- ✅ Use TELEGRAM_USER_ID - Prevents unauthorized users from controlling your AI
- ✅ Regenerate tokens if accidentally exposed
- ✅ Use private Telegram bots - Don't share your bot with others
- ✅ Use Docker secrets for sensitive data in production
- ✅ Regularly update the base Python image for security patches
Production Deployment
For production deployment:
- Remove development volume mounts from docker-compose.yml
- Use environment variables or secrets management instead of the .env file
- Consider using a process manager or orchestration tool (Kubernetes, Docker Swarm)
- Set up proper logging and monitoring
- Configure restart policies appropriately
Example production docker-compose.yml:
version: '3.8'

services:
  telegram-mcp:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: telegram-mcp-server
    environment:
      - TELEGRAM_BOT_TOKEN=${TELEGRAM_BOT_TOKEN}
      - TELEGRAM_USER_ID=${TELEGRAM_USER_ID}
      - PYTHONUNBUFFERED=1
    restart: always
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
Dependencies
- fastmcp (2.14.2+) - MCP server framework
- httpx (0.28.1+) - Async HTTP client for Telegram API
- python-dotenv (1.2.1+) - Environment variable management
See requirements.txt for full list.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
MIT License - Use freely!
Support
For issues with:
- MCP Protocol: Check Model Context Protocol docs
- FastMCP: Visit FastMCP documentation
- Telegram Bot API: See Telegram Bot API docs
Changelog
v1.0.0 (2026-01-02)
- Initial release
- Support for blocking and non-blocking question modes
- Button support for multiple choice questions
- Progress notification system
- Docker support
- Comprehensive documentation