
# MCP Task

Async MCP server for running long-running AI tasks with real-time progress monitoring, built on @just-every/task. Start, monitor, and manage complex AI workflows across multiple models.
## Quick Start

### 1. Create or use an environment file

Option A: Create a new `.llm.env` file in your home directory:

```bash
# Download example env file
curl -o ~/.llm.env https://raw.githubusercontent.com/just-every/mcp-task/main/.env.example

# Edit with your API keys
nano ~/.llm.env
```

Option B: Use an existing `.env` file (must use an absolute path):

```
# Example: /Users/yourname/projects/myproject/.env
# Example: /home/yourname/workspace/.env
```
### 2. Install

#### Claude Code

```bash
# Using ~/.llm.env
claude mcp add task -s user -e ENV_FILE=$HOME/.llm.env -- npx -y @just-every/mcp-task

# Using an existing .env file (absolute path required)
claude mcp add task -s user -e ENV_FILE=/absolute/path/to/your/.env -- npx -y @just-every/mcp-task

# For debugging, check that ENV_FILE is being passed correctly:
claude mcp list
```
#### Other MCP Clients

Add to your MCP configuration:

```json
{
  "mcpServers": {
    "task": {
      "command": "npx",
      "args": ["-y", "@just-every/mcp-task"],
      "env": {
        "ENV_FILE": "/path/to/.llm.env"
      }
    }
  }
}
```
## Available Tools

### run_task

Start a long-running AI task asynchronously. Returns a task ID immediately.

Parameters:

- `task` (required): The task prompt - what to perform
- `model` (optional): Model class or specific model name
- `context` (optional): Background context for the task
- `output` (optional): The desired output/success state

Returns: Task ID for monitoring progress
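For illustration, a `run_task` call that uses every parameter. The `callTool` helper follows the same convention as the Example Workflow below; the prompt values are illustrative:

```javascript
// Start a task with all four parameters; only "task" is required
const response = await callTool('run_task', {
  "task": "Review the attached quarterly report and summarize it",
  "model": "summary",                                   // model class or specific name
  "context": "The report covers Q3 sales for the EMEA region",
  "output": "A three-paragraph executive summary"
});
// Returns a task ID immediately, e.g. { "task_id": "...", "status": "pending" }
```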
### check_task_status

Check the status of a running task with real-time progress updates.

Parameters:

- `task_id` (required): The task ID returned from `run_task`

Returns: Current status, progress summary, recent events, and tool calls

### get_task_result

Get the final result of a completed task.

Parameters:

- `task_id` (required): The task ID returned from `run_task`

Returns: The complete output from the task

### cancel_task

Cancel a pending or running task.

Parameters:

- `task_id` (required): The task ID to cancel

Returns: Cancellation status

### list_tasks

List all tasks with their current status.

Parameters:

- `status_filter` (optional): Filter by status (`pending`, `running`, `completed`, `failed`, `cancelled`)

Returns: Task statistics and summaries
## Example Workflow

```javascript
// 1. Start a task
const startResponse = await callTool('run_task', {
  "model": "standard",
  "task": "Search for the latest AI news and summarize",
  "output": "A bullet-point summary of 5 recent AI developments"
});
// Returns: { "task_id": "abc-123", "status": "pending", ... }

// 2. Check progress
const statusResponse = await callTool('check_task_status', {
  "task_id": "abc-123"
});
// Returns: { "status": "running", "progress": "Searching for AI news...", ... }

// 3. Get result when complete
const resultResponse = await callTool('get_task_result', {
  "task_id": "abc-123"
});
// Returns: The complete summary
```
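In practice, clients usually poll `check_task_status` until the task reaches a terminal state. A minimal sketch - the `waitForTask` helper and the polling interval are illustrative, not part of the server API:

```javascript
// Poll until the task reaches a terminal state, then fetch the result
async function waitForTask(taskId, intervalMs = 5000) {
  for (;;) {
    const status = await callTool('check_task_status', { task_id: taskId });
    if (status.status === 'completed') {
      return callTool('get_task_result', { task_id: taskId });
    }
    if (status.status === 'failed' || status.status === 'cancelled') {
      throw new Error(`Task ${taskId} ended with status ${status.status}`);
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// The management tools follow the same pattern:
await callTool('cancel_task', { task_id: 'abc-123' });       // cancel a task
await callTool('list_tasks', { status_filter: 'running' });  // list running tasks
```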
## Supported Models

### Model Classes

- `reasoning`: Complex reasoning and analysis
- `vision`: Image and visual processing
- `standard`: General purpose tasks
- `mini`: Lightweight, fast responses
- `reasoning_mini`: Lightweight reasoning
- `code`: Code generation and analysis
- `writing`: Creative and professional writing
- `summary`: Text summarization
- `vision_mini`: Lightweight vision processing
- `long`: Long-form content generation
### Popular Models

- `claude-opus-4`: Anthropic's most powerful model
- `grok-4`: xAI's latest Grok model
- `gemini-2.5-pro`: Google's Gemini Pro
- `o3`, `o3-pro`: OpenAI's o3 models
- And any other model name supported by @just-every/ensemble
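The `model` parameter accepts either a model class or a concrete model name. A short sketch (the prompts are illustrative):

```javascript
// Select by model class - the server resolves a concrete model for you
await callTool('run_task', {
  "model": "code",
  "task": "Write a function that validates email addresses"
});

// Or pin a specific model by name
await callTool('run_task', {
  "model": "claude-opus-4",
  "task": "Write a function that validates email addresses"
});
```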
## Integrated Tools

Tasks have access to:

- Web Search: Search the web for information using @just-every/search
- Command Execution: Run shell commands via the `run_command` tool
## API Keys

The task runner requires API keys for the AI models you want to use. Add them to your `.llm.env` file:

```bash
# Core AI Models
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
XAI_API_KEY=your-xai-key         # For Grok models
GOOGLE_API_KEY=your-google-key   # For Gemini models

# Search Providers (optional, for web_search tool)
BRAVE_API_KEY=your-brave-key
SERPER_API_KEY=your-serper-key
PERPLEXITY_API_KEY=your-perplexity-key
OPENROUTER_API_KEY=your-openrouter-key
```
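At startup the server reads the file named by `ENV_FILE` into its environment. A minimal sketch of how such a loader works, assuming the common `dotenv` package (illustrative - not necessarily the package the server actually uses):

```javascript
// Load variables from the file pointed to by ENV_FILE, if set
import dotenv from 'dotenv';

if (process.env.ENV_FILE) {
  dotenv.config({ path: process.env.ENV_FILE });
}
```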
### Getting API Keys

- Anthropic: console.anthropic.com
- OpenAI: platform.openai.com
- xAI (Grok): x.ai
- Google (Gemini): makersuite.google.com
- Brave Search: brave.com/search/api
- Serper: serper.dev
- Perplexity: perplexity.ai
- OpenRouter: openrouter.ai
## Task Lifecycle

- Pending: Task created and queued
- Running: Task is being executed, with live progress via `taskStatus()`
- Completed: Task finished successfully
- Failed: Task encountered an error
- Cancelled: Task was cancelled by the user

Tasks are automatically cleaned up after 24 hours.
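For client code, the five statuses split into active and terminal groups, which is the distinction polling logic cares about. A tiny helper makes this explicit (the names are illustrative):

```javascript
// Statuses that will never change again - safe to stop polling
const TERMINAL_STATES = new Set(['completed', 'failed', 'cancelled']);

function isFinished(status) {
  return TERMINAL_STATES.has(status);
}
```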
## CLI Usage

The task runner can also be used directly from the command line:

```bash
# Run as MCP server (for debugging)
ENV_FILE=~/.llm.env npx @just-every/mcp-task

# Or if installed globally
npm install -g @just-every/mcp-task
ENV_FILE=~/.llm.env mcp-task serve
```
## Configuration

### Task Timeout Settings

The server includes safety mechanisms to prevent tasks from getting stuck. All timeouts are configurable via environment variables:

```bash
# Default production settings (optimized for long-running tasks)
TASK_TIMEOUT=18000000              # 5 hours max runtime (default)
TASK_STUCK_THRESHOLD=300000        # 5 minutes inactivity = stuck (default)
TASK_HEALTH_CHECK_INTERVAL=60000   # Check every 1 minute (default)

# For shorter tasks, you might prefer:
TASK_TIMEOUT=300000                # 5 minutes max runtime
TASK_STUCK_THRESHOLD=60000         # 1 minute inactivity
TASK_HEALTH_CHECK_INTERVAL=15000   # Check every 15 seconds

# Add to your .llm.env or pass as environment variables
```
Safety Features:

- Automatic timeout: Tasks exceeding `TASK_TIMEOUT` are automatically failed
- Inactivity detection: Tasks with no activity for `TASK_STUCK_THRESHOLD` are marked as stuck (sketched below)
- Health monitoring: Regular checks every `TASK_HEALTH_CHECK_INTERVAL` ensure tasks are progressing
- Error recovery: Uncaught exceptions and promise rejections are handled gracefully
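As a rough sketch of how inactivity detection of this kind fits together (illustrative only, not the server's actual implementation; `runningTasks` and `lastActivityAt` are hypothetical names):

```javascript
// Every TASK_HEALTH_CHECK_INTERVAL ms, fail tasks that have been silent
// for longer than TASK_STUCK_THRESHOLD ms
const STUCK_THRESHOLD = Number(process.env.TASK_STUCK_THRESHOLD ?? 300000);
const CHECK_INTERVAL = Number(process.env.TASK_HEALTH_CHECK_INTERVAL ?? 60000);

setInterval(() => {
  const now = Date.now();
  for (const task of runningTasks.values()) {  // hypothetical task registry
    if (task.status === 'running' && now - task.lastActivityAt > STUCK_THRESHOLD) {
      task.status = 'failed';
      task.error = 'Stuck: no activity within TASK_STUCK_THRESHOLD';
    }
  }
}, CHECK_INTERVAL);
```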
## Development

### Setup

```bash
# Clone the repository
git clone https://github.com/just-every/mcp-task.git
cd mcp-task

# Install dependencies
npm install

# Build for production
npm run build
```

### Development Mode

```bash
# Run in development mode with your env file
ENV_FILE=~/.llm.env npm run serve:dev
```

### Testing

```bash
# Run tests
npm test

# Type checking
npm run typecheck

# Linting
npm run lint
```
### Architecture

```
mcp-task/
├── src/
│   ├── serve.ts              # MCP server implementation
│   ├── index.ts              # CLI entry point
│   └── utils/
│       ├── task-manager.ts   # Async task lifecycle management
│       └── logger.ts         # Logging utilities
├── bin/
│   └── mcp-task.js           # Executable entry
└── package.json
```
## Contributing

Contributions are welcome! Please:

- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Submit a pull request
## Troubleshooting

### MCP Server Shows "Failed" in Claude

If you see "task ✘ failed" in Claude, check these common issues:

1. Missing API Keys: The most common cause. Check that your ENV_FILE is properly configured:

   ```bash
   # Test if ENV_FILE is working
   ENV_FILE=/path/to/your/.llm.env npx @just-every/mcp-task
   ```

2. Incorrect Installation Command: Make sure you're using `-e` for environment variables:

   ```bash
   # Correct - environment variable passed with -e flag before --
   claude mcp add task -s user -e ENV_FILE=$HOME/.llm.env -- npx -y @just-every/mcp-task

   # Incorrect - trying to pass as argument
   claude mcp add task -s user -- npx -y @just-every/mcp-task --env ENV_FILE=$HOME/.llm.env
   ```

3. Path Issues: ENV_FILE must use absolute paths:

   ```bash
   # Good
   ENV_FILE=/Users/yourname/.llm.env
   ENV_FILE=$HOME/.llm.env

   # Bad
   ENV_FILE=.env
   ENV_FILE=~/.llm.env   # ~ is not expanded in some contexts
   ```

4. Verify Installation: Check your MCP configuration:

   ```bash
   claude mcp list
   ```

5. Debug Mode: For detailed error messages, run manually:

   ```bash
   ENV_FILE=/path/to/.llm.env npx @just-every/mcp-task
   ```
### Task Not Progressing

- Check task status with `check_task_status` to see live progress
- Look for error messages prefixed with "ERROR:" in the output
- Verify API keys are properly configured
### Model Not Found

- Ensure the model name is correctly spelled
- Check that the required API keys are set for the model's provider
- Popular models: `claude-opus-4`, `grok-4`, `gemini-2.5-pro`, `o3`
### Task Cleanup

- Completed tasks are automatically cleaned up after 24 hours
- Use `list_tasks` to see all active and recent tasks
- Cancel stuck tasks with `cancel_task`
## License

MIT

## Author

Created by Just Every - Building powerful AI tools for developers.