Task Manager MCP Server
A template implementation of a Model Context Protocol (MCP) server for managing tasks and projects. This server provides a comprehensive task management system with support for project organization, task tracking, and PRD parsing.

Overview

This project demonstrates how to build an MCP server that enables AI agents to manage tasks, track project progress, and break down Product Requirements Documents (PRDs) into actionable tasks. It serves as a practical template for creating your own MCP servers with task management capabilities.

The implementation follows the best practices laid out by Anthropic for building MCP servers, allowing seamless integration with any MCP-compatible client.

Features

The server provides several essential task management tools:

  1. Task Management

    • create_task_file: Create new project task files
    • add_task: Add tasks to projects with descriptions and subtasks
    • update_task_status: Update the status of tasks and subtasks
    • get_next_task: Get the next uncompleted task from a project
  2. Project Planning

    • parse_prd: Convert PRDs into structured tasks automatically
    • expand_task: Break down tasks into smaller, manageable subtasks
    • estimate_task_complexity: Estimate task complexity and time requirements
    • get_task_dependencies: Track task dependencies
  3. Development Support

    • generate_task_file: Generate file templates based on task descriptions
    • suggest_next_actions: Get AI-powered suggestions for next steps
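
To make the tools above concrete: the exact on-disk format of a project task file is defined by the server implementation, but as a purely hypothetical sketch, a file created by create_task_file and populated by add_task might look something like this:

```
# my-project

## Setup Development Environment
Status: pending
Description: Configure the development environment with required tools
Subtasks:
- [ ] Install dependencies
- [ ] Configure linters
- [ ] Set up testing framework
```

Tools like update_task_status and get_next_task would then read and rewrite this file; check the repository source for the actual format.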

Prerequisites

  • Python 3.12+
  • API keys for your chosen LLM provider (OpenAI, OpenRouter, or Ollama)
  • Docker if running the MCP server as a container (recommended)

Installation

Using uv

  1. Install uv if you don't have it:

    pip install uv
    
  2. Clone this repository:

    git clone https://github.com/coleam00/mcp-mem0.git
    cd mcp-mem0
    
  3. Install dependencies:

    uv pip install -e .
    
  4. Create a .env file based on .env.example:

    cp .env.example .env
    
  5. Configure your environment variables in the .env file (see Configuration section)

Using Docker (Recommended)

  1. Build the Docker image:

    docker build -t mcp/mem0 --build-arg PORT=8050 .
    
  2. Create a .env file based on .env.example and configure your environment variables

Configuration

The following environment variables can be configured in your .env file:

  • TRANSPORT: Transport protocol, sse or stdio (example: sse)
  • HOST: Host to bind to when using SSE transport (example: 0.0.0.0)
  • PORT: Port to listen on when using SSE transport (example: 8050)
  • LLM_PROVIDER: LLM provider, one of openai, openrouter, or ollama (example: openai)
  • LLM_BASE_URL: Base URL for the LLM API (example: https://api.openai.com/v1)
  • LLM_API_KEY: API key for the LLM provider (example: sk-...)
  • LLM_CHOICE: LLM model to use for task analysis (example: gpt-4)
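
Putting those variables together, a minimal .env for the SSE transport with OpenAI might look like this (the API key is a placeholder):

```
TRANSPORT=sse
HOST=0.0.0.0
PORT=8050
LLM_PROVIDER=openai
LLM_BASE_URL=https://api.openai.com/v1
LLM_API_KEY=sk-your-key-here
LLM_CHOICE=gpt-4
```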

Running the Server

Using Python 3

# Set TRANSPORT=sse in .env then:
python3 src/main.py

The server will start on the configured host and port (default: http://0.0.0.0:8050).

Using Docker

docker build -t task-manager-mcp .
docker run --env-file .env -p 8050:8050 task-manager-mcp

Using the Task Manager

Creating a New Project

  1. Create a task file for your project:
await mcp.create_task_file(project_name="my-project")
  2. Add tasks to your project:
await mcp.add_task(
    project_name="my-project",
    title="Setup Development Environment",
    description="Configure the development environment with required tools",
    subtasks=[
        "Install dependencies",
        "Configure linters",
        "Set up testing framework"
    ]
)
  3. Parse a PRD to create tasks automatically:
await mcp.parse_prd(
    project_name="my-project",
    prd_content="# Your PRD content..."
)

Managing Tasks

  1. Update task status:
await mcp.update_task_status(
    project_name="my-project",
    task_title="Setup Development Environment",
    subtask_title="Install dependencies",
    status="done"
)
  2. Get the next task to work on:
next_task = await mcp.get_next_task(project_name="my-project")
  3. Expand a task into subtasks:
await mcp.expand_task(
    project_name="my-project",
    task_title="Implement Authentication"
)

Development Workflow

  1. Generate a file template for a task:
await mcp.generate_task_file(
    project_name="my-project",
    task_title="User Authentication"
)
  2. Get task complexity estimate:
complexity = await mcp.estimate_task_complexity(
    project_name="my-project",
    task_title="User Authentication"
)
  3. Get suggestions for next actions:
suggestions = await mcp.suggest_next_actions(
    project_name="my-project",
    task_title="User Authentication"
)
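
A typical agent workflow combines get_next_task and update_task_status in a loop: fetch the next open task, do the work, mark it done, repeat until nothing is left. The sketch below illustrates that polling pattern with a stand-in client (an assumption, not the real MCP session — in practice these calls go through your MCP client):

```python
import asyncio

class StubTaskClient:
    """Stand-in for the MCP task-manager tools, used only to
    illustrate the polling pattern shown in the sections above."""

    def __init__(self, tasks):
        # tasks: list of dicts with "title" and "status" keys
        self.tasks = tasks

    async def get_next_task(self, project_name):
        # Mirror get_next_task: return the first task not yet done
        for task in self.tasks:
            if task["status"] != "done":
                return task
        return None

    async def update_task_status(self, project_name, task_title, status):
        # Mirror update_task_status for top-level tasks
        for task in self.tasks:
            if task["title"] == task_title:
                task["status"] = status

async def work_through(client, project_name):
    """Poll for the next open task until the project is finished."""
    completed = []
    while True:
        task = await client.get_next_task(project_name)
        if task is None:
            break
        # ... do the actual work for the task here ...
        await client.update_task_status(project_name, task["title"], "done")
        completed.append(task["title"])
    return completed

client = StubTaskClient([
    {"title": "Setup Development Environment", "status": "done"},
    {"title": "Implement Authentication", "status": "pending"},
    {"title": "Write Tests", "status": "pending"},
])
done = asyncio.run(work_through(client, "my-project"))
print(done)  # → ['Implement Authentication', 'Write Tests']
```

With the real server, the same loop drives an agent through a project file one task at a time.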

Integration with MCP Clients

SSE Configuration

To connect to the server using SSE transport, use this configuration:

{
  "mcpServers": {
    "task-manager": {
      "transport": "sse",
      "url": "http://localhost:8050/sse"
    }
  }
}

Stdio Configuration

For stdio transport, use this configuration:

{
  "mcpServers": {
    "task-manager": {
      "command": "python3",
      "args": ["src/main.py"],
      "env": {
        "TRANSPORT": "stdio",
        "LLM_PROVIDER": "openai",
        "LLM_API_KEY": "YOUR-API-KEY",
        "LLM_CHOICE": "gpt-4"
      }
    }
  }
}

Building Your Own Server

This template provides a foundation for building more complex task management MCP servers. To extend it:

  1. Add new task management tools using the @mcp.tool() decorator
  2. Implement custom task analysis and automation features
  3. Add project-specific task templates and workflows
  4. Integrate with your existing development tools and processes
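
The @mcp.tool() decorator in step 1 comes from the MCP Python SDK. As a rough illustration of the registration pattern it relies on (not the SDK's actual implementation — the registry class and the archive_completed_tasks tool below are hypothetical), a decorator-based tool registry can be sketched like this:

```python
import inspect

class ToolRegistry:
    """Minimal stand-in for an MCP server's tool registration,
    showing how a decorator can collect handlers by name."""

    def __init__(self, name):
        self.name = name
        self.tools = {}

    def tool(self):
        # Parentheses mirror the @mcp.tool() usage from the SDK
        def register(func):
            self.tools[func.__name__] = {
                "handler": func,
                "description": inspect.getdoc(func) or "",
            }
            return func
        return register

mcp = ToolRegistry("task-manager")

@mcp.tool()
def archive_completed_tasks(project_name: str) -> int:
    """Move tasks marked done into the project's archive section."""
    # Real logic would rewrite the project's task file; stubbed here
    return 0

print(sorted(mcp.tools))  # → ['archive_completed_tasks']
```

In the actual server, each registered handler is exposed to MCP clients along with its docstring and type hints, which is why descriptive docstrings on tool functions matter.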
