WeChat MCP Server

This project provides an MCP server that automates WeChat on macOS using the Accessibility API and screen capture. It exposes tools that LLMs can call to:

  • Fetch recent messages for a specific contact
  • Generate and send a reply to a contact based on recent history

Environment setup (using uv)

This project uses uv for dependency and environment management.

  1. Install uv (if not already installed):

    curl -LsSf https://astral.sh/uv/install.sh | sh
    
  2. From the project root, create/sync the environment:

    cd WeChat-MCP
    uv sync
    

    This will create a virtual environment (if needed) and install dependencies defined in pyproject.toml.

Add the MCP server to your client configuration

<details> <summary>Claude Code</summary>

claude mcp add --transport stdio wechat-mcp -- uv --directory $(pwd) run wechat-mcp

</details>
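
For MCP clients configured through a JSON file rather than the claude CLI (for example Claude Desktop), a roughly equivalent entry might look like the sketch below; the absolute path to your WeChat-MCP checkout is illustrative.

    {
      "mcpServers": {
        "wechat-mcp": {
          "command": "uv",
          "args": ["--directory", "/path/to/WeChat-MCP", "run", "wechat-mcp"]
        }
      }
    }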

The MCP server entrypoint is wechat_mcp.mcp_server:main, exposed as the wechat-mcp console script.

Typical invocation:

uv run wechat-mcp --transport stdio

Supported transports:

  • stdio (default)
  • streamable-http (with --port, default 3001)
  • sse (with --port, default 3001)

Example:

uv run wechat-mcp --transport streamable-http --port 3001

Tools exposed to MCP clients

The server is implemented in src/wechat_mcp/mcp_server.py and defines two @mcp.tool() functions:

  • fetch_messages_by_contact(contact_name: str, last_n: int = 50) -> list[dict]
    Opens the chat for contact_name (first via the left session list, then via the search box if needed), then uses scrolling plus screenshots to collect the most recent last_n messages, even when they span multiple screens of history. Each message is a JSON object:

    {
      "sender": "ME" | "OTHER" | "UNKNOWN",
      "text": "message text"
    }
    
  • reply_to_messages_by_contact(contact_name: str, reply_message: str | None = None, last_n: int = 50) -> dict
    Ensures the chat for contact_name is open (skipping an extra click when the current chat already matches) and, if reply_message is provided, sends it using the Accessibility-based send_message helper. This tool is intended to be driven by the LLM that is already using this MCP server: first call fetch_messages_by_contact, compose a reply, then call this tool with that reply (see the client sketch after this section). Returns:

    {
      "contact_name": "The contact",
      "reply_message": "The message that was sent (or null)",
      "sent": true
    }
    

If an error occurs, the tools return an object containing an "error" field describing the issue.
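
The intended fetch-then-reply flow can also be driven programmatically. The sketch below uses the official mcp Python SDK as a stdio client (assumed to be installed separately); the contact name "Alice" and the reply text are illustrative, while the tool names and arguments follow the descriptions above.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client


    async def main() -> None:
        # Launch the server over stdio, mirroring "uv run wechat-mcp --transport stdio".
        params = StdioServerParameters(
            command="uv", args=["run", "wechat-mcp", "--transport", "stdio"]
        )

        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()

                # 1. Fetch recent history for the contact.
                history = await session.call_tool(
                    "fetch_messages_by_contact",
                    {"contact_name": "Alice", "last_n": 20},
                )
                print(history.content)  # the LLM would compose a reply from this

                # 2. Send the composed reply through the second tool.
                await session.call_tool(
                    "reply_to_messages_by_contact",
                    {"contact_name": "Alice", "reply_message": "Sounds good, see you tomorrow!"},
                )


    asyncio.run(main())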

Logging

The project has a comprehensive logging setup:

  • Logs are written to a rotating file under the logs/ directory (by default logs/wechat_mcp.log)
  • Logs are also sent to the terminal (stdout)

You can customize the log directory via:

  • WECHAT_MCP_LOG_DIR – directory path where .log files should be stored (defaults to logs under the current working directory)
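
For example, to write logs to a custom directory (the path is illustrative):

WECHAT_MCP_LOG_DIR=/tmp/wechat-mcp-logs uv run wechat-mcp --transport stdio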

macOS and Accessibility requirements

Because this project interacts with WeChat via the macOS Accessibility API:

  • WeChat must be running (bundle identifier com.tencent.xinWeChat)
  • The Python process (or the terminal app running it) must have Accessibility permissions enabled in System Settings → Privacy & Security → Accessibility

The helper scripts and MCP tools rely on:

  • Accessibility tree inspection to find chat lists, search fields, and message lists
  • Screen capture to classify message senders (ME vs OTHER vs UNKNOWN)
  • Synthetic keyboard events to search, focus inputs, and send messages
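
As a quick pre-flight check, the sketch below verifies both requirements from Python. It assumes the pyobjc bindings for AppKit and ApplicationServices are available in the environment; that is plausible for a project built on the Accessibility API, but not confirmed here.

    # Pre-flight check: is Accessibility access granted, and is WeChat running?
    from AppKit import NSWorkspace
    from ApplicationServices import AXIsProcessTrusted

    WECHAT_BUNDLE_ID = "com.tencent.xinWeChat"

    if not AXIsProcessTrusted():
        print("Grant Accessibility access to this process in "
              "System Settings → Privacy & Security → Accessibility.")

    wechat_running = any(
        app.bundleIdentifier() == WECHAT_BUNDLE_ID
        for app in NSWorkspace.sharedWorkspace().runningApplications()
    )
    print(f"WeChat running: {wechat_running}")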

TODO

  • [x] Detect and switch to contact by clicking
  • [x] Scroll to get full/more history messages
