mma-mcp
A Model Context Protocol (MCP) server that wraps a local Wolfram Engine, enabling AI assistants (Claude, ChatGPT, etc.) to perform symbolic math, numerical analysis, and data visualization via Wolfram Language.
Disclaimer: This is an unofficial, independent, personal project. It is not affiliated with, sponsored by, endorsed by, or certified by Wolfram Research, Inc. "Wolfram", "Wolfram Language", "Wolfram Engine", "Mathematica", and related marks are trademarks of Wolfram Research.
This software does not include any Wolfram Engine / Mathematica binaries, activation keys, license files, or other proprietary materials. Users must independently obtain and properly license their own copy of the Wolfram Engine or Mathematica in accordance with Wolfram's licensing terms.
The sole purpose of this project is to allow a licensed individual to invoke their own, locally-installed Wolfram kernel through AI assistants on their own machine, within the scope permitted by their license. Redistribution of Wolfram Engine access to third parties is not an intended use case and may violate Wolfram's licensing terms.
Features
- MCP Tools: `evaluate(text)` and `evaluate_image(PNG, experimental)`, exposing all Wolfram Language capabilities through two universal tools
- Transports: stdio (local) and Streamable HTTP
- Security: Pre-kernel expression filtering with blacklist/whitelist modes and 29 capability groups
- Client RBAC: Per-client credentials, per-role tool and security policy control — for isolating different AI clients on the same machine
- OAuth 2.1: Authorization server for web-based MCP clients (Claude.ai, ChatGPT)
- Config-driven: Single TOML file controls all behavior
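Under MCP, both tools are invoked through the standard `tools/call` method. A request for `evaluate` looks roughly like the sketch below (JSON-RPC 2.0 framing per the MCP specification; the argument name `text` follows the tool signature above):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "evaluate",
    "arguments": { "text": "Integrate[Sin[x]^2, x]" }
  }
}
```

The server evaluates the expression in the Wolfram kernel and returns the result as MCP tool output.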
Prerequisites
- Python 3.11+
- Wolfram Engine or Mathematica (properly licensed)
- uv package manager
Quick Start
```shell
# Clone and install
git clone https://github.com/siqiliu-tsinghua/mma-mcp.git
cd mma-mcp
uv sync

# Graphics export dependencies (headless servers only — desktops already have these)
sudo apt-get install -y libfontconfig1 libgl1 libasound2t64 libxkbcommon0 libegl1

# Generate default config
uv run mma-mcp init

# Generate security group files (requires Wolfram kernel, ~1 min)
uv run mma-mcp setup

# Start server (stdio, for local MCP clients)
uv run mma-mcp serve
```
Client Configuration
Claude Code / VS Code (stdio)
Add to your .mcp.json:
```json
{
  "mcpServers": {
    "mma-mcp": {
      "command": "uv",
      "args": ["--directory", "/path/to/mma-mcp", "run", "mma-mcp"]
    }
  }
}
```
Claude Desktop (stdio)
Add to your claude_desktop_config.json (Settings -> Developer -> Edit Config):
```json
{
  "mcpServers": {
    "mma-mcp": {
      "command": "/path/to/mma-mcp/.venv/bin/mma-mcp"
    }
  }
}
```
On macOS the config lives at ~/Library/Application Support/Claude/claude_desktop_config.json; on Linux, at ~/.config/Claude/claude_desktop_config.json.
HTTP Transport
```shell
uv run mma-mcp serve --transport http --host 127.0.0.1 --port 8000
```
Configuration
All settings live in mma_mcp.toml (or pyproject.toml under [tool.mma-mcp]).
```shell
uv run mma-mcp init  # generates mma_mcp.toml with comments
```
Key sections:
| Section | Description |
|---|---|
| `[kernel]` | Wolfram kernel path, timeout, output format |
| `[server]` | Transport mode, host, port |
| `[security]` | Blacklist/whitelist mode, capability groups |
| `[tools]` | Which MCP tools to expose |
| `[tls]` | Domain and DNS provider for HTTPS (Caddy) |
| `[auth]` | Client identity and role-based access control |
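A minimal config might look like the sketch below; the key names are illustrative, so run `uv run mma-mcp init` to get the authoritative commented file:

```toml
# Hypothetical sketch — key names may differ from the generated file.
[kernel]
path = "/usr/local/bin/WolframKernel"
timeout = 60

[server]
transport = "stdio"

[security]
mode = "blacklist"
```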
Security
Expressions are filtered before reaching the Wolfram kernel. Symbols are extracted via regex and checked against the active policy.
- Blacklist mode (default): blocks dangerous groups (system_exec, file I/O, networking, dynamic eval).
- Whitelist mode: only allows symbols from explicitly enabled groups.
29 capability groups (22 safe + 7 dangerous) cover ~6000 Wolfram Language symbols. Regenerate from your local kernel:
```shell
uv run mma-mcp setup          # required after cloning (generates from your local kernel)
uv run mma-mcp setup --force  # force regeneration (e.g., after Wolfram Engine upgrade)
```
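The filtering idea can be sketched in a few lines of Python. Note this is an illustration, not the project's actual implementation: the symbol names below are hand-picked examples, not the real generated group files.

```python
import re

# Hypothetical blacklist — illustrative symbols, not the real group files.
DANGEROUS = {"Run", "RunProcess", "Import", "URLFetch", "DeleteFile", "ToExpression"}

# Wolfram Language built-in symbols start with an uppercase letter.
SYMBOL_RE = re.compile(r"[A-Z][A-Za-z0-9$]*")

def check_blacklist(expr: str) -> list[str]:
    """Return the blocked symbols found in a Wolfram Language expression."""
    return sorted(set(SYMBOL_RE.findall(expr)) & DANGEROUS)

print(check_blacklist("Integrate[Sin[x], x]"))  # -> []
print(check_blacklist('Run["rm -rf /"]'))       # -> ['Run']
```

An expression only reaches the kernel if the check returns no blocked symbols; in whitelist mode the set membership test is inverted against the enabled groups.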
Client Identity & Roles
When using HTTP transport, you can configure per-client credentials and roles to isolate different AI clients (e.g., Claude and ChatGPT) connecting to the same kernel:
```shell
# Generate password hash
uv run mma-mcp hash-password

# Generate TOML snippet for a new client
uv run mma-mcp add-client alice --role admin
```
Each client is bound to a role that controls which tools it can access, which Wolfram symbols it can use, and resource limits (timeout, result size). Concurrent clients are isolated via a kernel worker pool — each tool call runs in an exclusive kernel process with a temporary WL context.
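The worker-pool isolation can be sketched as follows; this is an illustrative model, not the project's actual implementation, using a blocking queue so each call holds one kernel exclusively:

```python
import queue

# Illustrative sketch (not the project's actual code): a pool of kernel
# handles where each tool call checks out one kernel exclusively.
class KernelPool:
    def __init__(self, kernels):
        self._q = queue.Queue()
        for k in kernels:
            self._q.put(k)

    def run(self, evaluate, expr):
        kernel = self._q.get()            # block until a kernel is free
        try:
            return evaluate(kernel, expr) # exclusive use for this call
        finally:
            self._q.put(kernel)           # return kernel to the pool

pool = KernelPool(["kernel-1", "kernel-2"])
result = pool.run(lambda k, e: f"{k}: {e} -> 2", "1+1")
```

Because a kernel is only returned to the pool after the call finishes, two concurrent clients can never share kernel state within a single evaluation.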
See the [auth] section in mma_mcp.toml for configuration details.
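The shape of a per-client entry (as emitted by `mma-mcp add-client`) is roughly like the hypothetical sketch below; the field names are illustrative, so use the generated snippet as the source of truth:

```toml
# Hypothetical sketch — use `mma-mcp add-client` for the real snippet.
[auth.clients.alice]
password_hash = "..."  # from `mma-mcp hash-password`
role = "admin"

[auth.roles.admin]
tools = ["evaluate", "evaluate_image"]
security_mode = "blacklist"
timeout = 120
```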
Development
```shell
# Run tests
uv run pytest tests/ -v

# Inspect MCP tools interactively
uv run mcp dev src/mma_mcp/server.py
```
CLI Commands
| Command | Description |
|---|---|
| `mma-mcp serve` | Start the MCP server (default) |
| `mma-mcp init` | Generate default mma_mcp.toml |
| `mma-mcp setup` | Generate security group JSONs from the local kernel |
| `mma-mcp caddyfile` | Generate a Caddyfile for HTTPS |
| `mma-mcp hash-password` | Hash a password for the config |
| `mma-mcp add-client` | Generate a TOML snippet for a new AI client |
Client Compatibility
| Client | Long computations | Notes |
|---|---|---|
| Claude.ai | ✔ Supported | Sends progressToken; server heartbeat keeps connection alive |
| ChatGPT | ✘ May timeout | Does not send progressToken; has a hard timeout (~60s) independent of server heartbeat |
| Claude Desktop / Claude Code | Not tested | Local stdio transport |
License
MIT — applies only to the code in this repository. Use of Wolfram Engine / Mathematica is governed by Wolfram Research's own license terms.