Grok CLI MCP Server

Provides seamless access to Grok AI models through the Model Context Protocol by wrapping the official Grok CLI, offering tools for general queries, multi-turn conversations, and code generation.

README

grok-cli-mcp

PyPI version · Python 3.10+ · License: MIT

MCP server that wraps the Grok CLI, providing seamless access to Grok AI models through the Model Context Protocol.

What is this?

grok-cli-mcp is a Model Context Protocol (MCP) server that acts as a bridge between MCP clients (like Claude Code, Cline, Cursor) and the Grok CLI. Instead of implementing direct API calls, it leverages the official Grok CLI tool, providing:

  • Three specialized tools: grok_query (general queries), grok_chat (multi-turn conversations), grok_code (code generation)
  • Simple configuration: Just install the Grok CLI and set your API key
  • Future-proof: Automatically benefits from CLI improvements (OAuth, pricing plans, etc.)
  • Minimal maintenance: No need to track Grok API changes

Why a CLI Wrapper?

Benefits

Leverage existing tooling: Uses the official Grok CLI, ensuring compatibility and stability

Future OAuth support: When Grok CLI adds OAuth authentication, this wrapper will support it automatically without code changes

Fixed pricing plans: Can benefit from fixed monthly pricing (like Codex/ChatGPT/Gemini) when Grok introduces CLI-specific plans, rather than paying per API token

Organization-friendly: Many organizations prefer audited CLI tools over direct API integrations for security and compliance

Simpler codebase: ~400 lines vs 1500+ for a full API client implementation

Fewer dependencies: No HTTP client libraries, request/response handling, or complex networking code

Automatic updates: CLI bug fixes and new features propagate without code changes

Tradeoffs

⚠️ Performance overhead: Extra process spawning adds ~50-200ms latency per request

⚠️ CLI dependency: Requires Grok CLI to be installed and in PATH

⚠️ Limited control: Can't access low-level API features not exposed by CLI

⚠️ Error handling: CLI error messages may be less structured than API responses

⚠️ No streaming: Streaming support is limited to whatever the Grok CLI itself exposes (if anything)

When to use this

Perfect for:

  • Development and prototyping workflows
  • Internal tools and automation (<100 req/min)
  • Organizations preferring CLI tools over API libraries
  • Workflows where convenience matters more than milliseconds
  • Teams wanting to benefit from future CLI-specific pricing/features

Consider direct API for:

  • High-throughput production systems (>1000 req/min)
  • Latency-critical applications (<50ms requirements)
  • Advanced API features not exposed by CLI
  • Streaming response requirements

Prerequisites

Before installing grok-cli-mcp, ensure you have:

  1. Grok CLI: Install from X.AI's documentation

    # Installation instructions vary by platform
    # See https://docs.x.ai/docs for latest instructions
    
  2. Python 3.10+: Check your version

    python3 --version
    
  3. Grok API Key: Obtain from X.AI console

Installation

Option 1: Install from PyPI (Recommended)

pip install grok-cli-mcp

Option 2: Install with uv

uv pip install grok-cli-mcp

Option 3: Install with pipx (isolated environment)

pipx install grok-cli-mcp

Option 4: Install from source

git clone https://github.com/BasisSetVentures/grok-cli-mcp.git
cd grok-cli-mcp
pip install -e .

Option 5: Development installation

git clone https://github.com/BasisSetVentures/grok-cli-mcp.git
cd grok-cli-mcp
pip install -e ".[dev]"

Quick Start

1. Set up your environment

# Required: Set your Grok API key
export GROK_API_KEY="your-api-key-here"

# Optional: Specify custom Grok CLI path
export GROK_CLI_PATH="/custom/path/to/grok"

For permanent setup, add to your shell profile (~/.bashrc, ~/.zshrc, etc.):

echo 'export GROK_API_KEY="your-api-key-here"' >> ~/.bashrc
source ~/.bashrc

2. Test the server

# Run the server directly
python -m grok_cli_mcp

# Or use the command
grok-mcp

# Should start and wait for stdin (Ctrl+C to exit)

3. Configure for MCP clients

For Claude Code

Add to your .mcp.json:

{
  "mcpServers": {
    "grok": {
      "type": "stdio",
      "command": "python",
      "args": ["-m", "grok_cli_mcp"],
      "env": {
        "GROK_API_KEY": "your-api-key-here"
      }
    }
  }
}

For Cline (VS Code)

Add to ~/.cline/mcp_settings.json:

{
  "mcpServers": {
    "grok": {
      "command": "python",
      "args": ["-m", "grok_cli_mcp"],
      "env": {
        "GROK_API_KEY": "your-api-key-here"
      }
    }
  }
}

For Cursor

Add to ~/.cursor/mcp.json:

{
  "grok": {
    "command": "python",
    "args": ["-m", "grok_cli_mcp"],
    "env": {
      "GROK_API_KEY": "your-api-key-here"
    }
  }
}

⚠️ Security Warning: Never commit API keys to version control. Use environment variables or a secrets manager.

Usage Examples

Tool: grok_query

Send a simple prompt to Grok:

{
  "tool": "grok_query",
  "arguments": {
    "prompt": "Explain quantum computing in simple terms",
    "model": "grok-code-fast-1",
    "timeout_s": 120
  }
}

Response: Plain text answer from Grok
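
You can also exercise the tool outside of an MCP client, for example with the official MCP Python SDK. The following is a minimal sketch, assuming the mcp package is installed (pip install mcp); it launches the server over stdio and calls grok_query with the same prompt as above.

import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(
    command="python",
    args=["-m", "grok_cli_mcp"],
    # Pass the current environment plus the key so the server (and the Grok CLI
    # it spawns) can still find PATH, HOME, etc.
    env={**os.environ, "GROK_API_KEY": "your-api-key-here"},
)

async def main() -> None:
    # Start the server as a subprocess and talk to it over stdio
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "grok_query",
                {"prompt": "Explain quantum computing in simple terms"},
            )
            print(result.content)

asyncio.run(main())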

Tool: grok_chat

Multi-turn conversation with message history:

{
  "tool": "grok_chat",
  "arguments": {
    "messages": [
      {"role": "user", "content": "What is MCP?"},
      {"role": "assistant", "content": "MCP is Model Context Protocol..."},
      {"role": "user", "content": "How does it work?"}
    ],
    "model": "grok-code-fast-1",
    "timeout_s": 120
  }
}

Response: Grok's answer considering the conversation history

Tool: grok_code

Code generation with language hints and context:

{
  "tool": "grok_code",
  "arguments": {
    "task": "Create a Python function to parse JSON with error handling",
    "language": "python",
    "context": "Using standard library only, no external dependencies",
    "timeout_s": 180
  }
}

Response: Complete, usable Python code with explanations

Advanced: Raw Output Mode

Get structured response with full details:

{
  "tool": "grok_query",
  "arguments": {
    "prompt": "Explain async/await",
    "raw_output": true
  }
}

Response:

{
  "text": "Async/await is...",
  "messages": [{"role": "assistant", "content": "..."}],
  "raw": "...",
  "model": "grok-code-fast-1"
}

Configuration

Environment Variables

Variable        Required   Default                  Description
GROK_API_KEY    Yes        -                        Your Grok API key from X.AI console
GROK_CLI_PATH   No         /opt/homebrew/bin/grok   Path to Grok CLI binary

Model Selection

Available models (as of 2025-12):

  • grok-code-fast-1 - Fast model for code tasks
  • grok-2 - Main model for general tasks
  • Other models per Grok CLI documentation

Specify the model in each tool call, or omit it to use the CLI default.

Timeout Configuration

Default timeouts by tool:

  • grok_query: 120 seconds
  • grok_chat: 120 seconds
  • grok_code: 180 seconds

Adjust via timeout_s parameter for complex tasks.

Troubleshooting

"Grok CLI not found"

Problem: Server can't locate the Grok CLI binary

Solutions:

  1. Verify installation:
    which grok
    
  2. Set explicit path:
    export GROK_CLI_PATH="/path/to/grok"
    
  3. Add to PATH:
    export PATH="$PATH:/opt/homebrew/bin"
    

"GROK_API_KEY is not set"

Problem: API key not in environment

Solutions:

  1. Export in shell:
    export GROK_API_KEY="xai-..."
    
  2. Add to shell profile (.bashrc, .zshrc):
    echo 'export GROK_API_KEY="xai-..."' >> ~/.zshrc
    source ~/.zshrc
    
  3. Use a .env file with python-dotenv (see examples/.env.example), as sketched below
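
For the .env approach, a minimal sketch with python-dotenv might look like this (assuming python-dotenv is installed and the .env file sits in your working directory):

import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env from the current directory into os.environ

if not os.environ.get("GROK_API_KEY"):
    raise RuntimeError("GROK_API_KEY is not set; add it to .env or export it in your shell")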

"Grok CLI timed out"

Problem: Request took too long

Solutions:

  1. Increase timeout:
    {"timeout_s": 300}
    
  2. Simplify prompt or break into smaller requests
  3. Check network connectivity

JSON parsing errors

Problem: CLI output isn't valid JSON

Solutions:

  1. Update Grok CLI to latest version:
    # Update instructions vary by installation method
    
  2. Check for CLI warnings/errors
  3. Use raw_output=true to see raw CLI response:
    {"raw_output": true}
    

Permission errors

Problem: Can't execute Grok CLI

Solutions:

  1. Make CLI executable:
    chmod +x /path/to/grok
    
  2. Check file ownership and permissions
  3. Verify CLI works standalone:
    grok -p "test"
    

For more solutions, see docs/troubleshooting.md.

Security Best Practices

Never Commit Secrets

❌ DO NOT:

  • Commit .env files with real API keys
  • Include API keys in .mcp.json tracked by git
  • Share API keys in issues or pull requests
  • Hardcode keys in Python files

✅ DO:

  • Use environment variables: export GROK_API_KEY="..."
  • Use shell RC files: ~/.bashrc, ~/.zshrc
  • Use secrets managers in production: AWS Secrets Manager, HashiCorp Vault
  • Rotate keys immediately if accidentally exposed

Obtaining API Keys

  1. Visit X.AI Console
  2. Sign in with your X.AI account
  3. Navigate to API Keys section
  4. Generate a new key
  5. Store securely (1Password, Bitwarden, etc.)
  6. Set as environment variable

Key Rotation

If you accidentally expose your API key:

  1. Immediately revoke the key in X.AI console
  2. Generate a new key
  3. Update environment variables
  4. Check git history for exposed keys
  5. Consider using tools like gitleaks to scan for secrets

Reporting Security Issues

Do NOT open public issues for security vulnerabilities.

Please report security concerns responsibly through GitHub Security Advisories or by contacting the maintainers directly.

Architecture & Design

This project follows a CLI wrapper pattern rather than direct API integration. Key design decisions (illustrated in the sketch after this list):

  1. Process isolation: Each Grok request spawns a subprocess for CLI execution
  2. JSON parsing with fallback: Attempts structured parsing, falls back to raw output
  3. Context propagation: Uses FastMCP's Context for logging and progress updates
  4. Async execution: All operations are async-first for non-blocking behavior

For detailed architecture discussion, see docs/architecture.md.

Development

Running tests

# Install dev dependencies
pip install -e ".[dev]"

# Run all tests
pytest

# Run with coverage
pytest --cov=grok_cli_mcp --cov-report=html

# Run specific test file
pytest tests/test_utils.py

Code formatting

# Format code
black .

# Lint code
ruff check --fix .

Type checking

mypy src/

Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Please ensure:

  • Tests pass (pytest)
  • Code is formatted (black, ruff)
  • Type hints are correct (mypy)
  • Documentation is updated

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

Support


Made by Basis Set Ventures with Claude Code and FastMCP
