Grok-MCP
An MCP server for xAI's Grok API, providing access to capabilities including image understanding, image generation, live web search, and reasoning models.
<a href="https://glama.ai/mcp/servers/@merterbak/Grok-MCP"> <img width="380" height="200" src="https://glama.ai/mcp/servers/@merterbak/Grok-MCP/badge" /> </a>
🚀 Features
- Multiple Grok Models: Access to Grok-4, Grok-4-Fast, Grok-3-Mini, and more
- Image Generation: Create images using Grok's image generation models
- Vision Capabilities: Analyze images with Grok's vision models
- Live Web Search: Real-time web search with citations from news, web, X, and RSS feeds
- Reasoning Models: Advanced reasoning with extended thinking models (Grok-3-Mini, Grok-4)
- Stateful Conversations: Use this newly released feature to maintain conversation context, referenced by a response ID, across multiple requests
- Conversation History: Built-in support for multi-turn conversations
📋 Prerequisites
- Python 3.11 or higher
- xAI API key (Get one here)
- uv package manager
🛠️ Installation
- Clone the repository:
git clone https://github.com/merterbak/Grok-MCP.git
cd Grok-MCP
- Install dependencies using uv:
uv sync
🔧 Configuration
Claude Desktop Integration
Add this to your Claude Desktop configuration file:
{
  "mcpServers": {
    "grok": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/Grok-MCP",
        "run",
        "python",
        "main.py"
      ],
      "env": {
        "XAI_API_KEY": "your_api_key_here"
      }
    }
  }
}
Usage
For stdio:
uv run python main.py
📚 Available Tools
1. list_models
List all available Grok models with creation dates and ownership information.
2. chat
Standard chat completion with extensive customization options.
Parameters:
- prompt (required): Your message
- model: Model to use (default: "grok-4-fast")
- system_prompt: Optional system instruction
- use_conversation_history: Enable multi-turn conversations
- temperature, max_tokens, top_p: Generation parameters
- presence_penalty, frequency_penalty, stop: Advanced control
- reasoning_effort: For reasoning models ("low" or "high")
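As a sketch, the arguments an MCP client might pass in a tools/call request for the chat tool could look like the following. The parameter names come from the list above; the values and the surrounding call shape are purely illustrative, not part of the server's contract:

```python
# Hypothetical tools/call arguments for the chat tool. Parameter names
# follow the documented list; all values are made up for illustration.
chat_args = {
    "prompt": "Summarize the latest MCP spec changes.",
    "model": "grok-4-fast",
    "system_prompt": "You are a concise assistant.",
    "use_conversation_history": True,
    "temperature": 0.7,
    "max_tokens": 512,
    "top_p": 0.95,
    "reasoning_effort": "low",  # only meaningful for reasoning models
}

# The MCP tools/call envelope pairs the tool name with its arguments.
tool_call = {"name": "chat", "arguments": chat_args}
```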
3. chat_with_reasoning
Get detailed reasoning along with the response.
Parameters:
- prompt (required): Your question or task
- model: "grok-4", "grok-3-mini", or "grok-3-mini-fast"
- reasoning_effort: "low" or "high" (not for grok-4)
- system_prompt, temperature, max_tokens, top_p
Returns: Content, reasoning content, and usage statistics
4. chat_with_vision
Analyze images with natural language queries.
Parameters:
- prompt (required): Your question about the image(s)
- image_paths: List of local image file paths
- image_urls: List of image URLs
- detail: "auto", "low", or "high"
- model: Vision-capable model (default: "grok-4-0709")
Supported formats: JPG, JPEG, PNG
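Local files passed via image_paths are typically base64-encoded into data URLs before being sent to a vision model. A minimal sketch of that step (the helper name is hypothetical, not part of this server's code):

```python
import base64
import mimetypes

def to_data_url(path: str, data: bytes) -> str:
    # Guess the MIME type from the filename; fall back to PNG.
    mime = mimetypes.guess_type(path)[0] or "image/png"
    b64 = base64.b64encode(data).decode("ascii")
    return f"data:{mime};base64,{b64}"

# Tiny fake JPEG payload, just to show the resulting URL shape.
url = to_data_url("photo.jpg", b"\xff\xd8\xff\xe0")
```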
5. generate_image
Create images from text descriptions.
Parameters:
- prompt (required): Image description
- n: Number of images to generate (default: 1)
- response_format: "url" or "b64_json"
- model: Image generation model (default: "grok-2-image-1212")
Returns: Generated images and revised prompt
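When response_format is "b64_json", the result can be decoded back into raw image bytes on the client side. A minimal sketch, assuming an OpenAI-style b64_json field (not verified against this server's exact return shape):

```python
import base64

def decode_b64_image(b64_json: str) -> bytes:
    """Decode a b64_json image payload into raw bytes ready to write to disk."""
    return base64.b64decode(b64_json)

# Round-trip a fake PNG header just to show the data shape.
raw = decode_b64_image(base64.b64encode(b"\x89PNG\r\n\x1a\n").decode("ascii"))
```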
6. live_search
Search the web in real-time with source citations.
Parameters:
- prompt (required): Your search query
- model: Model to use (default: "grok-4")
- mode: "on" or "off"
- return_citations: Include source citations (default: true)
- from_date, to_date: Date range (YYYY-MM-DD)
- max_search_results: Max results to fetch (default: 20)
- country: Country code for localized search
- rss_links: List of RSS feed URLs to search
- sources: Custom source configuration
Returns: Content, citations, usage stats, and number of sources used
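Under the hood, a live_search call plausibly maps onto a chat request carrying xAI's search_parameters object. The sketch below builds such a request body; field names follow xAI's documented Live Search parameters, but treat the exact shape as an assumption rather than this server's verified internals:

```python
# Build a hypothetical xAI Live Search request body from live_search-style
# arguments. Field names mirror xAI's search_parameters object (assumed).
def build_search_request(prompt, mode="on", return_citations=True,
                         from_date=None, to_date=None,
                         max_search_results=20, rss_links=None):
    search = {
        "mode": mode,
        "return_citations": return_citations,
        "max_search_results": max_search_results,
    }
    if from_date:
        search["from_date"] = from_date
    if to_date:
        search["to_date"] = to_date
    if rss_links:
        # RSS feeds are expressed as a source entry with their links.
        search["sources"] = [{"type": "rss", "links": rss_links}]
    return {
        "model": "grok-4",
        "messages": [{"role": "user", "content": prompt}],
        "search_parameters": search,
    }

req = build_search_request("xAI news this week", from_date="2025-01-01")
```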
7. stateful_chat
Maintain conversation state across multiple requests on xAI servers.
Parameters:
- prompt (required): Your message
- response_id: Previous response ID to continue conversation
- model: Model to use (default: "grok-4")
- system_prompt: System instruction (only for new conversations)
- include_reasoning: Include reasoning summary
- temperature, max_tokens
Returns: Response with ID for continuing the conversation (stored for 30 days)
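The typical pattern is to thread the returned ID into the next call. A sketch of that loop, where call_tool is a hypothetical stand-in for whatever tool-invocation function your MCP client exposes:

```python
# Continue a stateful conversation by passing the previous response ID.
# `call_tool` is a hypothetical client-side function, not part of this server.
def continue_conversation(call_tool, prompt, response_id=None):
    args = {"prompt": prompt, "model": "grok-4"}
    if response_id is not None:
        args["response_id"] = response_id  # resume the stored conversation
    return call_tool("stateful_chat", args)

# Fake client for illustration: echoes the prompt and returns a fixed ID.
def fake_call_tool(name, args):
    return {"id": "resp_123", "content": f"echo: {args['prompt']}"}

first = continue_conversation(fake_call_tool, "Hello")
second = continue_conversation(fake_call_tool, "And then?", response_id=first["id"])
```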
8. retrieve_stateful_response
Retrieve a previously stored conversation response.
Parameters:
response_id(required): The response ID to retrieve
9. delete_stateful_response
Delete a stored conversation from xAI servers.
Parameters:
response_id(required): The response ID to delete
Roadmap
- Add Docker support
- Fix the chat vision tool
📄 License
This project is open source and available under the MIT License.