🚀 ⚡️ k6-mcp-server
A Model Context Protocol (MCP) server implementation for running k6 load tests.
✨ Features
- Simple integration with Model Context Protocol framework
- Support for custom test durations and virtual users (VUs)
- Easy-to-use API for running k6 load tests
- Configurable through environment variables
- Real-time test execution output
🔧 Prerequisites
Before you begin, ensure you have the following installed:
- Python 3.12 or higher
- k6 load testing tool (Installation guide)
- uv package manager (Installation guide)
📦 Installation
- Clone the repository:
git clone https://github.com/qainsights/k6-mcp-server.git
- Install the required dependencies:
uv pip install -r requirements.txt
- Set up environment variables (optional):
Create a .env file in the project root:
K6_BIN=/path/to/k6 # Optional: defaults to 'k6' in system PATH
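For reference, here is a minimal sketch of how the server might resolve this variable, assuming it loads the .env file with python-dotenv and falls back to the k6 binary on the PATH (the actual k6_server.py may differ):

```python
import os

from dotenv import load_dotenv  # assumes python-dotenv is among the dependencies

load_dotenv()  # read .env from the project root, if present

# Use K6_BIN when set, otherwise fall back to the k6 binary on the system PATH
K6_BIN = os.getenv("K6_BIN", "k6")
```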
🚀 Getting Started
- Create a k6 test script (e.g., test.js):
import http from "k6/http";
import { sleep } from "k6";

export default function () {
  http.get("http://test.k6.io");
  sleep(1);
}
- Configure the MCP server in your favorite MCP client (Claude Desktop, Cursor, Windsurf, and more) using the configuration below:
{
  "mcpServers": {
    "k6": {
      "command": "/path/to/bin/uv",
      "args": [
        "--directory",
        "/path/to/k6-mcp-server",
        "run",
        "k6_server.py"
      ]
    }
  }
}
- Now ask the LLM to run a test, e.g. run k6 test for hello.js. The k6 MCP server will use one of the tools below to start the test:
  - execute_k6_test: Run a test with the default options (30s duration, 10 VUs)
  - execute_k6_test_with_options: Run a test with a custom duration and number of VUs
📝 API Reference
Execute K6 Test
execute_k6_test(
    script_file: str,
    duration: str = "30s",  # Optional
    vus: int = 10           # Optional
)
Execute K6 Test with Custom Options
execute_k6_test_with_options(
    script_file: str,
    duration: str,
    vus: int
)
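To illustrate how these tools map onto the k6 CLI, here is a minimal sketch of a tool implementation, assuming the server is built on FastMCP from the official MCP Python SDK and shells out to k6 run with the --duration and --vus flags (the real k6_server.py may be structured differently):

```python
import os
import subprocess

from mcp.server.fastmcp import FastMCP  # assumes the official MCP Python SDK

K6_BIN = os.getenv("K6_BIN", "k6")  # optional override, defaults to k6 on PATH

mcp = FastMCP("k6")


@mcp.tool()
def execute_k6_test_with_options(script_file: str, duration: str, vus: int) -> str:
    """Run a k6 test with a custom duration and number of virtual users."""
    result = subprocess.run(
        [K6_BIN, "run", "--duration", duration, "--vus", str(vus), script_file],
        capture_output=True,
        text=True,
    )
    # Return both streams so the LLM can analyze the full k6 output
    return result.stdout + result.stderr


if __name__ == "__main__":
    mcp.run(transport="stdio")
```

With a setup along these lines, a prompt such as "run k6 test for hello.js with 50 VUs for 2m" would be routed by the MCP client to execute_k6_test_with_options.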
✨ Use cases
- LLM-powered analysis of test results
- Effective debugging of load tests
🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.