Vectra AI MCP Server
This project implements an MCP server for the Vectra AI Platform.
What is Vectra AI MCP?
An MCP server that connects AI assistants to your Vectra AI security platform, enabling intelligent analysis of threat detection data, security insights, and automated incident response workflows. Compatible with Claude, ChatGPT, Cursor, VS Code and other MCP-enabled AI tools.
<p align="center"> <img src="assets/mcp-diagram.png" alt="mcp-diagram" width="60%" align="center"/> </p>
What can you do with Vectra AI MCP?
- Investigate threats in natural language
- Take response actions in Vectra directly from your AI agent
- Correlate and analyze security data using prompts
- Dynamically build advanced visualizations for analysis
- Generate investigation reports from natural language
Setup - Host Locally
Prerequisites
- Install Python (check the .python-version file for the required version)
- Install uv, the Python package manager:
# On macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# On Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
# Or via pip
pip install uv
Setup Steps
- Clone/Download the project to your local machine
- Navigate to the project directory:
cd your-project-directory
- Configure environment variables:
# Copy the example environment file
cp .env.example .env
Then edit the .env file with your actual Vectra AI Platform credentials. Required variables to update:
- VECTRA_BASE_URL: Your Vectra portal URL
- VECTRA_CLIENT_ID: Your client ID from Vectra
- VECTRA_CLIENT_SECRET: Your client secret from Vectra
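As a sketch, a filled-in .env might look like the following (all values are placeholders, and the URL shape is illustrative; use the portal URL and API credentials from your own Vectra tenant):

```shell
# .env - Vectra AI Platform credentials (placeholder values only)
VECTRA_BASE_URL=https://your-tenant.portal.vectra.ai
VECTRA_CLIENT_ID=your-client-id
VECTRA_CLIENT_SECRET=your-client-secret
```

Keep this file out of version control, since it holds live credentials.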
- Create and activate a virtual environment:
uv venv
# Activate it:
# On macOS/Linux:
source .venv/bin/activate
# On Windows:
.venv\Scripts\activate
- Install dependencies:
uv sync
This will install all dependencies specified in pyproject.toml using the exact versions from uv.lock.
- Run the application:
The server supports multiple transport protocols:
# Run with stdio transport (default, for Claude Desktop)
python server.py
python server.py --transport stdio
# Run with SSE transport (for HTTP-based MCP clients)
python server.py --transport sse --host 0.0.0.0 --port 8000
# Run with streamable-http transport (for production HTTP deployments)
python server.py --transport streamable-http --host 0.0.0.0 --port 8000
# Enable debug logging
python server.py --debug
Transport Options:
- stdio: Standard input/output communication (default, used by Claude Desktop)
- sse: Server-Sent Events over HTTP (good for web-based clients)
- streamable-http: Streamable HTTP transport (recommended for production HTTP deployments)
Environment Variables: You can also configure the server using environment variables:
export VECTRA_MCP_TRANSPORT=streamable-http
export VECTRA_MCP_HOST=0.0.0.0
export VECTRA_MCP_PORT=8000
export VECTRA_MCP_DEBUG=true
python server.py
MCP Configuration for Claude Desktop
- Add MCP Server to Claude Desktop:
# On macOS:
# Open Claude Desktop configuration file
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
# On Windows:
# Open Claude Desktop configuration file
notepad %APPDATA%\Claude\claude_desktop_config.json
Add the following configuration to the mcpServers section (update the paths to match your setup):
{
"mcpServers": {
"vectra-ai-mcp": {
"command": "/path/to/your/uv/binary",
"args": [
"--directory",
"/path/to/your/project/directory",
"run",
"server.py"
]
}
}
}
Example with actual paths:
{
"mcpServers": {
"vectra-ai-mcp": {
"command": "/Users/yourusername/.local/bin/uv",
"args": [
"--directory",
"/Users/yourusername/path/to/vectra-mcp-project",
"run",
"server.py"
]
}
}
}
- Debug - Find your uv installation path:
# Find where uv is installed
which uv
# or
where uv
- Debug - Get your project's absolute path:
# From your project directory, run:
pwd
- Restart Claude Desktop to load the new MCP server configuration.
Once configured, you should be able to use Vectra AI Platform capabilities directly within Claude Desktop or other MCP clients through this MCP server!
Other MCP Client Setup
For MCP clients besides Claude Desktop, refer to the documentation links below:
| MCP Client | Documentation Link |
|---|---|
| General MCP Setup | https://modelcontextprotocol.io/quickstart/user |
| Cursor | https://docs.cursor.com/en/context/mcp#using-mcp-json |
| VS Code | https://code.visualstudio.com/docs/copilot/chat/mcp-servers#_add-an-mcp-server |
For clients not listed above, refer to their respective documentation. The general pattern is similar: specify the command and arguments that launch the MCP server, using the same configuration structure.
Setup - Docker Deployment
For production deployments or easier setup, you can run the Vectra AI MCP Server using Docker. We provide two options:
Option 1: Using Pre-built Images (Recommended)
The easiest way to get started is using our pre-built Docker images from GitHub Container Registry.
Prerequisites
- Docker Desktop or Docker Engine
Quick Start Steps
- Configure environment variables:
# Copy the example environment file
cp .env.example .env
Then edit the .env file with your actual Vectra AI Platform credentials.
- Run with pre-built image:
Streamable HTTP Transport (Recommended for Production)
docker run -d \
--name vectra-mcp-server-http \
--env-file .env \
-e VECTRA_MCP_TRANSPORT=streamable-http \
-e VECTRA_MCP_HOST=0.0.0.0 \
-e VECTRA_MCP_PORT=8000 \
-p 8000:8000 \
--restart unless-stopped \
ghcr.io/vectra-ai-research/vectra-ai-mcp-server:latest
SSE Transport (Server-Sent Events)
docker run -d \
--name vectra-mcp-server-sse \
--env-file .env \
-e VECTRA_MCP_TRANSPORT=sse \
-e VECTRA_MCP_HOST=0.0.0.0 \
-e VECTRA_MCP_PORT=8000 \
-p 8000:8000 \
--restart unless-stopped \
ghcr.io/vectra-ai-research/vectra-ai-mcp-server:latest
Stdio Transport (For Local MCP Clients)
docker run -d \
--name vectra-mcp-server-stdio \
--env-file .env \
-e VECTRA_MCP_TRANSPORT=stdio \
--restart unless-stopped \
ghcr.io/vectra-ai-research/vectra-ai-mcp-server:latest
- Or use Docker Compose (Alternative):
Create a docker-compose.yml file:
version: '3.8'
services:
vectra-mcp-server:
image: ghcr.io/vectra-ai-research/vectra-ai-mcp-server:latest
container_name: vectra-mcp-server
env_file: .env
environment:
- VECTRA_MCP_TRANSPORT=streamable-http
- VECTRA_MCP_HOST=0.0.0.0
- VECTRA_MCP_PORT=8000
ports:
- "8000:8000"
restart: unless-stopped
Then run:
docker-compose up -d
Available Tags:
- latest: Latest stable build from the main branch
- main: Latest build from the main branch (same as latest)
- v*: Specific version tags (e.g., v1.0.0)
💡 Tip: Pre-built images are automatically built and published via GitHub Actions whenever code is pushed to the main branch or when releases are tagged. This ensures you always get the latest tested version without needing to build locally.
Option 2: Build from Source
For development or customization, you can build the Docker image from source.
Prerequisites
- Install Docker and Docker Compose
- Docker Desktop (includes Docker Compose)
- Or install Docker Engine and Docker Compose separately on Linux
Build from Source Steps
- Clone/Download the project to your local machine
- Navigate to the project directory:
cd your-project-directory
- Configure environment variables:
# Copy the example environment file
cp .env.example .env
Then edit the .env file with your actual Vectra AI Platform credentials.
- Build and run with Docker:
# Build the image
docker build -t vectra-mcp-server .
- Run the locally built image:
Choose your transport mode and run with the locally built image:
Streamable HTTP Transport
docker run -d \
--name vectra-mcp-server-http \
--env-file .env \
-e VECTRA_MCP_TRANSPORT=streamable-http \
-e VECTRA_MCP_HOST=0.0.0.0 \
-e VECTRA_MCP_PORT=8000 \
-p 8000:8000 \
--restart unless-stopped \
vectra-mcp-server
SSE Transport
docker run -d \
--name vectra-mcp-server-sse \
--env-file .env \
-e VECTRA_MCP_TRANSPORT=sse \
-e VECTRA_MCP_HOST=0.0.0.0 \
-e VECTRA_MCP_PORT=8000 \
-p 8000:8000 \
--restart unless-stopped \
vectra-mcp-server
Stdio Transport
docker run -d \
--name vectra-mcp-server-stdio \
--env-file .env \
-e VECTRA_MCP_TRANSPORT=stdio \
--restart unless-stopped \
vectra-mcp-server
Docker Environment Variables
The Docker container supports all the same environment variables as the local setup, plus additional MCP server configuration:
MCP Server Configuration
- VECTRA_MCP_TRANSPORT: Transport protocol (stdio, sse, or streamable-http) - default: stdio
- VECTRA_MCP_HOST: Host to bind to for HTTP transports - default: 0.0.0.0
- VECTRA_MCP_PORT: Port for HTTP transports - default: 8000
- VECTRA_MCP_DEBUG: Enable debug logging - default: false
Accessing the HTTP Server
When running with HTTP transports (sse or streamable-http), the MCP server will be available at:
- Streamable HTTP: http://localhost:8000/mcp
- SSE: http://localhost:8000/sse
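As a small sanity check, the client-facing endpoint can be derived from the same VECTRA_MCP_* variables the server reads. This is a sketch assuming the defaults documented above; note that a server bound to 0.0.0.0 is still reached by local clients via localhost:

```shell
# Derive the client-facing endpoint URL from the server's transport settings.
# Uses the documented defaults when the variables are unset.
TRANSPORT="${VECTRA_MCP_TRANSPORT:-streamable-http}"
PORT="${VECTRA_MCP_PORT:-8000}"

# Map the transport to its endpoint path (stdio has no HTTP endpoint).
if [ "$TRANSPORT" = "sse" ]; then
  ENDPOINT="http://localhost:${PORT}/sse"
else
  ENDPOINT="http://localhost:${PORT}/mcp"
fi
echo "$ENDPOINT"
```

You can then point curl or an HTTP-based MCP client at the printed URL to confirm the server is reachable.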
MCP Client Configuration for Docker
For HTTP-based MCP clients connecting to the Dockerized server, use the appropriate endpoint:
{
"mcpServers": {
"vectra-ai-mcp": {
"transport": {
"type": "http",
        "url": "http://localhost:8000/mcp"
}
}
}
}
Docker Health Checks
The Docker container includes health checks that will verify the server is running properly:
- For stdio transport: always reports healthy (no HTTP endpoint to check)
- For HTTP transports: checks HTTP endpoint availability
Note: MCP (Model Context Protocol) is an emerging and rapidly evolving technology. Exercise caution when using this server and follow security best practices, including proper credential management and network security measures.