MLflow MCP Server

A Model Context Protocol (MCP) server that provides seamless integration with MLflow, enabling AI assistants to interact with MLflow experiments, runs, and registered models.

Overview

This MCP server exposes MLflow functionality through a standardized protocol, allowing AI assistants like Claude to:

  • Browse and search MLflow experiments
  • Retrieve experiment runs and their details
  • Query registered models and model versions
  • Access metrics, parameters, and metadata

Features

Experiment Management

  • Get Experiment: Retrieve experiment details by ID
  • Get Experiment by Name: Find experiments by name
  • Search Experiments: List and filter experiments with pagination support

Run Management

  • Get Run: Retrieve detailed run information including metrics, parameters, and tags
  • Get Experiment Runs: List all runs for a specific experiment
  • Run Type Detection: Automatically identifies parent, child, or standalone runs

Model Registry

  • Get Registered Models: Search and list registered models
  • Get Model Versions: Browse model versions with filtering capabilities

Installation

Prerequisites

  • Python 3.11 or higher
  • uv package manager

Setup

  1. Clone the repository:

git clone <repository-url>
cd mlflow-mcp-server

  2. Install dependencies:

uv sync

Configuration

MLflow Connection

The server is pre-configured to connect to your internal MLflow instance:

  • Tracking URI: YOUR URI

To use with a different MLflow instance, modify mlflow_mcp_server/utils/mlflow_client.py:

import mlflow
from mlflow import MlflowClient

mlflow.set_tracking_uri("your-mlflow-tracking-uri")
client = MlflowClient()
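
Alternatively, since the MCP configuration below supplies the tracking URI through the MLFLOW_TRACKING_URI environment variable, the client can read it from the environment rather than hard-coding it. This is a minimal sketch, not the project's actual mlflow_client.py; the localhost fallback is an illustrative assumption:

import os

import mlflow
from mlflow import MlflowClient

# Prefer the URI supplied by the MCP client configuration; the fallback
# value below is only an illustrative placeholder.
tracking_uri = os.environ.get("MLFLOW_TRACKING_URI", "http://localhost:5000")
mlflow.set_tracking_uri(tracking_uri)
client = MlflowClient()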

MCP Configuration

Add the following configuration to your MCP client (e.g., ~/.cursor/mcp.json for Cursor):

{
  "mcpServers": {
    "mlflow": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/yesid-lopez/mlflow-mcp-server", "mlflow_mcp_server"],
      "env": {
        "MLFLOW_TRACKING_URI": "YOUR_TRACKING_URI"
      }
    }
  }
}

Replace YOUR_TRACKING_URI with the tracking URI of your MLflow instance.

Usage

Running the Server

uv run -m mlflow_mcp_server
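
Alternatively, the server can be launched directly from the Git source with uvx, mirroring the MCP configuration above (this assumes uvx is available on your PATH):

uvx --from git+https://github.com/yesid-lopez/mlflow-mcp-server mlflow_mcp_server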

Available Tools

Once configured, the following tools become available to your AI assistant; a sketch of the underlying MLflow calls follows the list:

Experiment Tools

  • get_experiment(experiment_id: str) - Get experiment details by ID
  • get_experiment_by_name(experiment_name: str) - Get experiment by name
  • search_experiments(name?: str, token?: str) - Search experiments with optional filtering

Run Tools

  • get_run(run_id: str) - Get detailed run information
  • get_experiment_runs(experiment_id: str, token?: str) - List runs for an experiment

Model Registry Tools

  • get_registered_models(model_name?: str, token?: str) - Search registered models
  • get_model_versions(model_name?: str, token?: str) - Browse model versions
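
These tools wrap standard MLflow client calls. The snippet below is only a rough illustration of that mapping, not the server's actual implementation; the experiment name is a hypothetical placeholder:

from mlflow import MlflowClient

client = MlflowClient()  # uses the tracking URI configured earlier

# Roughly what get_experiment_by_name + get_experiment_runs do under the hood.
experiment = client.get_experiment_by_name("recommendation-models")
if experiment is not None:
    runs = client.search_runs([experiment.experiment_id], max_results=5)
    for run in runs:
        print(run.info.run_id, run.data.metrics, run.data.params)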

Example Usage with AI Assistant

You can now ask your AI assistant questions like:

  • "Show me all experiments containing 'recommendation' in the name"
  • "Get the details of run ID abc123 including its metrics and parameters"
  • "List all registered models and their latest versions"
  • "Find experiments related to customer segmentation"

Development

Project Structure

mlflow-mcp-server/
├── mlflow_mcp_server/
│   ├── __main__.py          # Server entry point
│   ├── server.py            # Main MCP server configuration
│   ├── tools/               # MLflow integration tools
│   │   ├── experiment_tools.py
│   │   ├── run_tools.py
│   │   └── registered_models.py
│   └── utils/
│       └── mlflow_client.py # MLflow client configuration
├── pyproject.toml           # Project dependencies
└── README.md

Dependencies

  • mcp[cli]: Model Context Protocol framework
  • mlflow: MLflow client library
  • pydantic: Data validation and serialization

Adding New Tools

To add new MLflow functionality:

  1. Create a new function in the appropriate tool file
  2. Add the tool to server.py (see the example after this list):
    from mlflow_mcp_server.tools.your_module import your_function
    mcp.add_tool(your_function)
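
For illustration, a new tool might look like the sketch below. The module name, the tool itself, and the assumption that utils/mlflow_client.py exposes a module-level client are hypothetical, not part of the existing project:

# mlflow_mcp_server/tools/metric_tools.py (hypothetical module)
from mlflow_mcp_server.utils.mlflow_client import client  # assumes the shared MlflowClient instance is importable

def get_run_metric(run_id: str, metric_name: str) -> float | None:
    """Return a single metric value for a run, or None if the metric is not logged."""
    run = client.get_run(run_id)
    return run.data.metrics.get(metric_name)

# server.py (hypothetical registration)
from mlflow_mcp_server.tools.metric_tools import get_run_metric
mcp.add_tool(get_run_metric)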
    

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

License

[Add your license information here]

Support

For issues and questions:

  • Check existing issues in the repository
  • Create a new issue with detailed reproduction steps
