Lakeflow MCP Server
Enables AI agents to manage Databricks Lakeflow jobs by building and uploading Python wheels and triggering runs with specific arguments. It provides a structured way to orchestrate complex data experiments and monitor execution directly on Databricks clusters.
Launching jobs on Lakeflow
This tool is an opinionated way to spawn compute jobs on the cloud. By "compute
job", I mean a massively parallel data processing job like training a deep net,
analyzing a large corpus of text that's sitting in an S3 bucket, or 1000
parallel simulations of something. To let you do these things, this package
asks you to author your code as a Python package and forces you to specify your
package dependencies in a pyproject.toml. It then uploads that package (as a
Python wheel) for Databricks to execute.
This is heavier-weight than Databricks' built-in notebook approach of editing a Python script in their web UI. In return, it lets you capture large package dependencies across repos via git submodules, and import third-party packages via uv. It's lighter-weight than most other job submission systems because it doesn't require you to build Docker containers. Docker containers take a large snapshot of your system, enough to build a full Unix environment. These snapshots are on the order of gigabytes and difficult to upload from a home computer. For most of our work, wheels provide all the containerization we need (a wheel is a few kilobytes).
It has one more opinion: that uv is a good way to capture those Python
dependencies, with a pyproject.toml. We're also exploring
Pants as a way to manage more complex packages. Pants can also export wheels, so nothing in this design prevents us from adopting Pants.
You can use this tool to build your wheel, upload it to Databricks, spawn copies of it (each with different command-line arguments), and track your jobs' status. You can also use the Databricks UI to check the state of your jobs. The tool provides several interfaces:
- An MCP server so you can have AIs spawn jobs for you.
- A CLI you can use from the shell.
- A programmatic Python interface you can call from a Python program.
Getting access to Databricks
Check if you have access to Databricks by visiting this URL. If you get stuck in an infinite loop where Databricks sends you a code that doesn't work, it means you don't have an account. Ask for one in #help-data-platform.
Your package's structure
This package assumes the package you want to run on the cluster has a
structure like this and that it can be run with uv run:
my_project/
├── pyproject.toml
└── src/
    └── my_package/
        ├── __init__.py
        └── my_package_py.py
It also assumes you've added an entry point to your pyproject.toml called
"lakeflow-task". If your package is called my_package, and it has a driver
script called my_package_py.py, and the main function in this script is called
main, you would define the "lakeflow-task" entry point like this:
[project.scripts]
lakeflow-task = "my_package.my_package_py:main"
The package lakeflow_demo under this directory gives you a
concrete example of how to set up a package.
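To make the entry point concrete, a minimal driver script might look like the sketch below. The body is hypothetical and is not the lakeflow_demo code; it only shows that main takes no parameters and reads its inputs from the command line, which is how arguments are delivered when runs are triggered (see the CLI section below).

# src/my_package/my_package_py.py -- minimal sketch
import sys

def main() -> None:
    # Arguments supplied when a run is triggered arrive as ordinary
    # command-line arguments.
    args = sys.argv[1:]
    print(f"my_package started with arguments: {args}")
    # ... your data-processing logic goes here ...

if __name__ == "__main__":
    main()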
Building and launching your package with the CLI
To run the package on the cluster, first build the wheel, then upload it, then tell Databricks to run it.
- Create the job from source:

  You can use create-job-from-source to build, upload, and create the job:

  uv run lakeflow.py create-job-from-source \
      "my-lakeflow-job" \
      "my-package" \
      --target ~/my_project \
      --max-workers 4 \
      --secret-env-var MY_SECRET_KEY --secret-env-var MY_OTHER_SECRET_KEY

  This returns the job ID, which we'll use in the next step. This doesn't yet run any jobs; it just starts a cluster that can run them. The --max-workers argument sets the maximum number of workers for autoscaling. You can also pass environment variables to the remote job without leaking secrets (like API keys) through your command line: the tool reads the values from your local environment and uploads them to Databricks Secrets. The job can access these secrets using the Databricks dbutils API, with its own package name as the scope (see the sketch after this list).

- Start the job:

  uv run lakeflow.py trigger-run 123456 arg11 arg12
  uv run lakeflow.py trigger-run 123456 arg21 arg22
  uv run lakeflow.py trigger-run 123456 arg31 arg32

  This starts three instances of the job with three different sets of arguments. You can have the arguments refer to different shards of data, and kick off as many parallel jobs as you want. Your job can retrieve these arguments through argv, and its run ID from the environment variable DATABRICKS_RUN_ID (also shown in the sketch after this list).

- Monitor the runs:

  uv run lakeflow.py list-job-runs 123456

  This lists the runs for the given job ID.

- Get run logs:

  uv run lakeflow.py get-run-logs 987654321

  This retrieves the logs for a specific run ID. It takes the run ID returned by trigger-run.
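Inside the running task, the run ID and any uploaded secrets could be read roughly as in the sketch below. This is only a sketch: the databricks.sdk.runtime import and the "my-package" scope name are assumptions; the README only states that the scope is the package's own name and that the run ID comes from DATABRICKS_RUN_ID.

import os
from databricks.sdk.runtime import dbutils  # assumed to be available on the Databricks cluster

def read_run_context() -> None:
    # Run ID for this particular run, exposed as an environment variable.
    run_id = os.environ.get("DATABRICKS_RUN_ID")
    # A secret uploaded via --secret-env-var, scoped under the package name
    # (assumed here to be "my-package").
    my_secret = dbutils.secrets.get(scope="my-package", key="MY_SECRET_KEY")
    print(f"run {run_id} can read a secret of length {len(my_secret)}")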
Using the Python programmatic interface
The sections above showed how to use the CLI. You might find it easier to use the programmatic Python interface to the package instead. See run_lakeflow_demo.py for an example.
Using the MCP server
You can install this package as an MCP server. To do that, add this to ~/.cursor/mcp.json:
{
"mcpServers": {
"lakeflow": {
"command": "uv",
"args": [
"run",
"--quiet",
"--directory",
"/path/to/lakeflow-mcp",
"python",
"lakeflow.py"
],
"env": {
"DATABRICKS_HOST": "https://hims-machine-learning-staging-workspace.cloud.databricks.com",
"DATABRICKS_TOKEN": "<your token>"
}
},
...
}
}
Then you can ask the agent to do things like this:
let's launch 4 copies of this job on lakeflow, and pass them the arguments "fi", "fie", "fo", and "fum" respectively.