mcp-labrat
Voice + MCP calendar demo.
This project is a small Express app that:
- Hosts an MCP server (Streamable HTTP transport) that exposes calendar tools backed by a CalDAV server
- Hosts an HTTP “MCP client” endpoint that sends a prompt to an OpenAI-compatible Chat Completions API and lets the model call those MCP tools
- Serves a simple browser UI (in `public/`) that records audio, transcribes it (Whisper API), and sends the resulting prompt to the MCP client endpoint
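In essence, the browser side records with `MediaRecorder` and posts the clip to the client endpoint. A rough sketch of that flow (not the actual `public/` code; the function name and the fixed recording duration are illustrative):

```typescript
// Illustrative browser-side sketch: record a short voice command and send it
// to the MCP client endpoint as multipart/form-data (field name "audio").
async function recordAndSend(durationMs = 5000): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: "audio/webm" });
  const chunks: BlobPart[] = [];

  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = async () => {
    stream.getTracks().forEach((t) => t.stop());

    const form = new FormData();
    form.append("audio", new Blob(chunks, { type: "audio/webm" }), "command.webm");
    form.append("timezone", Intl.DateTimeFormat().resolvedOptions().timeZone);

    const res = await fetch("/api/v1/client", { method: "POST", body: form });
    const { answer, toolCalls } = await res.json();
    console.log(`Assistant (${toolCalls} tool calls): ${answer}`);
  };

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
}
```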
Requirements
- Node.js 18+ (Node 20+ recommended)
- A CalDAV server + credentials (or accept the defaults, but tools won’t be useful)
- An OpenAI-compatible API endpoint for:
  - Chat Completions (`/v1/chat/completions`)
  - Audio Transcriptions (`/v1/audio/transcriptions`)
In this lab, `OPENAI_PROXY_URL` typically points to a proxy service (not OpenAI directly). The proxy is responsible for adding any required authentication (e.g. API keys) to outgoing requests.
Because auth is handled by the proxy, the server uses direct HTTP requests (via `fetchData`) instead of the official OpenAI SDK.
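For illustration, a chat request through the proxy is then just a plain `fetch` against the base URL. A minimal sketch (not the app's actual `fetchData` helper); note the absence of an `Authorization` header, since the proxy supplies credentials:

```typescript
// Sketch: calling the proxy's Chat Completions endpoint directly with fetch.
// The proxy injects API keys, so no Authorization header is set here.
async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${process.env.OPENAI_PROXY_URL}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: process.env.OPENAI_MODEL ?? "gpt-4o",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Chat completion failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```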
Quick start
- Install

  ```bash
  npm install
  ```

- Create `.env`

  ```bash
  cp .env-sample .env
  ```

  Then edit `.env` and set `OPENAI_PROXY_URL` to your proxy/base URL (e.g. `http://localhost:1234` or your deployed proxy).

- Run the dev server

  ```bash
  npm run dev
  ```

- Open the demo UI

  - Visit http://localhost:3000/
  - Click “Start Recording”, speak a command, then click “Stop Recording”
  - Click “Send”
Environment variables
Create a .env file in the project root.
Server
- `PORT` (optional, default: `3000`)
- `NODE_ENV` (optional, e.g. `development`)
MCP client + OpenAI-compatible API
- `MCP_SERVER_URL` (required)
  - Example: `http://localhost:3000/api/v1/mcp`
  - This can point back to the same server instance.
- `OPENAI_PROXY_URL` (required)
  - Base URL of your OpenAI-compatible proxy (the app calls `/v1/chat/completions` and `/v1/audio/transcriptions` under it)
  - Example: `http://localhost:1234`
- `OPENAI_MODEL` (optional, default: `gpt-4o`)
Audio transcription (Whisper)
- `OPENAI_TRANSCRIPTION_MODEL` (optional, default: `whisper-1`)
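As an illustration of what this setting feeds into, the transcription call is a multipart POST to `/v1/audio/transcriptions` under the proxy. A sketch assuming Node 18+ globals (`fetch`, `FormData`, `Blob`), not the app's actual implementation:

```typescript
import { readFile } from "node:fs/promises";

// Sketch: forward an uploaded audio file to the proxy's transcription endpoint.
// The proxy adds credentials, so no Authorization header is needed.
async function transcribe(filePath: string): Promise<string> {
  const form = new FormData();
  form.append("model", process.env.OPENAI_TRANSCRIPTION_MODEL ?? "whisper-1");
  form.append("file", new Blob([await readFile(filePath)]), "command.webm");

  const res = await fetch(`${process.env.OPENAI_PROXY_URL}/v1/audio/transcriptions`, {
    method: "POST",
    body: form,
  });
  if (!res.ok) throw new Error(`Transcription failed: ${res.status}`);
  const data = (await res.json()) as { text: string };
  return data.text;
}
```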
CalDAV (calendar backing store)
- `CALDAV_SERVER_URL` (optional, default: `http://localhost:5232/`)
- `CALDAV_USERNAME` (optional, default: `username`)
- `CALDAV_PASSWORD` (optional, default: `password`)
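For reference, a complete `.env` built from the defaults and example values above might look like this (adjust the proxy URL and CalDAV credentials for your setup):

```ini
PORT=3000
MCP_SERVER_URL=http://localhost:3000/api/v1/mcp
OPENAI_PROXY_URL=http://localhost:1234
OPENAI_MODEL=gpt-4o
OPENAI_TRANSCRIPTION_MODEL=whisper-1
CALDAV_SERVER_URL=http://localhost:5232/
CALDAV_USERNAME=username
CALDAV_PASSWORD=password
```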
Radicale (dev CalDAV server)
For local development you can run a minimal Radicale CalDAV server with no authentication.
macOS / Linux
- Install Radicale

  ```bash
  python3 -m pip install --user radicale
  ```

- Create a local config + data folder

  ```bash
  mkdir -p radicale-data/collections
  cat > radicale.config <<'EOF'
  [server]
  hosts = 127.0.0.1:5232
  [auth]
  type = none
  [storage]
  filesystem_folder = ./radicale-data/collections
  EOF
  ```

- Run Radicale

  ```bash
  radicale --config ./radicale.config
  ```
Windows (PowerShell)
- Install Radicale

  ```powershell
  py -m pip install --user radicale
  ```

- Create a local config + data folder

  ```powershell
  New-Item -ItemType Directory -Force -Path .\radicale-data\collections | Out-Null
  @'
  [server]
  hosts = 127.0.0.1:5232
  [auth]
  type = none
  [storage]
  filesystem_folder = ./radicale-data/collections
  '@ | Set-Content -Encoding UTF8 .\radicale.config
  ```

- Run Radicale

  ```powershell
  py -m radicale --config .\radicale.config
  ```
- Point this app to Radicale

  In `.env`:

  - `CALDAV_SERVER_URL=http://localhost:5232/`
  - `CALDAV_USERNAME=anything` (ignored when `auth` is `none`)
  - `CALDAV_PASSWORD=anything` (ignored when `auth` is `none`)
Scripts
- `npm run dev` – run the server with nodemon + ts-node
- `npm run build` – compile TypeScript to `dist/`
- `npm start` – run the compiled server (`dist/index.js`)
API
Base path is /api/v1.
Health
GET /api/v1/
Returns a simple JSON message.
MCP server
POST /api/v1/mcp
Implements the MCP Streamable HTTP transport.
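The wiring itself is not shown in this README, but a stateless Streamable HTTP endpoint is typically mounted in Express roughly as below. This is a sketch using the TypeScript MCP SDK; `registerCalendarTools` is a hypothetical stand-in for the actual tool registration:

```typescript
import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const app = express();
app.use(express.json());

app.post("/api/v1/mcp", async (req, res) => {
  // Stateless mode: a fresh server/transport pair per request, no session ids.
  const server = new McpServer({ name: "mcp-labrat", version: "0.1.0" });
  // registerCalendarTools(server); // hypothetical: listEvents, getEventsInTimeSlot, createEvent

  const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
  res.on("close", () => {
    transport.close();
    server.close();
  });

  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(Number(process.env.PORT ?? 3000));
```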
MCP client
POST /api/v1/client
Accepts either:
- JSON (text prompt)

  ```bash
  curl -sS \
    -H 'Content-Type: application/json' \
    -d '{"prompt":"List my events","timezone":"Europe/Helsinki"}' \
    http://localhost:3000/api/v1/client
  ```

- `multipart/form-data` (audio upload)

  The field name must be `audio`.

  ```bash
  curl -sS \
    -F 'audio=@command.webm' \
    -F 'timezone=Europe/Helsinki' \
    http://localhost:3000/api/v1/client
  ```
Response:
```json
{
  "answer": "...",
  "toolCalls": 2
}
```
MCP tools
The MCP server currently exposes:
- `listEvents` – list events in the primary CalDAV calendar
- `getEventsInTimeSlot` – check availability for a time slot (relative date inputs)
- `createEvent` – create an event (relative date inputs + title + optional description/location)
The MCP client instructs the model to use tools for all user requests and applies some workflow rules (e.g., check availability before creating events when the user asks “if the time is free”).
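For a quick smoke test without the LLM in the loop, you can connect to the MCP endpoint directly with the TypeScript MCP SDK. A sketch assuming the server runs locally and that `listEvents` accepts an empty argument object (the real tool schemas may require more):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Connect to the MCP server exposed by this app.
const client = new Client({ name: "labrat-smoke-test", version: "0.1.0" });
const transport = new StreamableHTTPClientTransport(
  new URL("http://localhost:3000/api/v1/mcp"),
);
await client.connect(transport);

// Discover the calendar tools and call one of them.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name)); // e.g. listEvents, getEventsInTimeSlot, createEvent

const result = await client.callTool({ name: "listEvents", arguments: {} });
console.log(JSON.stringify(result, null, 2));

await client.close();
```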
Troubleshooting
- Mic permission: The browser will prompt for microphone access the first time. If recording fails, check site permissions.
- CalDAV auth: If the calendar tools always return empty results, verify the `CALDAV_*` values and that your user has at least one calendar.
- Proxy URL: `OPENAI_PROXY_URL` must be the base URL; the app calls `/v1/chat/completions` and `/v1/audio/transcriptions` under it.
- Uploads cleanup: Uploaded audio is deleted best-effort after transcription; check filesystem permissions if `uploads/` grows unexpectedly.
API doc: POST /api/v1/client
Runs a user prompt through the MCP client. The server will call an OpenAI-compatible Chat Completions API, and the model can invoke MCP calendar tools (via the MCP server URL).
Request
Supported content types:
Option 1: `application/json` (text prompt)
- `prompt` (string, required) – the user command/question
- `timezone` (string, optional) – IANA timezone name (defaults to the server’s default timezone). See: List of tz database time zones
JSON request body example:
```json
{
  "prompt": "List my events",
  "timezone": "Europe/Helsinki"
}
```
Example:
```bash
curl -sS \
  -H 'Content-Type: application/json' \
  -d '{"prompt":"List my events","timezone":"Europe/Helsinki"}' \
  http://localhost:3000/api/v1/client
```
Option 2: `multipart/form-data` (audio)
- `audio` (file, required) – audio file to transcribe (the browser demo sends `audio/webm`)
- `timezone` (string, optional) – IANA timezone name
In this mode the server transcribes the audio first and uses the transcription text as the prompt.
Example:
```bash
curl -sS \
  -F 'audio=@command.webm' \
  -F 'timezone=Europe/Helsinki' \
  http://localhost:3000/api/v1/client
```
Response
On success (HTTP 200):
```json
{
  "answer": "...",
  "toolCalls": 2
}
```
- `answer` (string) – final assistant output
- `toolCalls` (number) – total number of tool calls made during the run
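For example, a hypothetical TypeScript caller (Node 18+ or the browser) that reads these fields could look like this:

```typescript
// Hypothetical caller for POST /api/v1/client (JSON mode).
const res = await fetch("http://localhost:3000/api/v1/client", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ prompt: "List my events", timezone: "Europe/Helsinki" }),
});

if (!res.ok) {
  // Errors come back as JSON with a "message" field (see below).
  const { message } = (await res.json()) as { message: string };
  throw new Error(`Request failed (${res.status}): ${message}`);
}

const { answer, toolCalls } = (await res.json()) as { answer: string; toolCalls: number };
console.log(`Assistant (${toolCalls} tool calls): ${answer}`);
```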
Errors
- `400` – invalid request body (e.g., missing `prompt` in JSON, invalid `timezone`)
- `500` – transcription failures, OpenAI/MCP errors, or unexpected server errors
Errors are returned as JSON:
```json
{
  "message": "..."
}
```