mcp-labrat

Voice + MCP calendar demo.

This project is a small Express app that:

  • Hosts an MCP server (Streamable HTTP transport) that exposes calendar tools backed by a CalDAV server
  • Hosts an HTTP “MCP client” endpoint that sends a prompt to an OpenAI-compatible Chat Completions API and lets the model call those MCP tools
  • Serves a simple browser UI (in public/) that records audio, transcribes it (Whisper API), and sends the resulting prompt to the MCP client endpoint
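
A rough sketch of how these pieces could be mounted in a single Express app (route paths match the API section below; the handlers here are stubs, not the project’s actual code):

// Sketch only – illustrates the layout, not the real implementation.
import express, { type Request, type Response } from "express";

const app = express();
app.use(express.json());

// MCP server endpoint (Streamable HTTP transport) backed by the CalDAV tools
app.post("/api/v1/mcp", (_req: Request, res: Response) => {
  res.status(501).json({ message: "MCP transport is wired here" });
});

// MCP client endpoint: prompt (text or audio) in, model + tool calls, answer out
app.post("/api/v1/client", (_req: Request, res: Response) => {
  res.json({ answer: "stub", toolCalls: 0 });
});

// Browser demo UI (recorder, transcription, send)
app.use(express.static("public"));

const port = Number(process.env.PORT ?? 3000);
app.listen(port, () => console.log(`Listening on http://localhost:${port}`));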

Requirements

  • Node.js 18+ (Node 20+ recommended)
  • A CalDAV server and credentials (you can accept the defaults, but the calendar tools won’t return anything useful without a reachable server)
  • An OpenAI-compatible API endpoint for:
    • Chat Completions (/v1/chat/completions)
    • Audio Transcriptions (/v1/audio/transcriptions)

In this lab, OPENAI_PROXY_URL typically points to a proxy service (not OpenAI directly). The proxy is responsible for adding any required authentication (e.g. API keys) to outgoing requests.

Because auth is handled by the proxy, the server uses direct HTTP requests (via fetchData) instead of the official OpenAI SDK.
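
A minimal sketch of what such a request looks like with plain fetch (the project’s fetchData helper is not shown here; this is only the general shape of an OpenAI-compatible Chat Completions call through the proxy):

// No Authorization header: the proxy is expected to attach credentials itself.
const baseUrl = process.env.OPENAI_PROXY_URL ?? "http://localhost:1234";

const res = await fetch(`${baseUrl}/v1/chat/completions`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: process.env.OPENAI_MODEL ?? "gpt-4o",
    messages: [{ role: "user", content: "List my events" }],
    // tools: [...] – tool definitions would be attached here when tool calling is used
  }),
});

const completion = await res.json();
console.log(completion.choices[0].message.content);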

Quick start

  1. Install

    npm install
    
  2. Create .env

    cp .env-sample .env
    

    Then edit .env and set OPENAI_PROXY_URL to your proxy/base URL (e.g. http://localhost:1234 or your deployed proxy).

  3. Run the dev server

    npm run dev
    
  4. Open the demo UI

    • Visit http://localhost:3000/
    • Click “Start Recording”, speak a command, then click “Stop Recording”
    • Click “Send”

Environment variables

Create a .env file in the project root.

Server

  • PORT (optional, default: 3000)
  • NODE_ENV (optional, e.g. development)

MCP client + OpenAI-compatible API

  • MCP_SERVER_URL (required)
    • Example: http://localhost:3000/api/v1/mcp
    • This can point back to the same server instance.
  • OPENAI_PROXY_URL (required)
    • Base URL of your OpenAI-compatible proxy (the app calls /v1/chat/completions and /v1/audio/transcriptions under it)
    • Example: http://localhost:1234
  • OPENAI_MODEL (optional, default: gpt-4o)

Audio transcription (Whisper)

  • OPENAI_TRANSCRIPTION_MODEL (optional, default: whisper-1)

CalDAV (calendar backing store)

  • CALDAV_SERVER_URL (optional, default: http://localhost:5232/)
  • CALDAV_USERNAME (optional, default: username)
  • CALDAV_PASSWORD (optional, default: password)
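
A complete example .env for local development (values are illustrative; the CALDAV_* entries match the no-auth Radicale setup described below):

PORT=3000
NODE_ENV=development

MCP_SERVER_URL=http://localhost:3000/api/v1/mcp
OPENAI_PROXY_URL=http://localhost:1234
OPENAI_MODEL=gpt-4o
OPENAI_TRANSCRIPTION_MODEL=whisper-1

CALDAV_SERVER_URL=http://localhost:5232/
CALDAV_USERNAME=username
CALDAV_PASSWORD=password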

Radicale (dev CalDAV server)

For local development you can run a minimal Radicale CalDAV server with no authentication.

macOS / Linux

  1. Install Radicale

python3 -m pip install --user radicale

  2. Create a local config + data folder

mkdir -p radicale-data/collections
cat > radicale.config <<'EOF'
[server]
hosts = 127.0.0.1:5232

[auth]
type = none

[storage]
filesystem_folder = ./radicale-data/collections
EOF

  3. Run Radicale

radicale --config ./radicale.config
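
To confirm Radicale is listening before pointing the app at it, a plain request should get an HTTP response back (Radicale also serves a small built-in web UI):

curl -i http://127.0.0.1:5232/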

Windows (PowerShell)

  1. Install Radicale

py -m pip install --user radicale

  2. Create a local config + data folder

New-Item -ItemType Directory -Force -Path .\radicale-data\collections | Out-Null
@'
[server]
hosts = 127.0.0.1:5232

[auth]
type = none

[storage]
filesystem_folder = ./radicale-data/collections
'@ | Set-Content -Encoding UTF8 .\radicale.config

  3. Run Radicale

py -m radicale --config .\radicale.config

Point this app to Radicale

In .env:

  • CALDAV_SERVER_URL=http://localhost:5232/
  • CALDAV_USERNAME=anything (ignored when auth=none)
  • CALDAV_PASSWORD=anything (ignored when auth=none)

Scripts

  • npm run dev – run server with nodemon + ts-node
  • npm run build – compile TypeScript to dist/
  • npm start – run the compiled server (dist/index.js)

API

Base path is /api/v1.

Health

GET /api/v1/

Returns a simple JSON message.
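
Example:

curl -sS http://localhost:3000/api/v1/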

MCP server

POST /api/v1/mcp

Implements the MCP Streamable HTTP transport.
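
Any MCP client that supports the Streamable HTTP transport can connect to this endpoint. A minimal sketch using the MCP TypeScript SDK (the client name/version here are placeholders):

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Connect to this app’s MCP endpoint over Streamable HTTP
const transport = new StreamableHTTPClientTransport(
  new URL("http://localhost:3000/api/v1/mcp"),
);
const client = new Client({ name: "labrat-demo-client", version: "1.0.0" });
await client.connect(transport);

// Should list the calendar tools: listEvents, getEventsInTimeSlot, createEvent
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));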

MCP client

POST /api/v1/client

Accepts either:

  1. JSON (text prompt)

curl -sS \
  -H 'Content-Type: application/json' \
  -d '{"prompt":"List my events","timezone":"Europe/Helsinki"}' \
  http://localhost:3000/api/v1/client

  2. multipart/form-data (audio upload)

The field name must be audio.

curl -sS \
  -F 'audio=@command.webm' \
  -F 'timezone=Europe/Helsinki' \
  http://localhost:3000/api/v1/client

Response:

{
  "answer": "...",
  "toolCalls": 2
}

MCP tools

The MCP server currently exposes:

  • listEvents – list events in the primary CalDAV calendar
  • getEventsInTimeSlot – check availability for a time slot (relative date inputs)
  • createEvent – create an event (relative date inputs + title + optional description/location)

The MCP client instructs the model to use tools for all user requests and applies some workflow rules (e.g., check availability before creating events when the user asks “if the time is free”).
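
For example, a conditional booking request exercises both the availability check and event creation:

curl -sS \
  -H 'Content-Type: application/json' \
  -d '{"prompt":"Create a one hour meeting tomorrow at 10:00 if the time is free","timezone":"Europe/Helsinki"}' \
  http://localhost:3000/api/v1/client

The model is expected to call getEventsInTimeSlot first and only call createEvent if the slot is empty.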

Troubleshooting

  • Mic permission: The browser will prompt for microphone access the first time. If recording fails, check site permissions.
  • CalDAV auth: If the calendar tools always return empty results, verify CALDAV_* values and that your user has at least one calendar.
  • Proxy URL: OPENAI_PROXY_URL must be the base URL; the app calls /v1/chat/completions and /v1/audio/transcriptions under it.
  • Uploads cleanup: Uploaded audio is deleted best-effort after transcription; check filesystem permissions if uploads/ grows unexpectedly.

API doc: POST /api/v1/client

Runs a user prompt through the MCP client. The server will call an OpenAI-compatible Chat Completions API, and the model can invoke MCP calendar tools (via the MCP server URL).

Request

Supported content types:

Option 1: application/json (text prompt)

  • prompt (string, required) – the user command/question
  • timezone (string, optional) – IANA timezone name (defaults to the server’s default timezone); see the IANA tz database list of time zones

JSON request body example:

{
  "prompt": "List my events",
  "timezone": "Europe/Helsinki"
}

Example:

curl -sS \
  -H 'Content-Type: application/json' \
  -d '{"prompt":"List my events","timezone":"Europe/Helsinki"}' \
  http://localhost:3000/api/v1/client

Option 2: multipart/form-data (audio)

  • audio (file, required) – audio file to transcribe (browser demo sends audio/webm)
  • timezone (string, optional) – IANA timezone name

In this mode the server transcribes the audio first and uses the transcription text as the prompt.

Example:

curl -sS \
  -F 'audio=@command.webm' \
  -F 'timezone=Europe/Helsinki' \
  http://localhost:3000/api/v1/client

Response

On success (HTTP 200):

{
  "answer": "...",
  "toolCalls": 2
}
  • answer (string) – final assistant output
  • toolCalls (number) – total number of tool calls made during the run
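
For programmatic use, the same call from TypeScript might look like this (a sketch using the global fetch available in Node 18+; the error body shape is documented under Errors below):

const res = await fetch("http://localhost:3000/api/v1/client", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    prompt: "List my events",
    timezone: "Europe/Helsinki",
  }),
});

if (!res.ok) {
  const { message } = await res.json();
  throw new Error(`Request failed (${res.status}): ${message}`);
}

const { answer, toolCalls } = await res.json();
console.log(answer, toolCalls);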

Errors

  • 400 – invalid request body (e.g., missing prompt in JSON, invalid timezone)
  • 500 – transcription failures, OpenAI/MCP errors, or unexpected server errors

Errors are returned as JSON:

{
  "message": "..."
}
