# TekAutomate MCP Server
An AI orchestration layer that enables automated control and validation of test and measurement instruments using SCPI commands and tm_devices. It provides tools for project knowledge retrieval, command materialization, and deterministic shortcuts to streamline instrumentation workflows.
This server is the AI orchestration layer for TekAutomate. It accepts full workspace context from the app, runs tool-assisted reasoning (or deterministic shortcuts), validates output, and returns applyable `ACTIONS_JSON`.
## What it does

- Hosts the AI chat endpoint used by TekAutomate (`/ai/chat`).
- Proxies the OpenAI Responses API with a server-owned key and vector store (`/ai/responses-proxy`).
- Loads and indexes local project knowledge:
  - `public/commands/*.json` (SCPI command truth source)
  - `public/rag/*.json` (retrieval chunks)
  - `public/templates/*.json` (workflow examples)
  - `mcp-server/policies/*.md` (behavior and output constraints)
- Exposes a tool catalog for retrieval, validation, and optional live instrument probing.
- Applies post-check and repair logic before returning the final text.
- Stores request/debug artifacts for diagnostics.
## High-level flow

- TekAutomate sends `POST /ai/chat` with:
  - `userMessage`
  - provider/model/key
  - full `flowContext` (steps, backend, model family, selected step, validation state)
  - full `runContext` (logs/audit/exit code)
  - optional `instrumentEndpoint` (code executor + VISA resource)
- The MCP server runs `runToolLoop(...)`:
  - deterministic shortcut path when eligible
  - or provider path (OpenAI hosted Responses/tool loop, OpenAI chat-completions fallback, Anthropic)
- Post-check validates and normalizes the response:
  - `ACTIONS_JSON` structure
  - step schema and IDs
  - `saveAs` presence/deduplication
  - SCPI verification pipeline
  - prose truncation guard
- The server returns a JSON payload: `text`, `displayText`, `openaiThreadId`, `errors`, `warnings`, `metrics`.
## Endpoints

- `GET /health` - returns `{ ok: true, status: "ready" }` when indexes are loaded.
- `GET /ai/debug/last` - returns the last debug bundle (prompts, timings, tool trace metadata).
- `POST /ai/chat` - main orchestration endpoint for the TekAutomate assistant.
- `POST /ai/responses-proxy` - streaming Responses proxy using `OPENAI_SERVER_API_KEY` and optional `COMMAND_VECTOR_STORE_ID`.
- `POST /ai/key-test` - validates provider/key/model reachability (`openai` or `anthropic`).
- `POST /ai/models` - lists available models for the given provider/key.
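As a rough sketch of how a client might call the orchestration endpoint, the snippet below builds a minimal request body. The field names come from the flow description above; the optional shapes (and `instrumentEndpoint` in particular) are illustrative assumptions, not the server's actual types.

```typescript
// Hypothetical sketch of a minimal /ai/chat request body. Field names follow
// the high-level flow description; the optional shapes are assumptions.
interface ChatRequest {
  userMessage: string;
  provider: "openai" | "anthropic";
  model: string;
  flowContext?: unknown;       // steps, backend, model family, selected step, validation state
  runContext?: unknown;        // logs/audit/exit code
  instrumentEndpoint?: { executorUrl: string; visaResource: string }; // assumed shape
}

function buildChatRequest(userMessage: string, model: string): ChatRequest {
  return { userMessage, provider: "openai", model };
}

// A client would POST this JSON to <mcp-host>/ai/chat.
console.log(JSON.stringify(buildChatRequest("Measure CH1 frequency", "gpt-4o")));
```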
## Tooling surface

Server tools are grouped into retrieval, materialization, validation, and live-instrument calls.

- Retrieval tools: `search_scpi`, `get_command_group`, `get_command_by_header`, `get_commands_by_header_batch`, `search_tm_devices`, `retrieve_rag_chunks`, `search_known_failures`, `get_template_examples`, `get_policy`, `list_valid_step_types`, `get_block_schema`
- Materialization tools: `materialize_scpi_command`, `materialize_scpi_commands`, `finalize_scpi_commands`, `materialize_tm_devices_call`
- Validation tools: `validate_action_payload`, `validate_device_context`, `verify_scpi_commands`
- Live instrument tools (via code executor): `get_instrument_state`, `probe_command`, `get_visa_resources`, `get_environment`
## Deterministic shortcut features
The server includes shortcut builders for common requests to produce fast, consistent actions without full model/tool loops when conditions match.
- Measurement shortcut (including scoped channel handling and standard measurement sets).
- FastFrame shortcut for pyvisa flows.
- Common pyvisa server shortcut for frequent setup/build patterns.
- `tm_devices` measurement shortcut.
- Planner-driven deterministic shortcut from parsed intent + command index.
These shortcuts still pass through post-check before a response is returned.
## Safety and output enforcement

- Strict action schema validation (`validate_action_payload`).
- Replace-flow hardening:
  - ensures step IDs are present and unique
  - can auto-group long flat flows into logical groups
  - enforces/repairs query `saveAs`
  - deduplicates save names
- SCPI verification and source-backed command handling.
- Python substitution guard in non-Python flows.
- Response prose truncation guard (`MCP_POSTCHECK_MAX_PROSE_CHARS`, default 1200).
- Prompt/policy driven constraints loaded from `mcp-server/prompts/*.md` and `mcp-server/policies/*.md`.
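The save-name deduplication step can be sketched as follows. This is a minimal illustration assuming a simple numeric-suffix policy; the server's actual repair logic is not documented here.

```typescript
// Sketch of saveAs deduplication: give repeated save names a numeric suffix
// so every query step saves to a unique name. The "_2", "_3" suffix policy
// is an assumption for illustration.
function dedupeSaveNames(names: string[]): string[] {
  const seen = new Map<string, number>();
  return names.map((name) => {
    const count = seen.get(name) ?? 0;
    seen.set(name, count + 1);
    return count === 0 ? name : `${name}_${count + 1}`;
  });
}

console.log(dedupeSaveNames(["idn", "freq", "idn"])); // → ["idn", "freq", "idn_2"]
```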
## Data and indexes

At startup, the server initializes:

- Command index (`public/commands/*.json`)
- tm_devices index
- RAG indexes (`public/rag/*.json`)
- Template index (`public/templates/*.json`)
Command sources include modern and legacy scope families plus AFG, AWG, SMU, DPOJET, TekExpress, and RSA datasets.
## Frontend integration

The current app integration resolves the MCP host from, in order:

- `localStorage["tekautomate.mcp.host"]`
- or `REACT_APP_MCP_HOST`
- fallback: `http://localhost:8787`, only when the app itself is served from localhost

Example:

```js
localStorage.setItem('tekautomate.mcp.host', 'http://localhost:8787');
```
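The resolution order above can be sketched as a pure function. Inputs are passed in explicitly so the precedence is easy to see; the real app reads them from `localStorage`, the build environment, and the page's own host.

```typescript
// Sketch of the MCP host resolution order: stored value, then env var, then
// the localhost fallback (only when the app itself is served from localhost).
function resolveMcpHost(
  stored: string | null,         // localStorage["tekautomate.mcp.host"]
  envHost: string | undefined,   // REACT_APP_MCP_HOST
  appIsOnLocalhost: boolean,
): string | null {
  if (stored) return stored;
  if (envHost) return envHost;
  return appIsOnLocalhost ? "http://localhost:8787" : null;
}

console.log(resolveMcpHost(null, undefined, true)); // → "http://localhost:8787"
```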
Run locally
cd mcp-server
npm install
npm run start
Default port is 8787 unless MCP_PORT is set.
## Environment variables

Copy `.env.example` to `.env` and set what you need.

- Required for `/ai/responses-proxy`: `OPENAI_SERVER_API_KEY`
- Optional retrieval augmentation: `COMMAND_VECTOR_STORE_ID`
- OpenAI routing/model controls: `OPENAI_BASE_URL`, `OPENAI_DEFAULT_MODEL`, `OPENAI_FLOW_MODEL`, `OPENAI_REASONING_MODEL`, `OPENAI_ASSISTANT_MODEL`, `OPENAI_MAX_OUTPUT_TOKENS`
- Hosted prompt controls: `OPENAI_PROMPT_ID`, `OPENAI_PROMPT_VERSION` (legacy fallback accepted: `OPENAI_ASSISTANT_ID`)
- Prompt file overrides: `TEKAUTOMATE_STEPS_INSTRUCTIONS_FILE`, `TEKAUTOMATE_BLOCKLY_INSTRUCTIONS_FILE`
- Post-check tuning: `MCP_POSTCHECK_MAX_PROSE_CHARS`
- Server port: `MCP_PORT`
## Scripts and verification

- `npm run start` / `npm run dev`
- `npm run eval:comprehensive`
- `npm run eval:levels`
- `npm run verify:command-groups`

Reference benchmark: `mcp-server/reports/level-benchmark-2026-03-18.md` shows 40/40 PASS in that run.
## Logs and debug artifacts

- Last debug state: `GET /ai/debug/last`
- Request logs are written under `mcp-server/src/logs/requests` (rotated, max 500 files).
- Additional logs and reports are under `mcp-server/logs` and `mcp-server/reports`.
## Internals: Planner, Materializers, and AI Routing
### Intent planner (`src/core/intentPlanner.ts`)
The planner is a deterministic parser + resolver layer used before (and sometimes instead of) LLM output.
Main responsibilities:
- Parse user text into structured intent fields (channels, trigger, measurements, bus decode, acquisition, save/recall, status, AFG/AWG/SMU/RSA).
- Detect device type and map request to relevant command families.
- Resolve concrete SCPI candidates against the command index.
- Return unresolved intents when command mapping is ambiguous.
- Run conflict checks (resource collisions / inconsistent intent combinations).
Core exported functions:
- `parseIntent(...)`: builds a `PlannerIntent` from natural language.
- `planIntent(...)`: parse + resolve + conflict check, returning a `PlannerOutput`.
- `resolve*Commands(...)`: domain resolvers such as `resolveTriggerCommands`, `resolveMeasurementCommands`, `resolveBusCommands`, `resolveSaveCommands`, etc.
- `parse*Intent(...)`: focused parsers such as `parseChannelIntent`, `parseTriggerIntent`, `parseMeasurementIntent`, `parseBusIntent`, etc.
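To give a feel for what a focused parser like `parseChannelIntent` does, here is a deliberately tiny sketch that only extracts channel references; the real parser covers much more (trigger settings, measurements, bus decode, and so on).

```typescript
// Toy sketch of channel-intent parsing: find CH1..CH8 references in free
// text, case-insensitively, deduplicated in order of first mention.
function parseChannels(text: string): string[] {
  const matches = text.toUpperCase().match(/\bCH[1-8]\b/g) ?? [];
  return [...new Set(matches)];
}

console.log(parseChannels("enable ch1 and CH2, then measure CH1 frequency")); // → ["CH1", "CH2"]
```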
### SCPI source of truth (`src/core/commandIndex.ts`)

- Loads command JSON files from `public/commands` once at startup.
- Normalizes heterogeneous command shapes (manual-entry rich format and flat format).
- Builds fast lookup structures for:
  - exact header lookup (`getByHeader`)
  - prefix lookup (`getByHeaderPrefix`)
  - ranked query search (`searchByQuery`)
- Supports placeholder-aware header normalization (`CH<x>`, `MEAS<x>`, `BUS<x>`, `{A|B}`, etc.).
Current local index size (measured): ~9307 normalized command records.
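Placeholder-aware normalization can be illustrated with a small sketch: it maps a concrete header onto the canonical `<x>` form so lookups hit the same record for `MEAS1` and `MEAS2`. This is a simplified stand-in; the real normalizer also handles forms like `{A|B}`.

```typescript
// Sketch of placeholder-aware header normalization: replace numbered
// mnemonics (CH2, MEAS1, BUS3, ...) with their canonical <x> placeholder.
function normalizeHeader(header: string): string {
  return header.toUpperCase().replace(/\b(CH|MEAS|BUS)(\d+)\b/g, "$1<x>");
}

console.log(normalizeHeader(":measurement:meas2:type")); // → ":MEASUREMENT:MEAS<x>:TYPE"
```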
### SCPI retrieval functions

- `search_scpi` (`src/tools/searchScpi.ts`): query search merged with header-like direct matching.
- `get_command_by_header`: exact deterministic match for known headers.
- `get_commands_by_header_batch`: batch exact lookup for multiple headers.
- `get_command_group`: feature-area retrieval (group level).
- `verify_scpi_commands` (`src/tools/verifyScpiCommands.ts`): validates commands (including exact-syntax mode).
### Materializers

Materializers convert canonical records into concrete, applyable commands/calls.

- `materialize_scpi_command`:
  - selects set/query syntax
  - infers placeholder bindings from `concreteHeader`
  - applies explicit bindings + argument values
  - checks for unresolved placeholders
  - runs exact verification before returning success
- `materialize_scpi_commands`: batch wrapper around the single materializer.
- `finalize_scpi_commands`: batch materialize + verified output packaging, used as the endgame tool in hosted flows.
- `materialize_tm_devices_call`: builds an exact Python call from a verified `methodPath` and arguments.
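The binding step inside `materialize_scpi_command` can be sketched like this: a simplified stand-in that substitutes bindings into a canonical header and reports anything left unresolved, so the caller can fail instead of emitting a broken command.

```typescript
// Sketch of placeholder binding during materialization: substitute bindings
// such as { x: 2 } into a canonical header and collect unresolved names.
function bindPlaceholders(
  template: string,
  bindings: Record<string, string | number>,
): { command: string; unresolved: string[] } {
  const unresolved: string[] = [];
  const command = template.replace(/<(\w+)>/g, (whole, name: string) => {
    if (name in bindings) return String(bindings[name]);
    unresolved.push(name);
    return whole; // leave unbound placeholders visible in the output
  });
  return { command, unresolved };
}

console.log(bindPlaceholders(":MEASU:MEAS<x>:TYPE", { x: 2 }));
// → { command: ":MEASU:MEAS2:TYPE", unresolved: [] }
```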
### Tool loop and when the server goes to AI for more information

Routing is centralized in `src/core/toolLoop.ts`.
Deterministic path first (no external model):

- In `mcp_only` mode, the server tries deterministic shortcuts and planner synthesis first.
- If the planner fully resolves commands, it can return applyable `ACTIONS_JSON` directly.
- If intent remains unresolved in `mcp_only`, the server returns findings/suggested fixes instead of calling external AI.
AI path (`mcp_ai` and normal hosted usage):

- If the deterministic path is not enough, the server calls the provider path:
  - OpenAI hosted Responses (preferred for structured build/edit)
  - OpenAI chat-completions fallback
  - Anthropic messages path
- For hosted structured builds, the server preloads source-of-truth context via tools (`search_scpi`, `get_command_group`, `get_commands_by_header_batch`, or `search_tm_devices`) before/within the loop.
- Tool rounds are capped (4 for hosted structured builds, 3 by default, 8 in forced tool mode).
Reliability fallbacks after the AI response:

- A post-check pass validates and normalizes output.
- If the model returns non-actionable output, the server attempts hybrid planner gap-fill.
- If `ACTIONS_JSON` is malformed, the server retries once with a strict JSON-only instruction.
- If model output is weak in specific cases, the server can fall back to deterministic shortcut output.
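The malformed-`ACTIONS_JSON` retry can be sketched as a tiny loop. This version is simplified and synchronous for clarity; the real provider call is asynchronous and the strict instruction text is server-defined.

```typescript
// Sketch of the "retry once with a strict JSON-only instruction" fallback:
// try a normal call first, and on a parse failure make exactly one stricter
// retry before giving up.
function getActionsJson(ask: (strictJsonOnly: boolean) => string): unknown | null {
  for (const strict of [false, true]) {
    try {
      return JSON.parse(ask(strict));
    } catch {
      // malformed output; loop continues into the single strict retry
    }
  }
  return null;
}

// Simulated model: returns prose first, valid JSON only when asked strictly.
const result = getActionsJson((strict) => (strict ? '{"actions": []}' : "Sure! Here is..."));
console.log(result); // → { actions: [] }
```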
## Performance snapshot

From the checked-in benchmark report:

- `mcp-server/reports/level-benchmark-2026-03-18.md`: 40/40 PASS.
- In that run, per-case `totalMs` ranged from about 1 ms to 254 ms.

Local micro-benchmark (quick developer run on this workspace; indicative, not a production SLA):

- `searchByQuery` average: ~0.54 ms per lookup.
- `getByHeader` average: ~0.009 ms per lookup (hot path).
- `materializeScpiCommand` average (single-command path with verification): ~25.4 ms.
- `finalizeScpiCommands` average for a 3-command batch: ~1.8 ms.

Use these as practical engineering baselines; real end-to-end latency depends more on provider/model calls than on local index lookups.
## When to use MCP-only vs MCP+AI

Use `mcp_only` when:
- You want deterministic/local command resolution.
- You prefer speed and strictness over open-ended reasoning.
- The request is explicit enough for planner/materializers.
Use `mcp_ai` when:
- The request is complex, ambiguous, or cross-domain.
- You need richer reasoning, explanation, or conflict tradeoffs.
- The deterministic planner reports unresolved intent and you want model help.