Apideck MCP Server
Model Context Protocol server for the Apideck Unified API. Connect any MCP-compatible agent framework to 200+ connectors — accounting systems, HRIS platforms, file storage providers, and more — through one integration. More information: https://www.apideck.com/mcp-server
Generated from Apideck's OpenAPI spec using Speakeasy.
Tools
330 tools across 10 unified APIs:
| API | Tools | Coverage |
|---|---|---|
| Accounting | 143 | Invoices, bills, payments, suppliers, customers, journal entries, ledger accounts, purchase orders, tax rates, P&L, balance sheet, and more |
| CRM | 50 | Companies, contacts, leads, opportunities, pipelines, notes, activities, users |
| File Storage | 32 | Files, folders, drives, shared links, upload sessions |
| HRIS | 25 | Employees, companies, departments, payrolls, time-off requests |
| Vault | 23 | Connections, consumers, sessions, custom mappings, logs |
| ATS | 15 | Applicants, applications, jobs |
| Issue Tracking | 15 | Collections, tickets, users, tags, comments |
| Connector | 8 | APIs, connectors, resources, coverage metadata |
| Ecommerce | 7 | Customers, orders, products, stores |
| Webhook | 6 | Webhook subscriptions, logs |
| Proxy | 6 | GET, POST, PUT, PATCH, DELETE, OPTIONS |
Hosted
The MCP server is live at:
https://mcp.apideck.dev/mcp
Pass Apideck credentials via headers:
| Header | Description |
|---|---|
| `x-apideck-api-key` | Your Apideck API key |
| `x-apideck-consumer-id` | The end-user/customer ID in your app |
| `x-apideck-app-id` | Your Apideck application ID |
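Since all three headers are required on every request, it can help to assemble them in one place. Here is a minimal sketch of a helper that builds the header dict from environment variables; the helper name and the environment-variable names are assumptions for illustration, not part of the Apideck API:

```python
import os

# Hypothetical helper: assemble the three Apideck auth headers from
# environment variables so they can be passed to any MCP HTTP client.
# The env var names (APIDECK_*) are an assumption, not from the Apideck docs.
def apideck_headers(env=os.environ):
    required = {
        "x-apideck-api-key": "APIDECK_API_KEY",
        "x-apideck-consumer-id": "APIDECK_CONSUMER_ID",
        "x-apideck-app-id": "APIDECK_APP_ID",
    }
    missing = [var for var in required.values() if not env.get(var)]
    if missing:
        raise ValueError(f"Missing environment variables: {missing}")
    return {header: env[var] for header, var in required.items()}

headers = apideck_headers({
    "APIDECK_API_KEY": "sk_test",
    "APIDECK_CONSUMER_ID": "user-123",
    "APIDECK_APP_ID": "app-456",
})
print(headers["x-apideck-consumer-id"])  # → user-123
```

Failing fast on missing credentials keeps the error at startup rather than surfacing as an opaque 401 from the hosted server.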
Connect from Any Agent Framework
Remote (hosted — no installation needed)
```python
# OpenAI Agents SDK (remote)
from agents import Agent
from agents.mcp import MCPServerHTTP

agent = Agent(
    name="AP Agent",
    mcp_servers=[MCPServerHTTP(
        url="https://mcp.apideck.dev/mcp",
        headers={
            "x-apideck-api-key": "...",
            "x-apideck-consumer-id": "...",
            "x-apideck-app-id": "..."
        }
    )]
)
```
```python
# Pydantic AI (remote)
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerHTTP

agent = Agent("anthropic:claude-sonnet-4-5", mcp_servers=[
    MCPServerHTTP(
        url="https://mcp.apideck.dev/mcp",
        headers={
            "x-apideck-api-key": "...",
            "x-apideck-consumer-id": "...",
            "x-apideck-app-id": "..."
        }
    )
])
```
```python
# LangChain / LangGraph (remote)
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient({
    "apideck": {
        "url": "https://mcp.apideck.dev/mcp",
        "transport": "streamable_http",
        "headers": {
            "x-apideck-api-key": "...",
            "x-apideck-consumer-id": "...",
            "x-apideck-app-id": "..."
        }
    }
})
tools = await client.get_tools()
```
Claude Desktop / Cursor / Windsurf
Add to your MCP client config:
```json
{
  "mcpServers": {
    "apideck": {
      "url": "https://mcp.apideck.dev/mcp",
      "headers": {
        "x-apideck-api-key": "YOUR_API_KEY",
        "x-apideck-consumer-id": "YOUR_CONSUMER_ID",
        "x-apideck-app-id": "YOUR_APP_ID"
      }
    }
  }
}
```
Local (stdio — for development)
```bash
npm install

# Dynamic mode (default — progressive discovery, 4 meta-tools, ~1,300 tokens)
node bin/mcp-server.js start --api-key "$APIDECK_API_KEY" --consumer-id "$APIDECK_CONSUMER_ID" --app-id "$APIDECK_APP_ID"

# Static mode (all 330 tools)
node bin/mcp-server.js start --api-key "$APIDECK_API_KEY" --consumer-id "$APIDECK_CONSUMER_ID" --app-id "$APIDECK_APP_ID" --mode static

# Read-only tools only
node bin/mcp-server.js start --api-key "$APIDECK_API_KEY" --consumer-id "$APIDECK_CONSUMER_ID" --app-id "$APIDECK_APP_ID" --scope read
```
```python
# OpenAI Agents SDK (local stdio)
from agents import Agent
from agents.mcp import MCPServerStdio

mcp = MCPServerStdio(name="apideck", params={
    "command": "node",
    "args": ["bin/mcp-server.js", "start", "--mode", "dynamic"],
    "env": {
        "APIDECK_API_KEY": "...",
        "APIDECK_CONSUMER_ID": "...",
        "APIDECK_APP_ID": "..."
    }
})
agent = Agent(name="AP Agent", mcp_servers=[mcp])
```
```python
# Pydantic AI (local stdio)
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

agent = Agent("anthropic:claude-sonnet-4-5", mcp_servers=[
    MCPServerStdio("node", [
        "bin/mcp-server.js", "start", "--mode", "dynamic",
        "--api-key", "...", "--consumer-id", "...", "--app-id", "..."
    ])
])
```
Static vs Dynamic Mode
| Mode | Tools exposed | Initial tokens | Best for |
|---|---|---|---|
| `dynamic` (default) | 4 meta-tools: `list_tools`, `describe_tool_input`, `execute_tool`, `list_scopes` | ~1,300 | General-purpose agents, token-sensitive contexts |
| `static` | All 330 tools | ~35-55K | Focused agents doing specific operations |
In dynamic mode, agents discover tools progressively:
```
list_tools({"search_terms": ["invoices"]})
→ accounting-invoices-list, accounting-invoices-create, accounting-invoices-get, ...

describe_tool_input({"tool_names": ["accounting-invoices-list"]})
→ Full JSON Schema with all parameters

execute_tool({"tool_name": "accounting-invoices-list", "input": {"request": {"limit": 10}}})
→ Invoice data from the connected accounting system
```
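The pattern behind these three calls can be sketched with a toy in-memory registry. This is an illustration of the progressive-discovery idea only, not the server's actual implementation; the registry contents below are invented, though the tool names mirror the Apideck naming scheme:

```python
# Toy sketch: real tools live in a registry hidden behind a few meta-tools,
# so the agent only pays context tokens for schemas it explicitly asks about.
REGISTRY = {
    "accounting-invoices-list": {
        "description": "List invoices",
        "schema": {"request": {"limit": "integer"}},
        "fn": lambda input: [{"id": "inv_1"}, {"id": "inv_2"}][: input["request"]["limit"]],
    },
    "accounting-invoices-get": {
        "description": "Get one invoice",
        "schema": {"id": "string"},
        "fn": lambda input: {"id": input["id"]},
    },
}

def list_tools(search_terms):
    # Name-only matches: cheap to return, cheap for the LLM to read.
    return [name for name in REGISTRY
            if any(term in name for term in search_terms)]

def describe_tool_input(tool_names):
    # Full schemas are fetched only for the tools the agent selected.
    return {name: REGISTRY[name]["schema"] for name in tool_names}

def execute_tool(tool_name, input):
    return REGISTRY[tool_name]["fn"](input)

print(list_tools(["invoices"]))
print(execute_tool("accounting-invoices-list", {"request": {"limit": 1}}))
```

The token savings come from the middle step: in static mode every schema is in context up front, whereas here only the schemas returned by `describe_tool_input` are.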
Configuring Included APIs
The generate-overlay.py script controls which Apideck APIs are included:
```bash
# Default: all unified APIs except SMS (330 tools)
python generate-overlay.py accounting,ats,connector,crm,ecommerce,fileStorage,hris,issueTracking,proxy,vault,webhook

# Accounting only (143 tools)
python generate-overlay.py accounting

# Custom selection
python generate-overlay.py accounting,hris,vault

# All APIs including SMS (~334 tools)
python generate-overlay.py all

# Then regenerate + apply fixes:
speakeasy run
./post-generate.sh
```
Regeneration
The MCP server is generated from Apideck's Speakeasy-optimized OpenAPI spec. To regenerate after spec changes:
```bash
# 1. Optionally reconfigure APIs
python generate-overlay.py accounting,fileStorage,hris,vault,proxy

# 2. Regenerate
speakeasy run

# 3. Apply post-generation fixes (Zod transforms fix + wrangler.toml)
./post-generate.sh

# 4. Run tests
node bin/mcp-server.js serve --port 4567 --mode dynamic --log-level error &
npx tsx test/mcp-server.test.ts
```
Scopes
Tools are annotated with scopes for fine-grained control:
| Scope | HTTP methods | Flag |
|---|---|---|
| `read` | GET, HEAD | `--scope read` |
| `write` | POST, PUT, PATCH | `--scope write` |
| `destructive` | DELETE | `--scope destructive` |
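As a rough sketch of this mapping (an illustration of the table above, not the server's actual implementation, which annotates tools at generation time), classifying an operation's scope from its HTTP method might look like:

```python
# Illustrative only: mirror the scope table by mapping HTTP methods to scopes.
SCOPE_BY_METHOD = {
    "GET": "read", "HEAD": "read",
    "POST": "write", "PUT": "write", "PATCH": "write",
    "DELETE": "destructive",
}

def scope_for(method):
    """Return the scope for an HTTP method, case-insensitively."""
    try:
        return SCOPE_BY_METHOD[method.upper()]
    except KeyError:
        raise ValueError(f"No scope defined for HTTP method: {method}")

print(scope_for("patch"))  # → write
```

Running with `--scope read` then simply means only tools whose underlying method maps to `read` are exposed.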
Testing
```bash
# Local
node bin/mcp-server.js serve --port 4567 --mode dynamic --log-level error &
npx tsx test/mcp-server.test.ts

# Remote
MCP_URL=https://mcp.apideck.dev/mcp npx tsx test/mcp-server.test.ts
```
License
MIT
<!-- Start Summary [summary] -->
Summary
Apideck: The Apideck OpenAPI Spec: SDK Optimized

For more information about the API: Apideck Developer Docs
<!-- End Summary [summary] -->
<!-- Start Table of Contents [toc] -->
Table of Contents
<!-- $toc-max-depth=2 -->
<!-- End Table of Contents [toc] -->
<!-- Start Installation [installation] -->
Installation
> [!TIP]
> To finish publishing your MCP Server to npm and others you must run your first generation action.

<details>
<summary>Claude Desktop</summary>
Install the MCP server as a Desktop Extension using the pre-built mcp-server.mcpb file:
Simply drag and drop the mcp-server.mcpb file onto Claude Desktop to install the extension.
The MCP bundle package includes the MCP server and all necessary configuration. Once installed, the server will be available without additional setup.
> [!NOTE]
> MCP bundles provide a streamlined way to package and distribute MCP servers. Learn more about Desktop Extensions.
</details>
<details>
<summary>Cursor</summary>

- Open Cursor Settings
- Select Tools and Integrations
- Select New MCP Server
- If the configuration file is empty, paste the following JSON into the MCP Server Configuration:
```json
{
  "mcpServers": {
    "ApideckMcp": {
      "command": "npx",
      "args": [
        "@apideck/mcp",
        "start",
        "--api-key",
        "YOUR_API_KEY",
        "--consumer-id",
        "YOUR_CONSUMER_ID",
        "--app-id",
        "YOUR_APP_ID"
      ]
    }
  }
}
```
</details>
<details>
<summary>Claude Code CLI</summary>

```bash
claude mcp add ApideckMcp -- npx -y @apideck/mcp start --api-key YOUR_API_KEY --consumer-id YOUR_CONSUMER_ID --app-id YOUR_APP_ID
```
</details>

<details>
<summary>Gemini</summary>

```bash
gemini mcp add ApideckMcp -- npx -y @apideck/mcp start --api-key YOUR_API_KEY --consumer-id YOUR_CONSUMER_ID --app-id YOUR_APP_ID
```
</details>

<details>
<summary>Windsurf</summary>

Refer to the official Windsurf documentation for the latest information.

- Open Windsurf Settings
- Select Cascade in the left side menu
- Click on Manage MCPs (to manage MCPs, you must be signed in with a Windsurf account)
- Click on View raw config to open the MCP configuration file
- If the configuration file is empty, paste the full JSON:
```json
{
  "mcpServers": {
    "ApideckMcp": {
      "command": "npx",
      "args": [
        "@apideck/mcp",
        "start",
        "--api-key",
        "YOUR_API_KEY",
        "--consumer-id",
        "YOUR_CONSUMER_ID",
        "--app-id",
        "YOUR_APP_ID"
      ]
    }
  }
}
```
</details>

<details>
<summary>VS Code</summary>

Refer to the official VS Code documentation for the latest information.

- Open the Command Palette
- Search for and open MCP: Open User Configuration; this should open the mcp.json file
- If the configuration file is empty, paste the full JSON:
```json
{
  "mcpServers": {
    "ApideckMcp": {
      "command": "npx",
      "args": [
        "@apideck/mcp",
        "start",
        "--api-key",
        "YOUR_API_KEY",
        "--consumer-id",
        "YOUR_CONSUMER_ID",
        "--app-id",
        "YOUR_APP_ID"
      ]
    }
  }
}
```
</details>

<details>
<summary>Stdio installation via npm</summary>

To start the MCP server, run:

```bash
npx @apideck/mcp start --api-key YOUR_API_KEY --consumer-id YOUR_CONSUMER_ID --app-id YOUR_APP_ID
```

For a full list of server arguments, run:

```bash
npx @apideck/mcp --help
```
</details> <!-- End Installation [installation] -->
<!-- Start Progressive Discovery [dynamic-mode] -->
Progressive Discovery
MCP servers with many tools can bloat LLM context windows, leading to increased token usage and tool confusion. Dynamic mode solves this by exposing only a small set of meta-tools that let agents progressively discover and invoke tools on demand.
To enable dynamic mode, pass the --mode dynamic flag when starting your server:
```json
{
  "mcpServers": {
    "ApideckMcp": {
      "command": "npx",
      "args": ["@apideck/mcp", "start", "--mode", "dynamic"]
      // ... other server arguments
    }
  }
}
```
In dynamic mode, the server registers only the following meta-tools instead of every individual tool:
- `list_tools`: Lists all available tools with their names and descriptions.
- `describe_tool_input`: Returns the input schema for one or more tools by name.
- `execute_tool`: Executes a tool by name with its arguments.
- `list_scopes`: Lists the scopes available on the server.
This approach significantly reduces the number of tokens sent to the LLM on each request, which is especially useful for servers with a large number of tools.
You can combine dynamic mode with scope and tool filters:
```json
{
  "mcpServers": {
    "ApideckMcp": {
      "command": "npx",
      "args": ["@apideck/mcp", "start", "--mode", "dynamic", "--scope", "destructive"]
      // ... other server arguments
    }
  }
}
```
<!-- End Progressive Discovery [dynamic-mode] -->
<!-- Placeholder for Future Speakeasy SDK Sections -->