Opus MCP Server
Model Context Protocol (MCP) server for the Opus Job Operator API. This server provides tools to programmatically interact with Opus workflows, including job initiation, execution, monitoring, and file uploads.
Quick Start: See QUICKSTART.md for a 3-minute setup!
Installation: See INSTALL.md for detailed instructions
Distribution: See DISTRIBUTION.md for sharing options
Features
- Workflow Management: Get workflow details and schemas
- Job Operations: Initiate, execute, and monitor jobs
- File Handling: Generate presigned URLs for secure file uploads
- Monitoring: Check job status, retrieve results, and view audit logs
Installation
- Install dependencies:
npm install
- Build the project:
npm run build
- Set up environment variables:
cp .env.example .env
# Edit .env and add your OPUS_SERVICE_KEY
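The .env file only needs the service key. Assuming .env.example follows the same format, the result looks like:
OPUS_SERVICE_KEY=your_service_key_here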
Getting Your API Key
- Navigate to the Opus platform
- Click My Organization at the top
- Click the gear icon next to your Organization's name
- Select API Keys from the settings menu
- Click + Generate API Key
- Copy the key (shown only once) and add it to your .env file
Configuration
For Claude Desktop
Add to your Claude Desktop config file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"opus": {
"command": "node",
"args": ["/absolute/path/to/function1 hackathon/build/index.js"],
"env": {
"OPUS_SERVICE_KEY": "your_service_key_here"
}
}
}
}
For Other MCP Clients
Run the built server with the environment variable set:
OPUS_SERVICE_KEY=your_key node build/index.js
Available Tools
1. get_workflow_details
Get workflow details including the jobPayloadSchema that defines required inputs.
Parameters:
- workflowId (string, required): The workflow ID from Opus
Returns: Full workflow details including schema, execution estimation, and input requirements
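For example, a call using the workflow ID from the example later in this README would pass:
{
  "workflowId": "B9uGJfZ3CFwOdMKH"
}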
2. initiate_job
Create a new job instance for a workflow.
Parameters:
- workflowId (string, required): The workflow ID
- title (string, required): Job title
- description (string, required): Job description
Returns: jobExecutionId needed for subsequent operations
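For illustration, the arguments might look like this (the title and description are placeholder values):
{
  "workflowId": "B9uGJfZ3CFwOdMKH",
  "title": "API Test Project",
  "description": "Test run initiated through the MCP server"
}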
3. generate_file_upload_url
Generate presigned URLs for secure file uploads.
Parameters:
- fileExtension (string, required): File extension with dot (e.g., .pdf, .jpeg, .png, .docx)
- accessScope (string, optional): Access scope - all, user, workspace, or organization (default)
Returns:
- presignedUrl: Use for uploading the file via a PUT request
- fileUrl: Use this URL in the job execution payload
Supported File Types: .jpeg, .png, .jpg, .pdf, .docx, .csv, .xls, .xlsx, .txt, .json, .html, .xml
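As a rough sketch of the upload step (Node 18+; the file path is a placeholder, and depending on how the URL was signed a Content-Type header may also be required):
import { readFile } from "node:fs/promises";

// Upload a local file to the presignedUrl returned by generate_file_upload_url.
async function uploadToPresignedUrl(presignedUrl: string, localPath: string): Promise<void> {
  const body = await readFile(localPath);  // raw file bytes
  const res = await fetch(presignedUrl, {
    method: "PUT",                         // presigned URLs are consumed with PUT
    body,                                  // no authentication headers needed
  });
  if (!res.ok) {
    throw new Error(`Upload failed with status ${res.status}`);
  }
}

// Example (hypothetical path): await uploadToPresignedUrl(presignedUrl, "./report.pdf");
The fileUrl from the same response (not the presignedUrl) is what goes into the job execution payload as a "file" input.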
4. execute_job
Execute a job with populated input values.
Parameters:
- jobExecutionId (string, required): From the initiate_job response
- jobPayloadSchemaInstance (object, required): Job inputs structured according to the workflow schema
Example jobPayloadSchemaInstance:
{
"workflow_input_we4tej0ly": {
"value": "API Test Project",
"type": "str"
},
"workflow_input_a0hk6ujuo": {
"value": 45.8,
"type": "float"
},
"workflow_input_h69vx5i4a": {
"value": "https://files.opus.com/media/private/uploaded/media_file.pdf",
"type": "file"
}
}
5. get_job_status
Check current job execution status.
Parameters:
- jobExecutionId (string, required): The job ID
Returns: Status - IN PROGRESS, COMPLETED, or FAILED
6. get_job_results
Retrieve results from a completed job.
Parameters:
- jobExecutionId (string, required): The job ID
Returns: Job results including output files and data (only works when status is COMPLETED)
7. get_job_audit_log
Get detailed audit log of job execution.
Parameters:
- jobExecutionId (string, required): The job ID
Returns: Timestamped log of all system actions
Workflow Example
1. Get workflow details to understand required inputs
→ get_workflow_details(workflowId: "B9uGJfZ3CFwOdMKH")
2. (If needed) Generate file upload URLs
→ generate_file_upload_url(fileExtension: ".pdf")
→ Upload file to presignedUrl using PUT request
3. Initiate a job
→ initiate_job(workflowId, title, description)
→ Save jobExecutionId
4. Execute the job with inputs
→ execute_job(jobExecutionId, jobPayloadSchemaInstance)
5. Monitor status
→ get_job_status(jobExecutionId)
6. Get results when complete
→ get_job_results(jobExecutionId)
7. (Optional) View audit log
→ get_job_audit_log(jobExecutionId)
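The same sequence can also be driven from a script. Below is a minimal sketch assuming the TypeScript MCP SDK (@modelcontextprotocol/sdk); the workflow ID and input key come from the examples above, while the title, description, and result parsing are placeholders to adapt to your workflow and to the actual shape of the tool results.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the Opus MCP server as a child process over stdio
const transport = new StdioClientTransport({
  command: "node",
  args: ["build/index.js"],
  env: { OPUS_SERVICE_KEY: process.env.OPUS_SERVICE_KEY ?? "" },
});
const client = new Client({ name: "opus-example", version: "0.1.0" }, { capabilities: {} });
await client.connect(transport);

// 1. Read the workflow schema to learn which inputs are required
await client.callTool({
  name: "get_workflow_details",
  arguments: { workflowId: "B9uGJfZ3CFwOdMKH" },
});

// 2. Initiate a job; the response contains the jobExecutionId needed below
const initiated = await client.callTool({
  name: "initiate_job",
  arguments: {
    workflowId: "B9uGJfZ3CFwOdMKH",
    title: "API Test Project",
    description: "Example run driven from a script",
  },
});
// Extract the ID from the tool result (its exact shape is up to the server)
const jobExecutionId = "<jobExecutionId parsed from initiated>";

// 3. Execute the job with inputs shaped like the workflow's jobPayloadSchema
await client.callTool({
  name: "execute_job",
  arguments: {
    jobExecutionId,
    jobPayloadSchemaInstance: {
      workflow_input_we4tej0ly: { value: "API Test Project", type: "str" },
    },
  },
});

// 4. Poll until the job finishes (simplified; add a timeout in real use)
let done = false;
while (!done) {
  await new Promise((resolve) => setTimeout(resolve, 5000));
  const status = await client.callTool({
    name: "get_job_status",
    arguments: { jobExecutionId },
  });
  const text = JSON.stringify(status); // crude check for the sketch
  done = text.includes("COMPLETED") || text.includes("FAILED");
}

// 5. Retrieve the results once the job has completed
const results = await client.callTool({
  name: "get_job_results",
  arguments: { jobExecutionId },
});
console.log(results);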
Development
Watch mode for development:
npm run watch
API Documentation
For complete API details, see the markdown files in the root directory:
- opus-00-get-workflow-details.md - Workflow schema retrieval
- opus-03-initiate-job.md - Job initiation
- opus-04-uploading-files-for-job-inputs.md - File uploads
- opus-05-execute-job.md - Job execution
- opus-06-get-job-execution-status.md - Status monitoring
- opus-07-get-job-execution-results.md - Results retrieval
- opus-08-job-audit-log.md - Audit logs
Base URL
All requests are made to: https://operator.opus.com
Troubleshooting
Error: OPUS_SERVICE_KEY environment variable is required
- Ensure your .env file exists with the correct key, or
- Set the environment variable in your MCP client configuration
Error: Authentication failed
- Verify your service key is valid and not expired
- Check that the key has proper permissions in Opus
File upload issues
- Ensure file extension matches the actual file type
- Use a PUT request without authentication headers for the presigned URL (see the curl example below)
- Verify file size limits (if any) in Opus documentation
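To manually verify a presigned URL outside the MCP server, a plain PUT from the shell works (the file name is a placeholder):
curl -X PUT --upload-file ./example.pdf "<presignedUrl from generate_file_upload_url>"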
License
MIT