Chicken Business Management MCP Server
Enables real-time voice-to-text order processing and chicken business management through WebSocket connections and REST APIs. Supports inventory tracking, sales parsing, stock forecasting, and note collection with AI-powered transcript correction and structured data extraction.
Add to .env: JWT_SECRET=your_jwt_secret_here (generate a strong one, e.g., via openssl rand -hex 32)
Real-Time WebSockets
For live voice parsing (e.g., spoken orders with transcription inaccuracies), connect over WebSockets on port 3002.
Client Example (Browser with Web Speech API)
```javascript
const ws = new WebSocket('ws://localhost:3002'); // add ?token=your-jwt for auth

const recognition = new (window.SpeechRecognition || window.webkitSpeechRecognition)();
recognition.continuous = true;
recognition.interimResults = true;

recognition.onresult = (event) => {
  for (let i = event.resultIndex; i < event.results.length; i++) {
    const chunk = event.results[i][0].transcript;
    ws.send(JSON.stringify({
      toolName: 'live_voice_stream',
      params: { streamId: 'session1', transcriptChunk: chunk, products: [{ id: 'whole', name: 'Whole Chicken' }] }
    }));
  }
};

ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  if (data.partialParse) console.log('Partial:', data.partialParse); // e.g., {items: [{productId: 'whole', qty: 2}], confidence: 0.8}
  if (data.final) console.log('Final sales:', data.final); // structured {items, payment}
  if (data.streamChunk) console.log('Gemini correction:', data.streamChunk);
};

// Start dictation only once the socket is open so no chunks are dropped.
ws.onopen = () => {
  console.log('Connected');
  recognition.start();
};
```
Server Protocol
- Connect: ws://localhost:3002 (upgrade with Authorization: Bearer <jwt>)
- Send: JSON {toolName: 'live_voice_stream', params: {streamId, transcriptChunk, products?}}
- Receive: Streamed {partialParse: {...}, confidence: number} or {final: {structuredSales: {...}}, streamId}
- Timeout: 5s for final parse; fuzzy + Gemini streaming handles "chikin" → "Whole Chicken".
Integrate with Todo 1 auth for secure streams.
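The fuzzy matching mentioned above ("chikin" → "Whole Chicken") can be sketched with a simple edit-distance match. The threshold and scoring here are illustrative assumptions, not the server's actual algorithm (which also streams through Gemini):

```javascript
// Illustrative fuzzy matcher: picks the product whose name contains the word
// closest to the spoken token by Levenshtein distance.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                      // deletion
        dp[i][j - 1] + 1,                                      // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)     // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

function fuzzyMatch(token, products, maxDistance = 3) {
  let best = null;
  for (const p of products) {
    for (const word of p.name.toLowerCase().split(/\s+/)) {
      const d = levenshtein(token.toLowerCase(), word);
      if (d <= maxDistance && (!best || d < best.distance)) {
        best = { product: p, distance: d };
      }
    }
  }
  return best ? best.product : null;
}

// "chikin" is within edit distance 2 of "chicken", so it matches:
console.log(fuzzyMatch('chikin', [{ id: 'whole', name: 'Whole Chicken' }]));
```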
Database Migrations
The server auto-migrates the database on startup using the service role: it runs sql/enhanced-database-schema.sql if tables are missing (idempotent check against the notes table).
- Auto: on npm run dev/start, migrate() is called in the constructor; results are logged to ai_audit_logs.
- Manual: build (npm run build), then run node dist/migrate.js (standalone run; logs errors).
- Schema: edit sql/enhanced-database-schema.sql for changes (tables: notes, entities, relations, observations, ai_audit_logs, products, sales; pgvector indexes).
- Fallback: if migration fails, run the schema manually in the Supabase SQL editor (the service role bypasses RLS).
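The idempotent check can be sketched roughly as below. The client interface and the table probe are assumptions for illustration, not the server's actual code:

```javascript
// Sketch of an idempotent migration: probe a known table, run the schema only if missing.
// `db` is any client exposing query(sql) -> Promise; the names here are illustrative.
async function migrate(db, schemaSql) {
  const exists = await db
    .query('SELECT 1 FROM notes LIMIT 1')
    .then(() => true)
    .catch(() => false); // table missing: schema has not been applied yet
  if (exists) return { migrated: false, reason: 'notes table already present' };
  await db.query(schemaSql);
  return { migrated: true };
}

// Usage with a stub client (real code would use the Supabase service-role connection):
const stub = {
  tables: new Set(),
  query(sql) {
    if (sql.startsWith('SELECT') && !this.tables.has('notes')) return Promise.reject(new Error('missing'));
    if (!sql.startsWith('SELECT')) this.tables.add('notes');
    return Promise.resolve();
  },
};
migrate(stub, 'CREATE TABLE notes (...)').then((r) => console.log(r)); // { migrated: true }
```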
Scalability
For production scale, use cluster mode:
- Run: npm run start:cluster (set env WORKERS=4 for multi-core).
- Guide: see [MD files/scalability.md](MD files/scalability.md) for multi-instance setup, concurrency options (batch=5), and load balancing (sticky WebSocket sessions).
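The batch concurrency option (batch=5) amounts to processing work in fixed-size slices, running each slice concurrently while slices themselves run sequentially. A minimal sketch, with the helper names assumed:

```javascript
// Split pending jobs into batches of `size`.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) batches.push(items.slice(i, i + size));
  return batches;
}

// Each batch runs concurrently; batches run one after another to cap load.
async function processInBatches(jobs, worker, size = 5) {
  const results = [];
  for (const batch of chunk(jobs, size)) {
    results.push(...(await Promise.all(batch.map(worker))));
  }
  return results;
}

processInBatches([1, 2, 3, 4, 5, 6, 7], async (n) => n * 2).then(console.log);
// [2, 4, 6, 8, 10, 12, 14]
```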
API Integration Examples
Note Workflow (curl with JWT)
- Auth: curl -X POST http://localhost:3002/auth -H "Content-Type: application/json" -d '{"token":"your_mcp_auth_token"}' → {"token": "jwt_here"}
- Collect note: curl -X POST http://localhost:3002/api/tools/call -H "Authorization: Bearer jwt_here" -H "Content-Type: application/json" -d '{"name":"note_collection","arguments":{"content":"Bought 20 bags whole chicken for 10000 pesos","userRole":"owner"}}' → {"success":true,"result":{"note_id":"uuid","message":"Note collected"}}
- Parse: curl -X POST ... -d '{"name":"parse_chicken_note","arguments":{"note_id":"uuid"}}' → {"success":true,"result":{"parsed_data":{"purchases":[{"productId":"whole","qty":20,"cost":10000}],"status":"parsed"}}}
- Apply: curl -X POST ... -d '{"name":"apply_to_stock","arguments":{"note_id":"uuid"}}' → {"success":true,"message":"Stock updated"}
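The same three-step workflow can be driven from JavaScript. The endpoint paths come from the curl examples above; `callTool` and `runNoteWorkflow` are illustrative helpers, not part of the server SDK:

```javascript
// Build the body for the /api/tools/call endpoint shown above.
function toolCallBody(name, args) {
  return JSON.stringify({ name, arguments: args });
}

async function callTool(baseUrl, jwt, name, args) {
  const res = await fetch(`${baseUrl}/api/tools/call`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${jwt}`, 'Content-Type': 'application/json' },
    body: toolCallBody(name, args),
  });
  if (!res.ok) throw new Error(`Tool ${name} failed: ${res.status}`);
  return res.json();
}

// Collect -> parse -> apply, mirroring the curl workflow:
async function runNoteWorkflow(baseUrl, jwt, content) {
  const note = await callTool(baseUrl, jwt, 'note_collection', { content, userRole: 'owner' });
  const parsed = await callTool(baseUrl, jwt, 'parse_chicken_note', { note_id: note.result.note_id });
  await callTool(baseUrl, jwt, 'apply_to_stock', { note_id: note.result.note_id });
  return parsed;
}
```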
Deployment (Heroku/Codespace)
- Heroku: heroku create, git push heroku main, then heroku config:set GEMINI_API_KEY=... SUPABASE_URL=... JWT_SECRET=... WORKERS=4
- Procfile: web: npm run start:cluster
- Codespace: npm run dev:cluster (env WORKERS=2)
- Scale: heroku ps:scale web=2 (multi-dyno load balancing)
Troubleshooting
- Proxy retries: AdvancedGeminiProxy backs off and retries 3x on 429/5xx (check logs).
- Rate limits: 10 requests/min/user (per Todo 1; returns 429); increase via env MAX_REQUESTS_PER_MINUTE.
- DB issues: auto-migrate runs on start; run node dist/migrate.js manually if it fails (check ai_audit_logs).
- WS auth: connect with ws://localhost:3002?token=jwt_here for live_voice_stream.
More Examples
- Voice stream (WS): see the Real-Time WebSockets section above.
- Forecast: curl -X POST ... -d '{"name":"forecast_stock","arguments":{"salesHistory":[{"date":"2025-09-22","amount":5000}]}}' → {predictedSales: [{day: "1", amount: 5500, confidence: 0.8}], summary}
Deployment Notes
- Heroku env: GEMINI_API_KEY, SUPABASE_URL, SUPABASE_SERVICE_ROLE_KEY, MCP_AUTH_TOKEN, JWT_SECRET, PORT=3002, WORKERS=2
- Codespace: add postCreateCommand "npm i && npm run build" to .devcontainer/devcontainer.json, then run npm run dev:cluster (env WORKERS=4)
- See scalability.md for multi-instance setup.
Additional Troubleshooting
- Proxy retries: advanced-gemini-proxy.ts backs off on 429/5xx (env GEMINI_RETRY_MAX=3).
- Rate limits: on a 429 response, wait 60s before retrying.
- Auth errors: 401/403; verify JWT expiry (1h, refresh via /auth) and MCP_AUTH_TOKEN in .env.
- Tool errors: 422 validation failures (check schemas in openapi.yaml); inspect ai_audit_logs for details.
- WS disconnects: reconnect on close and pass the token in the query string (?token=<jwt>).
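A hedged reconnect sketch for the WS disconnect case, using exponential backoff; the delays, cap, and helper names are illustrative choices, not prescribed by the server:

```javascript
// Exponential backoff: 1s, 2s, 4s, ... capped at 30s.
function backoffDelay(attempt, baseMs = 1000, capMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

function connectWithRetry(url, token, attempt = 0) {
  const ws = new WebSocket(`${url}?token=${token}`); // pass the JWT in the query string
  ws.onopen = () => { attempt = 0; };                // reset backoff once connected
  ws.onclose = () => {
    setTimeout(() => connectWithRetry(url, token, attempt + 1), backoffDelay(attempt));
  };
  return ws;
}
```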