AI Customer Support Agent
An MCP-compatible server providing tools for retrieving order statuses, searching knowledge bases, and managing support tickets. It facilitates automated customer service interactions by exposing internal CRM and database functions through a JSON-RPC interface.
This project is a production-style starter for an AI customer support system built with FastAPI, a lightweight MCP-compatible tool server, and an OpenAI-compatible LLM client.
Features
- Answers customer questions with a knowledge base connector
- Retrieves order status from a database
- Creates support tickets for unresolved issues
- Summarizes conversations for human agents
- Detects likely escalation scenarios
- Exposes required tools through an MCP-style JSON-RPC endpoint
Project Structure
ai-support-agent/
├── backend/
│ ├── agent.py
│ ├── main.py
│ ├── mcp_server.py
│ ├── database/
│ │ └── models.py
│ └── tools/
│ ├── crm_tool.py
│ ├── kb_tool.py
│ ├── order_tool.py
│ └── ticket_tool.py
├── frontend/
│ └── simple_chat_ui.html
├── requirements.txt
└── README.md
Architecture
- FastAPI backend: serves the /chat API, health endpoint, and the simple browser UI.
- SupportAgent: orchestrates LLM responses and decides when to call tools.
- MCP server: exposes get_order_status, search_knowledge_base, create_support_ticket, and get_customer_details over JSON-RPC at /mcp.
- Connectors: isolated tool classes for orders, CRM, knowledge base, and ticketing.
- Database: SQLAlchemy models for customers, orders, and support_tickets.
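The MCP server's job boils down to routing JSON-RPC methods to tool handlers. Here is a minimal, self-contained sketch of that dispatch loop; the handler bodies and the TOOLS table are illustrative stand-ins, not the actual code in backend/mcp_server.py:

```python
# Hypothetical dispatch sketch; real handlers live in backend/tools/.
TOOLS = {
    "get_order_status": lambda args: {"order_id": args["order_id"], "status": "shipped"},
    "search_knowledge_base": lambda args: {"results": []},
}

def handle_rpc(request: dict) -> dict:
    """Route a JSON-RPC 2.0 request to the matching tool handler."""
    method = request.get("method")
    if method == "tools/list":
        result = {"tools": sorted(TOOLS)}
    elif method == "tools/call":
        params = request.get("params", {})
        tool = TOOLS.get(params.get("name"))
        if tool is None:
            return {"jsonrpc": "2.0", "id": request.get("id"),
                    "error": {"code": -32601, "message": "Unknown tool"}}
        result = tool(params.get("arguments", {}))
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}
```

Keeping the handlers in a plain dict makes it easy to register new connectors without touching the dispatch logic.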
Tech Choices
- Python 3.11+
- FastAPI
- SQLAlchemy
- SQLite by default, with a clear upgrade path to PostgreSQL
- OpenAI-compatible SDK via the openai Python package
Setup
- Create and activate a virtual environment.
- Install dependencies:
pip install -r requirements.txt
- Optional: configure an OpenAI-compatible provider.
export OPENAI_API_KEY="your_api_key_here"
export OPENAI_MODEL="gpt-4.1-mini"
export OPENAI_BASE_URL="https://api.openai.com/v1"
If OPENAI_API_KEY is not set, the app still runs in a rules-based fallback mode so you can test the flows locally.
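The mode switch can be as simple as checking for the key at startup. A minimal sketch (the real selection logic in backend/agent.py may differ):

```python
import os

def choose_mode() -> str:
    """Return "llm" when an API key is configured, otherwise "rules".

    Illustrative only: mirrors the fallback behavior described above.
    """
    return "llm" if os.environ.get("OPENAI_API_KEY") else "rules"
```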
Run
uvicorn backend.main:app --reload --host 0.0.0.0 --port 8000
Then open http://localhost:8000.
API Usage
POST /chat
Request:
{
"customer_id": 1,
"message": "Where is my order #45231?",
"conversation_history": []
}
Response:
{
"response": "Order #45231 for Noise-Cancelling Headphones is currently shipped. Carrier: FedEx. Tracking: ZX991245US. Estimated delivery: 2026-03-15.",
"used_tools": [
{
"name": "get_order_status",
"arguments": {
"order_id": "45231"
},
"result": {
"order_id": 45231,
"customer_id": 1,
"item_name": "Noise-Cancelling Headphones",
"status": "shipped",
"tracking_number": "ZX991245US",
"shipping_carrier": "FedEx",
"estimated_delivery": "2026-03-15",
"total_amount": 199.99
}
}
],
"escalated": false,
"conversation_summary": "Customer 1 asked: Where is my order #45231?. Agent responded: ...",
"llm_mode": false
}
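A small stdlib-only client makes the request/response shapes above concrete. This sketch assumes the server from the Run section is listening on localhost:8000; build_chat_request and ask_agent are illustrative helper names, not part of the project:

```python
import json
import urllib.request

CHAT_URL = "http://localhost:8000/chat"  # assumes the local server from the Run section

def build_chat_request(customer_id: int, message: str, history=None) -> dict:
    """Assemble a /chat request body matching the schema shown above."""
    return {
        "customer_id": customer_id,
        "message": message,
        "conversation_history": history or [],
    }

def ask_agent(customer_id: int, message: str) -> dict:
    """POST the message to /chat and return the parsed JSON response."""
    body = json.dumps(build_chat_request(customer_id, message)).encode()
    req = urllib.request.Request(
        CHAT_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(ask_agent(1, "Where is my order #45231?"))
```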
POST /mcp
Example initialization request:
{
"jsonrpc": "2.0",
"id": "1",
"method": "initialize",
"params": {}
}
Example tool list request:
{
"jsonrpc": "2.0",
"id": "2",
"method": "tools/list",
"params": {}
}
Example tool call request:
{
"jsonrpc": "2.0",
"id": "3",
"method": "tools/call",
"params": {
"name": "get_order_status",
"arguments": {
"order_id": "45231"
}
}
}
Database
The app creates and seeds a local SQLite database file named support_agent.db on startup with example customers, orders, and support tickets.
To migrate to PostgreSQL:
- replace DATABASE_URL in backend/database/models.py
- update the engine configuration
- add migrations with Alembic for production deployments
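The engine configuration can read DATABASE_URL from the environment so the SQLite default and PostgreSQL share one code path. A sketch of the idea, assuming the env-var name from this README (the exact wiring in backend/database/models.py may differ):

```python
import os
from sqlalchemy import create_engine

# Falls back to the bundled SQLite file; point DATABASE_URL at PostgreSQL,
# e.g. postgresql+psycopg2://user:pass@host/dbname, to migrate.
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///./support_agent.db")

engine = create_engine(
    DATABASE_URL,
    # SQLite needs this flag under FastAPI's threaded request handling;
    # the connect_args are skipped entirely for other backends.
    connect_args={"check_same_thread": False} if DATABASE_URL.startswith("sqlite") else {},
    pool_pre_ping=True,  # detect and replace stale pooled connections
)
```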
Example Agent Flow
User message:
Where is my order #45231?
Expected flow:
- Agent detects an order-tracking request.
- Agent calls get_order_status(order_id).
- Tool returns shipping carrier, tracking number, and estimated delivery.
- Agent responds with a concise customer-facing answer.
Error Handling
- Invalid order IDs return 400
- Unknown customers return 400
- Unexpected backend failures return 500
500 - Tool errors are surfaced in structured JSON-RPC format on the MCP endpoint
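For the MCP endpoint, "structured JSON-RPC format" means the standard error object from the JSON-RPC 2.0 spec. A sketch of a helper that builds it, assuming the endpoint reuses the spec's reserved codes:

```python
# Reserved JSON-RPC 2.0 error codes.
PARSE_ERROR = -32700
INVALID_REQUEST = -32600
METHOD_NOT_FOUND = -32601
INTERNAL_ERROR = -32603

def rpc_error(request_id, code: int, message: str, data=None) -> dict:
    """Wrap a tool failure in the structured JSON-RPC error shape."""
    error = {"code": code, "message": message}
    if data is not None:
        error["data"] = data  # optional extra context, e.g. the failing tool name
    return {"jsonrpc": "2.0", "id": request_id, "error": error}
```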
Notes
- The knowledge base is intentionally simple and in-memory for easy extension.
- The ticketing and CRM connectors are implemented as modular service classes so they can be swapped with real APIs later.
- For a production deployment, add authentication, persistent conversation storage, rate limiting, and observability.