# MCP Website Chatbot

A production-grade AI chatbot for srinivasanramanujam.sbs with live data retrieval via MCP (Model Context Protocol) and RAG (Retrieval-Augmented Generation).
## Features

- **Live Data Integration**: MCP tools for real-time information retrieval
- **RAG Support**: static knowledge base built from website content, blogs, and FAQs
- **Hallucination Prevention**: strict guardrails against fabrication and misinformation
- **Modern UI**: clean, responsive chat interface
- **Production-Ready**: scalable backend with proper error handling
- **Health Monitoring**: built-in health checks and uptime tracking
## Requirements

- Node.js 16+
- npm or yarn
- OpenAI API key (for production use)
## Installation

```bash
# Install dependencies
npm install

# Create the .env file
cat > .env << EOF
PORT=3000
OPENAI_API_KEY=your_key_here
EOF

# Start the development server
npm run dev
```
## Project Structure

```
├── server.js            # Express server with chat API
├── public/
│   └── index.html       # Chat UI
├── system_prompt.txt    # System prompt for the chatbot
└── package.json         # Dependencies
```
## API Endpoints

### POST /api/chat

Send a message and receive a response.

Request:

```json
{
  "message": "What's new on the website?",
  "conversationHistory": []
}
```

Response:

```json
{
  "success": true,
  "message": "Response text...",
  "context": {
    "requiresLiveData": true,
    "toolsUsed": ["fetchLiveData"],
    "timestamp": "2026-01-12T10:30:00Z"
  }
}
```
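For reference, a minimal Node.js client for this endpoint could look like the sketch below. `buildChatRequest` is an illustrative helper name, not part of the project code, and the validation rules are assumptions.

```javascript
// Illustrative client helper for POST /api/chat. The helper name and
// validation rules are assumptions, not part of the server code.
function buildChatRequest(message, conversationHistory = []) {
  if (typeof message !== 'string' || message.trim() === '') {
    throw new Error('message must be a non-empty string');
  }
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message, conversationHistory }),
  };
}

// Usage against a local dev server (Node 18+ ships a global fetch):
// const res = await fetch('http://localhost:3000/api/chat',
//   buildChatRequest("What's new on the website?"));
// const data = await res.json();
```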
### GET /api/health

Check server health.

Response:

```json
{
  "status": "healthy",
  "timestamp": "2026-01-12T10:30:00Z",
  "uptime": 3600
}
```
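A sketch of how the handler behind this endpoint could compute its payload; `healthPayload` is an assumed name, and in the real server this would be wired into an Express route.

```javascript
// Illustrative sketch of the /api/health payload. healthPayload is an
// assumed name, not taken from server.js.
function healthPayload(startTimeMs, nowMs = Date.now()) {
  return {
    status: 'healthy',
    timestamp: new Date(nowMs).toISOString(),
    uptime: Math.floor((nowMs - startTimeMs) / 1000), // seconds since start
  };
}

// Express wiring (sketch):
// app.get('/api/health', (req, res) => res.json(healthPayload(serverStart)));
```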
### GET /api/system-prompt

Retrieve the system prompt (for debugging).
## How It Works

1. The user sends a message via the chat UI.
2. The server determines whether live data is needed (time-sensitive queries, external sources).
3. If so, MCP tools are invoked to fetch real-time data.
4. A response is generated following the system prompt guidelines.
5. The assistant replies with proper citations and source attribution.
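The live-data decision in the flow above could be as simple as a keyword heuristic. This is a sketch of one possible approach; the hint list and function name are illustrative, not the server's actual logic.

```javascript
// Illustrative keyword heuristic for deciding when to fetch live data.
// The hint list is an assumption, not the server's actual logic.
const LIVE_DATA_HINTS = ['today', 'latest', 'current', 'right now', 'new', 'recent'];

function requiresLiveData(message) {
  const text = message.toLowerCase();
  return LIVE_DATA_HINTS.some((hint) => text.includes(hint));
}
```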
## Security Features

- No system prompt exposure to users
- Input validation and sanitization
- Rate limiting ready (add middleware as needed)
- Error handling that does not leak internal details
- CORS headers (add before deploying to production)
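As a starting point for the rate-limiting item, here is a minimal in-memory sliding-window limiter. It is a sketch, not shipped with the project; for production, a maintained middleware such as express-rate-limit is the usual choice.

```javascript
// Minimal in-memory sliding-window rate limiter (illustrative sketch).
// Not suitable for multi-process deployments; use shared storage there.
function createRateLimiter({ windowMs = 60000, max = 30 } = {}) {
  const hits = new Map(); // key (e.g. client IP) -> recent request timestamps
  return function isAllowed(key, now = Date.now()) {
    const recent = (hits.get(key) || []).filter((t) => now - t < windowMs);
    if (recent.length >= max) {
      hits.set(key, recent);
      return false; // over the limit for this window
    }
    recent.push(now);
    hits.set(key, recent);
    return true;
  };
}
```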
## Deployment

### Option 1: Vercel (Recommended)

```bash
npm install -g vercel
vercel
```

### Option 2: Heroku

```bash
heroku create your-app-name
git push heroku main
```

### Option 3: Docker

Create a Dockerfile:

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
```
## Customization

### Update Website Info

Edit `server.js` and update the system prompt or knowledge base.

### Change UI Theme

Modify the CSS in `public/index.html` (gradient colors and styling).

### Add Real API Integration

Replace the mock MCP tools in `server.js` with real OpenAI/Claude API calls.
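A hedged sketch of what that replacement could look like using OpenAI's Chat Completions REST API. The model name and helper names are assumptions; the network call is shown but left for you to wire in.

```javascript
// Sketch of replacing a mock tool with a real OpenAI call. buildPayload is
// an illustrative helper; the endpoint and message shape follow the public
// Chat Completions API.
function buildPayload(systemPrompt, message, history = []) {
  return {
    model: 'gpt-4o-mini', // assumed model choice
    messages: [
      { role: 'system', content: systemPrompt },
      ...history,
      { role: 'user', content: message },
    ],
  };
}

async function chatCompletion(apiKey, payload) {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`OpenAI API error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```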
## System Prompt Highlights

- **Live-first philosophy**: prioritizes current data over static knowledge
- **Hallucination prevention**: refuses to guess or invent information
- **Transparent reasoning**: cites sources and explains reasoning
- **Professional tone**: clear, concise, helpful communication
- **Safety guardrails**: rejects prompt injection and abuse
## Next Steps for Production

- **Integrate the OpenAI/Claude API**: replace mock responses
- **Add an MCP server**: connect to real external tools
- **Set up a database**: store conversations and user data securely
- **Add authentication**: protect sensitive endpoints
- **Configure CORS**: allow cross-origin requests from your domain
- **Enable logging**: monitor and debug in production
- **Add rate limiting**: prevent abuse and control costs
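For the CORS step, one minimal approach is to whitelist the site's own origin. `ALLOWED_ORIGIN` and `corsHeaders` are illustrative names, and the deployment origin is an assumption.

```javascript
// Sketch of origin-whitelisted CORS headers (assumed deployment origin).
const ALLOWED_ORIGIN = 'https://srinivasanramanujam.sbs';

function corsHeaders(origin) {
  if (origin !== ALLOWED_ORIGIN) return {}; // unknown origins get no CORS grant
  return {
    'Access-Control-Allow-Origin': origin,
    'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
    'Access-Control-Allow-Headers': 'Content-Type',
  };
}

// Express wiring (sketch):
// app.use((req, res, next) => {
//   Object.entries(corsHeaders(req.headers.origin || '')).forEach(
//     ([k, v]) => res.setHeader(k, v));
//   next();
// });
```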
## Support

For questions or issues, contact the site owner at srinivasanramanujam.sbs.

## License

MIT License. See the LICENSE file for details.