MCP Website Chatbot

A production-grade AI chatbot for srinivasanramanujam.sbs with live data retrieval via MCP (Model Context Protocol) and RAG (Retrieval-Augmented Generation).

šŸš€ Features

  • Live Data Integration – MCP tools for real-time information retrieval
  • RAG Support – Static knowledge base from website content, blogs, and FAQs
  • Hallucination Prevention – Strict guardrails against fabrication and misinformation
  • Beautiful UI – Modern, responsive chat interface
  • Production-Ready – Scalable backend with proper error handling
  • Health Monitoring – Built-in health checks and uptime tracking

šŸ“‹ Requirements

  • Node.js 16+
  • npm or yarn
  • OpenAI API key (for production use)

šŸ› ļø Installation

# Install dependencies
npm install

# Create .env file
cat > .env << EOF
PORT=3000
OPENAI_API_KEY=your_key_here
EOF

# Start the server
npm run dev

šŸ“ Project Structure

ā”œā”€ā”€ server.js           # Express server with chat API
ā”œā”€ā”€ public/
│   └── index.html      # Chat UI
ā”œā”€ā”€ system_prompt.txt   # System prompt for the chatbot
└── package.json        # Dependencies

šŸ”Œ API Endpoints

POST /api/chat

Send a message and get a response.

Request:

{
  "message": "What's new on the website?",
  "conversationHistory": []
}

Response:

{
  "success": true,
  "message": "Response text...",
  "context": {
    "requiresLiveData": true,
    "toolsUsed": ["fetchLiveData"],
    "timestamp": "2026-01-12T10:30:00Z"
  }
}
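
A minimal client-side call might look like the sketch below. It assumes the server is running locally on the default port from the .env example; Node 18+ provides a global fetch (on Node 16, a package such as node-fetch would be needed).

// Sketch only: the URL and port are assumptions based on the .env example above.
async function askChatbot(message, conversationHistory = []) {
  const res = await fetch('http://localhost:3000/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message, conversationHistory })
  });
  const data = await res.json();   // shape documented above
  return data.message;
}

askChatbot("What's new on the website?").then(console.log);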

GET /api/health

Check server health.

Response:

{
  "status": "healthy",
  "timestamp": "2026-01-12T10:30:00Z",
  "uptime": 3600
}
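
The handler behind this endpoint can be as small as the following illustrative sketch (the actual server.js may differ); process.uptime() returns seconds, matching the example value of 3600.

const express = require('express');   // already a project dependency
const app = express();

// Illustrative handler matching the documented response shape.
app.get('/api/health', (req, res) => {
  res.json({
    status: 'healthy',
    timestamp: new Date().toISOString(),
    uptime: Math.round(process.uptime())   // seconds since the process started
  });
});

app.listen(process.env.PORT || 3000);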

GET /api/system-prompt

Retrieve the system prompt (for debugging).

šŸŽÆ How It Works

  1. User sends a message via the chat UI
  2. Server determines whether live data is needed (time-sensitive queries, external sources)
  3. MCP tools are invoked if necessary to fetch real-time data
  4. Response is generated using the system prompt guidelines
  5. Assistant responds with proper citations and source attribution
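
The outline below mirrors those steps inside an Express route. The helper names needsLiveData, fetchLiveData, and generateAnswer are illustrative placeholders, not the actual identifiers in server.js.

// Hypothetical shape of the /api/chat handler described above.
app.post('/api/chat', async (req, res) => {
  const { message, conversationHistory = [] } = req.body;

  // Step 2: decide whether the question is time-sensitive or needs external sources.
  const requiresLiveData = needsLiveData(message);

  // Step 3: invoke MCP tools only when live data is required.
  const liveContext = requiresLiveData ? await fetchLiveData(message) : null;

  // Steps 4-5: generate the reply under the system prompt guidelines, with citations.
  const reply = await generateAnswer(message, conversationHistory, liveContext);

  res.json({
    success: true,
    message: reply,
    context: {
      requiresLiveData,
      toolsUsed: requiresLiveData ? ['fetchLiveData'] : [],
      timestamp: new Date().toISOString()
    }
  });
});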

šŸ” Security Features

  • āœ… No system prompt exposure to users
  • āœ… Input validation and sanitization
  • āœ… Rate limiting ready (add middleware as needed; see the sketch below)
  • āœ… Error handling without leaking internal details
  • āœ… CORS headers (add before deploying to production; see the sketch below)
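
For the rate-limiting and CORS items, one possible wiring uses the cors and express-rate-limit packages. Both are assumptions that would need to be installed (npm install cors express-rate-limit), and the origin and limits below are placeholders.

// Hypothetical middleware wiring for the rate-limiting and CORS items above.
const cors = require('cors');
const rateLimit = require('express-rate-limit');

app.use(cors({ origin: 'https://srinivasanramanujam.sbs' }));  // restrict to the site's domain
app.use('/api/', rateLimit({
  windowMs: 15 * 60 * 1000,   // 15-minute window
  max: 100                    // placeholder: at most 100 requests per window per IP
}));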

🌐 Deployment

Option 1: Vercel (Recommended)

npm install -g vercel
vercel

Option 2: Heroku

heroku create your-app-name
git push heroku main

Option 3: Docker

Create a Dockerfile:

FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
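
Build and run the image with the usual Docker commands, e.g. docker build -t mcp-website-chatbot . followed by docker run -p 3000:3000 --env-file .env mcp-website-chatbot (the image name here is arbitrary).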

šŸŽØ Customization

Update Website Info

Edit server.js and update the system prompt or knowledge base.

Change UI Theme

Modify the gradient colors and other styling in the CSS inside public/index.html.

Add Real API Integration

Replace mock MCP tools in server.js with real OpenAI/Claude API calls.
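
A minimal sketch of that swap, assuming the official openai npm package (npm install openai) and the OPENAI_API_KEY from the .env example; the model name is a placeholder, and generateAnswer is the same illustrative helper used earlier, not the current code in server.js.

// Sketch only – not the existing mock implementation.
const OpenAI = require('openai');
const fs = require('fs');

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const systemPrompt = fs.readFileSync('system_prompt.txt', 'utf8');

async function generateAnswer(message, conversationHistory = [], liveContext = null) {
  const completion = await client.chat.completions.create({
    model: 'gpt-4o-mini',   // placeholder model name
    messages: [
      { role: 'system', content: systemPrompt },
      ...conversationHistory,
      ...(liveContext
        ? [{ role: 'system', content: `Live data: ${JSON.stringify(liveContext)}` }]
        : []),
      { role: 'user', content: message }
    ]
  });
  return completion.choices[0].message.content;
}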

šŸ“ System Prompt Highlights

  • Live-first philosophy – Prioritizes current data over static knowledge
  • Hallucination prevention – Refuses to guess or invent information
  • Transparent reasoning – Cites sources and explains reasoning
  • Professional tone – Clear, concise, helpful communication
  • Safety guardrails – Rejects prompt injection and abuse

🚦 Next Steps for Production

  1. Integrate OpenAI/Claude API – Replace mock responses
  2. Add MCP server – Real connection to external tools
  3. Set up database – Store conversations and user data securely
  4. Add authentication – Protect sensitive endpoints (see the sketch after this list)
  5. Configure CORS – Allow cross-origin requests from your domain
  6. Enable logging – Monitor and debug in production
  7. Add rate limiting – Prevent abuse and control costs
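
For item 4, a very small API-key check is one possible starting point. The x-api-key header name and the ADMIN_API_KEY environment variable are assumptions, not part of the current code.

// Hypothetical API-key middleware for protecting sensitive endpoints.
function requireApiKey(req, res, next) {
  if (req.get('x-api-key') !== process.env.ADMIN_API_KEY) {
    return res.status(401).json({ success: false, message: 'Unauthorized' });
  }
  next();
}

// Register before the existing routes so the debugging endpoint requires the key.
app.use('/api/system-prompt', requireApiKey);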

šŸ“§ Support

For questions or issues, contact the site owner at srinivasanramanujam.sbs

šŸ“„ License

MIT License – See LICENSE file for details
