# MCP Website Chatbot
A production-grade AI chatbot for srinivasanramanujam.sbs with live data retrieval via MCP (Model Context Protocol) and RAG (Retrieval-Augmented Generation).
## Features
- **Live Data Integration** – MCP tools for real-time information retrieval
- **RAG Support** – Static knowledge base from website content, blogs, and FAQs
- **Hallucination Prevention** – Strict guardrails against fabrication and misinformation
- **Beautiful UI** – Modern, responsive chat interface
- **Production-Ready** – Scalable backend with proper error handling
- **Health Monitoring** – Built-in health checks and uptime tracking
## Requirements
- Node.js 16+
- npm or yarn
- OpenAI API key (for production use)
## Installation
```bash
# Install dependencies
npm install
# Create .env file
cat > .env << EOF
PORT=3000
OPENAI_API_KEY=your_key_here
EOF
# Start the server
npm run dev
```
## Project Structure
```
├── server.js            # Express server with chat API
├── public/
│   └── index.html       # Chat UI
├── system_prompt.txt    # System prompt for the chatbot
└── package.json         # Dependencies
```
## API Endpoints
### POST /api/chat
Send a message and get a response.
**Request:**
```json
{
  "message": "What's new on the website?",
  "conversationHistory": []
}
```
**Response:**
```json
{
  "success": true,
  "message": "Response text...",
  "context": {
    "requiresLiveData": true,
    "toolsUsed": ["fetchLiveData"],
    "timestamp": "2026-01-12T10:30:00Z"
  }
}
```
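From a browser or Node client, the endpoint can be called like this (a sketch; it assumes the dev server from the installation step is listening on `http://localhost:3000`):

```javascript
// Build the fetch options for POST /api/chat.
function buildChatRequest(message, conversationHistory = []) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message, conversationHistory }),
  };
}

// Usage (with the server running):
// const res = await fetch("http://localhost:3000/api/chat",
//   buildChatRequest("What's new on the website?"));
// const data = await res.json();
```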
### GET /api/health
Check server health.
**Response:**
```json
{
  "status": "healthy",
  "timestamp": "2026-01-12T10:30:00Z",
  "uptime": 3600
}
```
### GET /api/system-prompt
Retrieve the system prompt (for debugging).
## How It Works
1. **User sends a message** via the chat UI
2. **Server analyzes** whether live data is needed (time-sensitive queries, external sources)
3. **MCP tools are invoked** if necessary to fetch real-time data
4. **Response is generated** using the system prompt guidelines
5. **Assistant responds** with proper citations and source attribution
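Steps 2 and 3 can be sketched as follows (an illustrative outline, not the exact code in `server.js`; the keyword list and function names are assumptions):

```javascript
// Step 2: a simple heuristic for deciding whether a question needs live data.
const LIVE_DATA_HINTS = ["today", "latest", "current", "now", "new"];

function requiresLiveData(message) {
  const lower = message.toLowerCase();
  return LIVE_DATA_HINTS.some((hint) => lower.includes(hint));
}

// Step 3: invoke a (mock) MCP tool only when live data is required.
async function fetchLiveData(query) {
  // In production this would call a real MCP server.
  return { source: "mock", query, fetchedAt: new Date().toISOString() };
}

async function handleChat(message) {
  const needsLive = requiresLiveData(message);
  const context = needsLive ? await fetchLiveData(message) : null;
  return { needsLive, context };
}
```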
## Security Features
- ✅ No system prompt exposure to users
- ✅ Input validation and sanitization
- ✅ Rate limiting ready (add middleware as needed)
- ✅ Error handling without leaking internal details
- ✅ CORS headers (add if deploying to production)
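For rate limiting, a minimal in-memory fixed-window limiter could look like this (illustrative only; in production a package such as `express-rate-limit` is the usual choice):

```javascript
// Returns a function that allows at most `max` hits per key per `windowMs`.
function createRateLimiter({ windowMs, max }) {
  const hits = new Map(); // key -> { count, windowStart }
  return function isAllowed(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      // Start a fresh window for this key.
      hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= max;
  };
}
```

In an Express route, `key` would typically be the client IP (`req.ip`), and a disallowed request would get a `429` response.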
## Deployment
### Option 1: Vercel (Recommended)
```bash
npm install -g vercel
vercel
```
### Option 2: Heroku
```bash
heroku create your-app-name
git push heroku main
```
### Option 3: Docker
Create a `Dockerfile`:
```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
```
## Customization
### Update Website Info
Edit `server.js` and update the system prompt or knowledge base.
### Change UI Theme
Modify the gradient colors and styling in the CSS of `public/index.html`.
### Add Real API Integration
Replace mock MCP tools in `server.js` with real OpenAI/Claude API calls.
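A sketch of what that replacement could look like (assumptions: `client` is an instance of the official `openai` npm package, e.g. `new OpenAI({ apiKey: process.env.OPENAI_API_KEY })`, and the model name is only an example):

```javascript
// Assemble the messages array expected by the Chat Completions API.
function buildMessages(systemPrompt, history, userMessage) {
  return [
    { role: "system", content: systemPrompt },
    ...history,
    { role: "user", content: userMessage },
  ];
}

// Replace the mock response with a real model call.
async function generateReply(client, systemPrompt, history, userMessage) {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // example model name
    messages: buildMessages(systemPrompt, history, userMessage),
  });
  return completion.choices[0].message.content;
}
```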
## System Prompt Highlights
- **Live-first philosophy** – Prioritizes current data over static knowledge
- **Hallucination prevention** – Refuses to guess or invent information
- **Transparent reasoning** – Cites sources and explains reasoning
- **Professional tone** – Clear, concise, helpful communication
- **Safety guardrails** – Rejects prompt injection and abuse
## Next Steps for Production
1. **Integrate OpenAI/Claude API** – Replace mock responses
2. **Add MCP server** – Real connection to external tools
3. **Set up database** – Store conversations and user data securely
4. **Add authentication** – Protect sensitive endpoints
5. **Configure CORS** – Allow cross-origin requests from your domain
6. **Enable logging** – Monitor and debug in production
7. **Add rate limiting** – Prevent abuse and control costs
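Step 5 could be sketched as a hand-rolled Express-style middleware (illustrative; in production the `cors` npm package is the usual choice, and the allowed origin below is an assumption based on the site domain):

```javascript
// Only allow cross-origin requests from the site's own domain.
const ALLOWED_ORIGIN = "https://srinivasanramanujam.sbs";

function corsMiddleware(req, res, next) {
  if (req.headers.origin === ALLOWED_ORIGIN) {
    res.setHeader("Access-Control-Allow-Origin", ALLOWED_ORIGIN);
    res.setHeader("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
    res.setHeader("Access-Control-Allow-Headers", "Content-Type");
  }
  // Answer preflight requests directly, without hitting the route handlers.
  if (req.method === "OPTIONS") {
    res.statusCode = 204;
    return res.end();
  }
  next();
}
```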
## Support
For questions or issues, contact the site owner at srinivasanramanujam.sbs
## License
MIT License – see the LICENSE file for details.