# Demo Scripts - Quick Reference
## Quick Start
```bash
./ask.sh # Conversational mode (recommended)
./ask.sh --simple # Simple mode (no memory)
```
---
## Available Demo Scripts
### 1. **conversational_demo.py** - WITH Memory (RECOMMENDED)
**Best for:** Natural conversations, follow-up questions
```bash
python scripts/conversational_demo.py
```
**Features:**
- ✅ Remembers conversation history
- ✅ Handles "it", "them", "that one" references
- ✅ Perfect for drilling down on topics
- ✅ Commands: `history`, `clear`, `help`, `stats`
**Example:**
```
You: What OLED TVs are available?
AI: We have OLED in sizes 42", 48", 55"...
You: Which is cheapest? ← "Which" refers to OLED TVs
AI: The 42" model at $899.99
You: How many in stock? ← "How many" refers to 42" OLED
AI: 201 units across 3 warehouses
```
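Under the hood, this behaviour amounts to keeping a running list of (question, answer) turns and passing it along with each new query. The sketch below illustrates only that idea; the `HybridRAG` class, its `from_config()` constructor, and the `answer(question, history=...)` signature are assumptions made for the example, not the actual interface used by `conversational_demo.py`.

```python
# Minimal sketch of a conversational loop with memory (illustrative only).
# HybridRAG, from_config(), and answer(question, history=...) are assumed
# names; see scripts/conversational_demo.py for the real implementation.
from hybrid_rag import HybridRAG  # hypothetical import path

def chat() -> None:
    rag = HybridRAG.from_config("config/config.yaml")  # assumed constructor
    history: list[tuple[str, str]] = []                # (question, answer) pairs

    while True:
        question = input("You: ").strip()
        if question.lower() in {"exit", "quit"}:
            break
        if question.lower() == "clear":
            history.clear()                            # same effect as the `clear` command
            continue
        if question.lower() == "history":
            for q, a in history:                       # replay prior turns
                print(f"You: {q}\nAI: {a}")
            continue

        # Prior turns let the model resolve references like "it" or "which".
        answer = rag.answer(question, history=history)
        history.append((question, answer))
        print(f"AI: {answer}")

if __name__ == "__main__":
    chat()
```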
---
### 2. **interactive_demo.py** - No Memory
**Best for:** Exploring different unrelated topics
```bash
python scripts/interactive_demo.py
# Or single question:
python scripts/interactive_demo.py --query "What OLED TVs are available?"
```
**Features:**
- ✅ Fast and simple
- ✅ Each question is independent
- ✅ Commands: `help`, `stats`
- ❌ No conversation memory
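Because every question stands alone, a stateless version of the loop is simpler: it just skips the history. A minimal sketch, reusing the same hypothetical `HybridRAG` interface as above and an assumed `--query` flag handled with `argparse`:

```python
# Minimal sketch of a stateless question loop with an optional --query flag
# (illustrative only; the HybridRAG interface is assumed, not the real code).
import argparse
from hybrid_rag import HybridRAG  # hypothetical import path

def main() -> None:
    parser = argparse.ArgumentParser(description="Ask independent questions")
    parser.add_argument("--query", help="Answer a single question and exit")
    args = parser.parse_args()

    rag = HybridRAG.from_config("config/config.yaml")  # assumed constructor

    if args.query:                       # single-shot mode
        print(rag.answer(args.query))
        return

    while True:                          # interactive mode, no history kept
        question = input("You: ").strip()
        if question.lower() in {"exit", "quit"}:
            break
        print(f"AI: {rag.answer(question)}")

if __name__ == "__main__":
    main()
```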
---
### 3. **run_demo.py** - Basic Demo
**Best for:** Testing the system
```bash
python scripts/run_demo.py
```
**Features:**
- ✅ Runs 2 predefined queries
- ✅ Shows the retrieval process
- ❌ No interactivity
---
## Commands (in interactive/conversational modes)
| Command | Conversational | Interactive | Description |
|---------|---------------|-------------|-------------|
| `help` | ✅ | ✅ | Show example questions |
| `stats` | ✅ | ✅ | System statistics |
| `history` | ✅ | ❌ | Show conversation history |
| `clear` | ✅ | ❌ | Clear conversation memory |
| `exit`, `quit` | ✅ | ✅ | Exit the program |
---
## Comparison
| Feature | Conversational | Interactive | Basic |
|---------|---------------|-------------|-------|
| **Conversation Memory** | ✅ | ❌ | ❌ |
| **Follow-up Questions** | ✅ | ❌ | ❌ |
| **Multiple Questions** | ✅ | ✅ | ❌ (only 2) |
| **Custom Questions** | ✅ | ✅ | ❌ |
| **Conversation History** | ✅ | ❌ | ❌ |
| **Speed** | Medium | Fast | Fast |
| **Best For** | Conversations | Exploration | Testing |
---
## Which One Should I Use?
### Use **Conversational Mode** when:
- ✅ You're asking follow-up questions
- ✅ You're having a conversation about a topic
- ✅ You're drilling down into details
- ✅ You want a natural back-and-forth
### Use **Interactive Mode** when:
- ✅ You're asking unrelated questions
- ✅ You want maximum speed
- ✅ You don't need context between questions
### Use **Basic Demo** when:
- ✅ You just want to check that the system works
- ✅ You're learning how the code works
---
## Common Usage Patterns
### Pattern 1: Topic Investigation (Use Conversational)
```bash
python scripts/conversational_demo.py
You: What products have warranty issues?
You: Tell me more about those issues
You: Which supplier is responsible?
You: Show me their quality ratings
You: What are our alternatives?
```
### Pattern 2: Quick Lookups (Use Interactive)
```bash
python scripts/interactive_demo.py
You: What OLED TVs are available?
You: What's in Warehouse-East?
You: Show me November sales
# Each question is independent
```
### Pattern 3: Single Question (Use Interactive with --query)
```bash
python scripts/interactive_demo.py --query "What products are low in stock?"
```
---
## Example Sessions
### Conversational Session (Natural Flow)
```
./ask.sh
You: What OLED TVs do we have?
AI: OLED sizes: 42", 48", 55", 65", 77", 83"
You: Price range?
AI: From $899 (42") to $3,499 (83")
You: Best seller?
AI: The 55" model with 400+ units sold in November
You: Any quality issues?
AI: 12 warranty claims, mostly dead pixels in Q4 batch
You: history
(Shows all 4 Q&A pairs)
You: clear
(Conversation reset)
You: What about LCD TVs?
AI: Fresh conversation about LCD...
```
### Interactive Session (Independent Questions)
```
python scripts/interactive_demo.py
You: What OLED TVs are available?
AI: OLED sizes: 42", 48", 55"...
You: Show shipping delays
AI: 15 shipments have delays...
You: Customer feedback on audio
AI: Soundbar ratings average 4.2/5...
# Each answer is independent
```
---
## Customization
### Change Models
Edit `config/config.yaml`:
```yaml
ollama:
  llm_model: "llama3.1:latest"          # Change LLM
  embedding_model: "nomic-embed-text"   # Change embeddings
```
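After editing, it can be worth confirming the file still parses. A quick check with PyYAML might look like the sketch below; how the demo scripts actually load their configuration is not documented here, so treat this as an illustration only.

```python
# Illustrative only: verify config/config.yaml parses and read the model names.
# The demo scripts may load their configuration differently.
import yaml

with open("config/config.yaml") as f:
    config = yaml.safe_load(f)

llm_model = config["ollama"]["llm_model"]              # e.g. "llama3.1:latest"
embedding_model = config["ollama"]["embedding_model"]  # e.g. "nomic-embed-text"
print(f"Using LLM {llm_model!r} with embeddings {embedding_model!r}")
```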
### Change Retrieval Settings
```yaml
retrieval:
  vector_search_k: 5    # More results = more context
  keyword_search_k: 5
  csv_weight: 0.4       # Adjust CSV vs. text balance
  text_weight: 0.6
```
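Conceptually, `csv_weight` and `text_weight` blend the scores from the CSV (structured) retriever and the text retriever before the top results are handed to the LLM. The function below only illustrates that weighting idea, under the assumption that each retriever returns comparable per-document scores; it is not the project's actual ranking code.

```python
# Conceptual sketch of weighted score blending (not the project's actual code).
# Assumes each retriever returns {doc_id: score} on a comparable scale.
def blend_scores(csv_scores: dict[str, float],
                 text_scores: dict[str, float],
                 csv_weight: float = 0.4,
                 text_weight: float = 0.6) -> dict[str, float]:
    combined: dict[str, float] = {}
    for doc_id, score in csv_scores.items():
        combined[doc_id] = combined.get(doc_id, 0.0) + csv_weight * score
    for doc_id, score in text_scores.items():
        combined[doc_id] = combined.get(doc_id, 0.0) + text_weight * score
    # Higher combined score = ranked earlier in the context passed to the LLM.
    return dict(sorted(combined.items(), key=lambda kv: kv[1], reverse=True))
```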
---
## Troubleshooting
### "Connection refused to Ollama"
```bash
# Start Ollama
ollama serve
# Verify
curl http://localhost:11434
```
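The same check can be done from Python with the `requests` library, assuming Ollama is listening on its default port (11434):

```python
# Quick reachability check for the local Ollama server (default port 11434).
import requests

try:
    response = requests.get("http://localhost:11434", timeout=5)
    print("Ollama reachable:", response.status_code)
except requests.ConnectionError:
    print("Ollama is not running - start it with `ollama serve`")
```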
### "No module named 'hybrid_rag'"
```bash
source .venv/bin/activate
pip install -e .
```
### Conversation getting confused
```
# In conversational mode, type:
clear
# Memory is reset and the conversation starts fresh
```
### Slow responses
```bash
# Use interactive mode instead (no history overhead)
./ask.sh --simple
```
---
## Learn More
- **CONVERSATION_MEMORY.md** - Deep dive into how memory works
- **QUICK_START.md** - Complete usage guide
- **ARCHITECTURE.md** - Technical details
- **USAGE_COMPARISON.md** - Detailed comparison
---
## Learning Path
### Day 1: Get Started
```bash
./ask.sh
# Ask questions, explore your data
```
### Day 2: Understand Memory
```bash
# Compare both modes
python scripts/conversational_demo.py
python scripts/interactive_demo.py
```
### Day 3: Customize
```bash
# Edit config/config.yaml
# Tune retrieval parameters
# Run boundary tests
```
---
## Quick Commands
```bash
# Easiest - conversational with memory
./ask.sh
# Simple mode - no memory
./ask.sh --simple
# Single question
python scripts/interactive_demo.py --query "Your question"
# Test system
python scripts/run_demo.py
# Performance test
python scripts/boundary_testing.py
```
---
**Bottom Line:**
New users → **`./ask.sh`** (conversational mode)
Power users → choose based on the task:
- Conversation = `conversational_demo.py`
- Quick lookups = `interactive_demo.py`
- Testing = `run_demo.py`