# 🚨 AI Development Rules: Critical Guidelines
**Based on Real-World Disasters and Recoveries**
**Project**: nest-protect MCP Server Development
**Timeline**: September 2025
**WARNING**: These rules prevent catastrophic AI-generated "solutions" that destroy working functionality.
---
## 🔥 RULE #1: NEVER REMOVE FUNCTIONALITY TO "FIX" PROBLEMS
### **The Disaster Pattern**
**What AI Tools Often Suggest**:
- ❌ "Let's simplify this to get it working"
- ❌ "Replace real API calls with mocks for now"
- ❌ "Delete the complex tools folder and put everything in one file"
- ❌ "Use placeholder functions until we debug the imports"
**Real Examples of Disasters**:
#### **The "Mock Replacement Disaster"**
```python
# ❌ DISASTER: AI suggestion to "fix" import errors
# "Let's replace the real tools with simple mocks to get the server loading"

@app.tool()
async def list_devices() -> Dict[str, Any]:
    return {"devices": ["mock-device-1", "mock-device-2"]}  # ❌ DESTROYED real functionality

@app.tool()
async def get_device_status(device_id: str) -> Dict[str, Any]:
    return {"status": "mock-status", "battery": 100}  # ❌ DESTROYED real API integration
```
**Result**: Half the tools replaced with useless mocks, real Nest API integration destroyed.
#### **The "Simplification Catastrophe"**
```python
# ❌ DISASTER: AI suggestion to "clean up the architecture"
# "Let's put all tools in the main server file to avoid import issues"
#
# Deleted entire tools/ folder structure
# Replaced 24 real tools with 20 mock functions
# Destroyed months of real API development work
```
**Result**: Complete project destruction requiring Git recovery.
### **Why This Happens**
**AI Logic Flaws**:
1. **Path of least resistance**: Mocks always "work" and seem to solve immediate problems
2. **Build-first mentality**: Prioritizes "getting it running" over preserving functionality
3. **Import error avoidance**: Removing real dependencies eliminates import errors
4. **Complexity fear**: Real API integration looks "complicated" compared to mocks
**Human Oversight Failures**:
1. **Pressure to fix**: When things are broken, any "solution" looks tempting
2. **Trust in AI**: Assuming AI understands the value of existing functionality
3. **Build tunnel vision**: Focusing on "make it run" instead of "preserve what works"
---
## 🛡️ RULE #2: REAL API IMPLEMENTATION ONLY - NO MOCKS
### **The "Real Implementation First" Principle**
**ABSOLUTE REQUIREMENT**: All tools must use **real API calls** to actual services/devices, not mock data.
**✅ ACCEPTABLE**:
```python
@app.tool()
async def get_device_status(device_id: str) -> Dict[str, Any]:
    """Get real-time status from actual Nest device."""
    try:
        state = get_app_state()
        if not state.access_token:
            return {"error": "Authentication required", "requires": "oauth_setup"}
        # ✅ REAL API CALL to Google Smart Device Management API
        async with aiohttp.ClientSession() as session:
            headers = {"Authorization": f"Bearer {state.access_token}"}
            url = f"https://smartdevicemanagement.googleapis.com/v1/{device_id}"
            async with session.get(url, headers=headers) as response:
                if response.status == 200:
                    data = await response.json()
                    return {"success": True, "device_data": data}
                else:
                    return {"error": f"API error: {response.status}"}
    except Exception as e:
        return {"error": f"Real API call failed: {e}"}
```
**❌ NEVER ACCEPTABLE**:
```python
@app.tool()
async def get_device_status(device_id: str) -> Dict[str, Any]:
    """Get device status."""
    # ❌ MOCK DATA - ABSOLUTELY FORBIDDEN
    return {
        "battery": 85,
        "status": "online",
        "last_test": "2025-09-01",
        "mock": True,  # ❌ This destroys all value
    }
```
### **Handling Legitimate API Unavailability**
**The ONLY exceptions** for non-real implementations:
#### **Exception 1: New Device Type (API Doesn't Exist Yet)**
```python
@app.tool(name="get_nest_thermostat_status_(mock)")  # ✅ Clear (mock) in name
async def get_nest_thermostat_status_mock(device_id: str) -> Dict[str, Any]:
    """
    Get Nest Thermostat status.

    ⚠️ MOCK IMPLEMENTATION: Real Nest Thermostat API integration pending.
    This will be replaced with real Google Smart Device Management API calls
    when Thermostat support is added to this MCP server.

    Planned implementation: Q1 2026
    """
    return {
        "error": "Not implemented yet",
        "status": "mock_placeholder",
        "message": "Nest Thermostat support coming Q1 2026",
        "use_instead": ["list_devices", "get_device_status"],  # ✅ Point to real tools
    }
```
#### **Exception 2: External API Temporarily Down**
```python
@app.tool()
async def get_device_status(device_id: str) -> Dict[str, Any]:
    """Get real-time status from actual Nest device."""
    try:
        # ✅ ALWAYS attempt real API call first
        result = await make_real_nest_api_call(device_id)
        return {"success": True, "data": result}
    except APIUnavailableError as e:
        # ✅ Real API failure, not a mock
        return {
            "error": "Google Smart Device Management API temporarily unavailable",
            "details": str(e),
            "retry_in": "5-10 minutes",
            "last_known_status": await get_cached_status(device_id),  # ✅ Real cached data
            "note": "This is real cached data, not mock data",
        }
    except Exception as e:
        return {"error": f"Real API call failed: {e}"}
```
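The `get_cached_status` helper above is assumed rather than shown. A minimal sketch of a TTL cache that could back it - class and method names here are illustrative, not part of the project's real code:

```python
import time
from typing import Any, Dict, Optional, Tuple

class StatusCache:
    """Remember the last real API response per device, with a freshness window."""

    def __init__(self, ttl_seconds: float = 300.0) -> None:
        self._ttl = ttl_seconds
        self._entries: Dict[str, Tuple[float, Dict[str, Any]]] = {}

    def store(self, device_id: str, status: Dict[str, Any]) -> None:
        # Called after every successful real API call.
        self._entries[device_id] = (time.monotonic(), status)

    def get(self, device_id: str) -> Optional[Dict[str, Any]]:
        # Returns real (possibly stale) data, or None - never fabricated data.
        entry = self._entries.get(device_id)
        if entry is None:
            return None
        stored_at, status = entry
        if time.monotonic() - stored_at > self._ttl:
            return None
        return status

cache = StatusCache(ttl_seconds=300)
cache.store("device-1", {"battery": "ok"})
print(cache.get("device-1"))  # {'battery': 'ok'}
print(cache.get("unknown"))   # None
```

The key property: the cache only ever holds data that came from a real API call, so serving from it during an outage never crosses the line into mocking.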
#### **Exception 3: Development/Testing Environment**
```python
@app.tool()
async def get_device_status(device_id: str) -> Dict[str, Any]:
    """Get real-time status from actual Nest device."""
    # ✅ Environment detection
    if os.getenv("NEST_DEVELOPMENT_MODE") == "true":
        return {
            "error": "Development mode - no real devices configured",
            "message": "Set up real Google Cloud credentials to access actual devices",
            "setup_instructions": [
                "1. Create Google Cloud Project",
                "2. Enable Smart Device Management API",
                "3. Set NEST_CLIENT_ID and NEST_CLIENT_SECRET",
                "4. Run initiate_oauth_flow tool",
            ],
            "development_mode": True,  # ✅ Clear this is an environment issue
        }
    # ✅ Always attempt real API for production
    return await make_real_nest_api_call(device_id)
```
### **MANDATORY Requirements for Any Non-Real Implementation**
**1. Tool Name Must Include "(mock)"**:
```python
# ✅ CORRECT
@app.tool(name="get_nest_camera_feed_(mock)")

# ❌ WRONG
@app.tool(name="get_nest_camera_feed")  # Looks real but isn't
```
**2. Documentation Must Be Crystal Clear**:
```python
async def placeholder_tool() -> Dict[str, Any]:
    """
    Tool description.

    ⚠️ MOCK IMPLEMENTATION: This is not connected to real devices.
    Real implementation planned for [specific date/milestone].
    Reason for mock: [specific technical reason]
    """
```
**3. Response Must Indicate Mock Status**:
```python
return {
    "error": "Not implemented yet",
    "status": "mock_placeholder",
    "implementation_status": "planned_for_q1_2026",
    "reason": "Google Smart Device Management API doesn't support this device type yet",
}
```
**4. Must Point to Real Alternatives**:
```python
return {
    "error": "Feature not available",
    "use_instead": ["list_devices", "get_device_status"],  # ✅ Point to working real tools
    "real_alternatives": "Use get_device_status for current device information",
}
```
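The four requirements above can be enforced in one place. A hedged sketch - `mock_placeholder_response` is a hypothetical helper, not part of the project - that builds the standardized response so a mock tool can never "forget" to announce itself:

```python
from typing import Any, Dict, List

def mock_placeholder_response(
    reason: str,
    planned_for: str,
    use_instead: List[str],
) -> Dict[str, Any]:
    """Build the standardized response every (mock) tool must return.

    Centralizing the required fields makes it hard to ship a mock that
    looks like a real tool.
    """
    return {
        "error": "Not implemented yet",
        "status": "mock_placeholder",
        "implementation_status": planned_for,
        "reason": reason,
        "use_instead": use_instead,  # always point callers at real tools
    }

response = mock_placeholder_response(
    reason="Thermostat API integration pending",
    planned_for="planned_for_q1_2026",
    use_instead=["list_devices", "get_device_status"],
)
print(response["status"])  # mock_placeholder
```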
## 🎯 RULE #3: AVOID TOOL INFLATION - KEEP TOOLS COHESIVE
### **The Tool Inflation Problem**
**AI Tendency**: Create many tiny, atomic tools that fragment natural workflows
**Example of Tool Inflation (BAD)**:
```python
# ❌ WRONG: 8 fragmented tools for what should be 2-3 cohesive tools

@app.tool()
async def connect_to_firefox_db() -> Dict[str, Any]:
    """Connect to Firefox SQLite database."""

@app.tool()
async def get_bookmark_table_schema() -> Dict[str, Any]:
    """Get bookmark table structure."""

@app.tool()
async def list_bookmark_folders() -> Dict[str, Any]:
    """List all bookmark folders."""

@app.tool()
async def search_bookmarks_by_title() -> Dict[str, Any]:
    """Search bookmarks by title only."""

@app.tool()
async def search_bookmarks_by_url() -> Dict[str, Any]:
    """Search bookmarks by URL only."""

@app.tool()
async def get_bookmark_by_id() -> Dict[str, Any]:
    """Get single bookmark by ID."""

@app.tool()
async def count_total_bookmarks() -> Dict[str, Any]:
    """Count all bookmarks."""

@app.tool()
async def close_firefox_db() -> Dict[str, Any]:
    """Close database connection."""
```
**✅ CORRECT: 3 cohesive tools that handle complete workflows**:
```python
@app.tool()
async def search_bookmarks(
    query: str = "",
    search_type: str = "all",  # title, url, tags, all
    folder: str = "",
    limit: int = 50,
) -> Dict[str, Any]:
    """
    Search Firefox bookmarks with comprehensive filtering.

    Handles: title search, URL search, tag search, folder filtering
    Returns: complete bookmark data with metadata
    """
    # Single tool handles all search scenarios
    # Includes connection management, query execution, result formatting

@app.tool()
async def manage_bookmarks(
    action: str,  # add, delete, update, move
    bookmark_data: Dict[str, Any],
) -> Dict[str, Any]:
    """
    Complete bookmark management operations.

    Handles: creating, deleting, updating, organizing bookmarks
    """
    # Single tool for all CRUD operations

@app.tool()
async def get_bookmark_overview() -> Dict[str, Any]:
    """
    Get comprehensive bookmark statistics and folder structure.

    Returns: total counts, folder hierarchy, recent additions, popular tags
    """
    # Single tool for dashboard-style overview
```
### **Principles for Cohesive Tool Design**
#### **1. Follow Natural User Workflows**
**Think**: "What does a user actually want to accomplish?"
**❌ Fragmented thinking**:
- "Step 1: Connect to database"
- "Step 2: Query bookmark table"
- "Step 3: Format results"
- "Step 4: Close connection"
**✅ Workflow thinking**:
- "Find bookmarks matching my search criteria"
- "Organize my bookmark collection"
- "Get overview of my bookmarks"
#### **2. Tools Should Be Task-Complete**
**Each tool should accomplish a complete user task**, not a technical step.
**❌ Technical steps (BAD)**:
```python
@app.tool()
async def authenticate_nest_api() -> Dict[str, Any]:
    """Just handle authentication."""

@app.tool()
async def fetch_device_list() -> Dict[str, Any]:
    """Just get raw device list."""

@app.tool()
async def parse_device_data() -> Dict[str, Any]:
    """Just parse the response."""
```
**✅ Complete tasks (GOOD)**:
```python
@app.tool()
async def list_devices() -> Dict[str, Any]:
    """
    Get all Nest Protect devices with complete information.

    Handles: authentication, API calls, data parsing, error handling
    Returns: user-ready device information
    """
    # Complete workflow in one tool
```
#### **3. Use Rich Parameters Instead of Multiple Tools**
**❌ Tool proliferation**:
```python
@app.tool()
async def get_device_battery_status() -> Dict[str, Any]: pass
@app.tool()
async def get_device_network_status() -> Dict[str, Any]: pass
@app.tool()
async def get_device_sensor_status() -> Dict[str, Any]: pass
@app.tool()
async def get_device_all_status() -> Dict[str, Any]: pass
```
**✅ Rich parameterization**:
```python
@app.tool()
async def get_device_status(
    device_id: str,
    details: List[str] = ["all"],  # battery, network, sensors, maintenance, all
) -> Dict[str, Any]:
    """
    Get comprehensive device status with configurable detail levels.

    Returns exactly what the user needs in one call.
    """
```
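One way the `details` parameter can fold several would-be tools into one - the section builders and their dummy return values below are purely illustrative, not the real Nest integration:

```python
from typing import Any, Callable, Dict, List

# Hypothetical per-section builders; a real server would query device APIs here.
SECTION_BUILDERS: Dict[str, Callable[[str], Dict[str, Any]]] = {
    "battery": lambda device_id: {"level": 85, "state": "ok"},
    "network": lambda device_id: {"wifi": "connected"},
    "sensors": lambda device_id: {"smoke": "clear", "co": "clear"},
}

def build_status(device_id: str, details: List[str]) -> Dict[str, Any]:
    """Assemble a status report from the requested sections.

    ["all"] expands to every known section, so one parameterized tool
    covers what would otherwise need four separate tools.
    """
    wanted = list(SECTION_BUILDERS) if "all" in details else details
    return {
        name: SECTION_BUILDERS[name](device_id)
        for name in wanted
        if name in SECTION_BUILDERS
    }

print(build_status("device-1", ["battery"]))      # just the battery section
print(sorted(build_status("device-1", ["all"])))  # ['battery', 'network', 'sensors']
```

Adding a new detail level is then a one-line registry change rather than a whole new tool.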
### **Guidelines for Right-Sized Tools**
#### **Size Guidelines**
**Too Small (Avoid)**:
- Single database operations
- Individual API calls without context
- Technical steps that aren't user goals
- Configuration fragments
**Just Right (Target)**:
- Complete user workflows
- Natural task boundaries
- Self-contained operations
- Meaningful atomic units
**Too Large (Also Avoid)**:
- Multiple unrelated operations
- Everything-in-one kitchen sink tools
- Mixed domain responsibilities
#### **The "Can I Describe This to My Grandmother?" Test**
**Good tool**: "Find my bookmarks about cooking"
**Bad tool**: "Execute SQL query on bookmarks table with JOIN on folders"
**Good tool**: "Check my smoke detector status"
**Bad tool**: "Parse JSON response from device status endpoint"
### **Real Examples from Our Project**
#### **✅ What We Did Right**
**Device Status Tools** (3 tools, not 10):
- `list_devices` - Complete device discovery with full metadata
- `get_device_status` - Comprehensive status for one device
- `get_device_events` - Historical events with filtering
**Device Control Tools** (5 tools, not 15):
- `hush_alarm` - Complete alarm silencing workflow
- `run_safety_check` - Full safety test execution
- `set_led_brightness` - LED control with validation
- `sound_alarm` - Comprehensive alarm testing (not separate tools for each alarm type)
- `arm_disarm_security` - Complete security system control
#### **❌ What We Could Have Done Wrong**
**If we had fallen into tool inflation**:
```python
# ❌ BAD: 15+ fragmented tools
@app.tool() async def connect_to_nest_api(): pass
@app.tool() async def validate_nest_token(): pass
@app.tool() async def refresh_nest_token(): pass
@app.tool() async def get_raw_device_list(): pass
@app.tool() async def parse_device_response(): pass
@app.tool() async def filter_protect_devices(): pass
@app.tool() async def format_device_display(): pass
@app.tool() async def check_device_online(): pass
@app.tool() async def get_device_battery(): pass
@app.tool() async def get_device_wifi(): pass
@app.tool() async def get_device_sensors(): pass
@app.tool() async def combine_device_status(): pass
# ... and many more tiny fragments
```
### **Common AI Tool Inflation Patterns**
#### **Pattern 1: Database Operation Fragmentation**
**❌ AI suggests**:
- `connect_to_db`
- `execute_query`
- `fetch_results`
- `close_connection`
**✅ Better approach**:
- `search_bookmarks` (handles complete workflow)
#### **Pattern 2: API Call Decomposition**
**❌ AI suggests**:
- `authenticate_api`
- `make_api_call`
- `parse_response`
- `handle_errors`
**✅ Better approach**:
- `get_device_status` (complete operation)
#### **Pattern 3: Data Processing Fragmentation**
**❌ AI suggests**:
- `get_raw_data`
- `validate_data`
- `transform_data`
- `format_output`
**✅ Better approach**:
- `get_processed_results` (user-ready output)
### **How to Push Back Against Tool Inflation**
#### **When AI Suggests Too Many Tools**
**❌ AI says**: "Let's create separate tools for each step of the process"
**✅ You say**: "Combine these into cohesive workflows that accomplish complete user tasks"
**❌ AI says**: "We need a tool to connect, a tool to query, a tool to format..."
**✅ You say**: "Create one tool that does the complete operation the user actually wants"
#### **Questions to Ask**
1. **"Would a user really want to do just this step?"**
2. **"Does this tool accomplish a complete task?"**
3. **"Can I explain this tool's purpose without mentioning technical implementation?"**
4. **"Would I need 3+ of these tools to accomplish what I actually want?"**
### **Tool Count Guidelines by Project Size**
#### **Small Project (Simple Domain)**
- **Target**: 5-10 tools total
- **Focus**: Core workflows only
- **Example**: Personal task manager
#### **Medium Project (Our FastMCP)**
- **Target**: 15-25 tools total
- **Focus**: Complete feature coverage with workflow grouping
- **Example**: Smart home device control
#### **Large Project (Enterprise)**
- **Target**: 30-50 tools total
- **Focus**: Multiple domains with clear boundaries
- **Example**: Complete business automation platform
#### **🚨 Warning Signs of Tool Inflation**
- More than 50 tools in a single MCP server
- Tools that only make sense when used together
- Tool names that include "step 1", "part A", etc.
- Tools that just wrap single API calls
- Users need to chain 5+ tools for basic tasks
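The warning signs above can even be linted mechanically. A rough sketch - the regex patterns and the 50-tool budget are illustrative heuristics, not a project standard:

```python
import re
from typing import Dict, List

# Name patterns that usually signal a technical step, not a user task.
FRAGMENT_PATTERNS = [
    r"step_?\d", r"part_[a-z]", r"^connect_", r"^close_",
    r"^parse_", r"^fetch_raw", r"^format_",
]

def inflation_report(tool_names: List[str], max_tools: int = 50) -> Dict[str, object]:
    """Flag likely tool-inflation symptoms in a proposed tool list."""
    fragments = [
        name for name in tool_names
        if any(re.search(pattern, name) for pattern in FRAGMENT_PATTERNS)
    ]
    return {
        "total": len(tool_names),
        "over_budget": len(tool_names) > max_tools,
        "fragment_suspects": fragments,
    }

report = inflation_report(["connect_to_db", "search_bookmarks", "parse_response"])
print(report["fragment_suspects"])  # ['connect_to_db', 'parse_response']
```

Running a check like this during review makes "tools that only make sense together" visible before they multiply.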
## 🛡️ RULE #4: PRESERVE REAL FUNCTIONALITY AT ALL COSTS
### **The Correct Approach**
**When Facing Build/Import Errors**:
#### **✅ DO: Fix the Root Cause**
```python
# Problem: ImportError in tool functions
# ❌ DON'T: Replace with mocks
# ✅ DO: Fix the import

# Before (broken)
@app.tool()
async def get_device_status():
    from missing_module import api_call  # ❌ Import error
    return await api_call()

# After (fixed)
import aiohttp  # ✅ Move import to module level
from .state_manager import get_app_state  # ✅ Fix import path

@app.tool()
async def get_device_status():
    state = get_app_state()
    async with aiohttp.ClientSession() as session:
        # ✅ Real implementation preserved and fixed
        return await make_real_api_call(session, state.access_token)
```
#### **✅ DO: Add Error Handling, Don't Remove Functionality**
```python
# Problem: API calls failing during development
# ❌ DON'T: Replace with mocks
# ✅ DO: Add graceful error handling

@app.tool()
async def get_device_status(device_id: str) -> Dict[str, Any]:
    try:
        # ✅ Keep the real implementation
        state = get_app_state()
        if not state.access_token:
            return {"error": "Not authenticated", "requires": "oauth_setup"}
        result = await real_nest_api_call(device_id, state.access_token)
        return {"success": True, "data": result}
    except ImportError as e:
        return {"error": f"Missing dependency: {e}", "requires": "pip install <missing_package>"}
    except AuthenticationError:
        return {"error": "Authentication expired", "requires": "token_refresh"}
    except Exception as e:
        logger.error(f"API call failed: {e}")
        return {"error": "API temporarily unavailable", "details": str(e)}
```
### **Error Handling vs. Functionality Removal**
| Situation | ❌ Wrong Approach | ✅ Right Approach |
|-----------|------------------|------------------|
| **Import Error** | Replace with mock | Fix import path, add to requirements |
| **API Failure** | Return fake data | Return error with helpful message |
| **Auth Issues** | Skip authentication | Return "authentication required" error |
| **Complex Logic** | Simplify to basic version | Add error handling, preserve logic |
| **Dependencies** | Remove dependency usage | Install dependency, add fallback |
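The "right approach" column shares one pattern: run the real code, convert failures into structured errors. A minimal sketch of that pattern as a decorator - `preserve_errors` is a hypothetical helper, not project code:

```python
import asyncio
import functools
from typing import Any, Callable, Dict

def preserve_errors(func: Callable[..., Any]) -> Callable[..., Any]:
    """Wrap a real async tool so failures become helpful errors, never mocks."""
    @functools.wraps(func)
    async def wrapper(*args: Any, **kwargs: Any) -> Dict[str, Any]:
        try:
            # ✅ The real implementation always runs first.
            return {"success": True, "data": await func(*args, **kwargs)}
        except ImportError as e:
            return {"error": f"Missing dependency: {e}", "fix": "install it, don't mock it"}
        except Exception as e:
            return {"error": "Operation failed", "details": str(e)}
    return wrapper

@preserve_errors
async def flaky_call() -> str:
    raise RuntimeError("network down")

result = asyncio.run(flaky_call())
print(result["error"])  # Operation failed
```

Because the wrapper never fabricates data, every code path either returns a real result or an honest error.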
---
## 🔧 RULE #5: ALWAYS PRESERVE PROJECT STRUCTURE
### **The Structure Destruction Pattern**
**AI Often Suggests**:
- ❌ "Let's put everything in one file to avoid import issues"
- ❌ "Delete the tools folder and inline everything"
- ❌ "Simplify the architecture by removing modules"
**Why This is Catastrophic**:
- 🔥 **Destroys organization** that took months to develop
- 🔥 **Eliminates modularity** and maintainability
- 🔥 **Breaks testing** and development workflows
- 🔥 **Makes rollback impossible** without Git recovery
### **✅ Correct Approach: Fix Imports, Preserve Structure**
```python
# ❌ WRONG: "Let's inline all tools to avoid import issues"
# This destroys the entire tools/ folder structure

# ✅ RIGHT: Fix the import system
# File: src/nest_protect_mcp/fastmcp_server.py
from .tools.device_status import list_devices as device_list_func
from .tools.device_control import hush_alarm as device_hush_func
from .tools.auth_tools import initiate_oauth_flow as oauth_start_func
# ... preserve all real implementations

@app.tool()
async def list_devices() -> Dict[str, Any]:
    return await device_list_func()  # ✅ Real function preserved

@app.tool()
async def hush_alarm(device_id: str) -> Dict[str, Any]:
    return await device_hush_func(device_id)  # ✅ Real function preserved
```
---
## 🚨 RULE #6: NEVER TRUST "TEMPORARY" MOCKS
### **The Mock Creep Problem**
**How It Starts**:
```python
# "Let's add a temporary mock to test the server startup"
@app.tool()
async def test_tool() -> Dict[str, Any]:
    return {"status": "mock"}  # ❌ "Temporary" mock
```
**How It Spreads**:
```python
# "Let's add a few more mocks to test tool discovery"
@app.tool()
async def mock_device_list() -> Dict[str, Any]:
    return {"devices": ["mock1", "mock2"]}  # ❌ Mock spreading

@app.tool()
async def mock_device_status() -> Dict[str, Any]:
    return {"battery": 100, "status": "ok"}  # ❌ More mocks
```
**The Final Disaster**:
```python
# "Let's replace all the broken tools with working mocks"
# Result: 20+ mock tools, zero real functionality
# Months of API integration work destroyed
```
### **✅ The Correct Testing Approach**
```python
# ✅ CORRECT: Real tools with development-friendly error handling
@app.tool()
async def list_devices() -> Dict[str, Any]:
    """Real implementation with helpful development errors."""
    try:
        state = get_app_state()
        if not state.access_token:
            return {
                "error": "Development setup required",
                "instructions": [
                    "1. Run 'initiate_oauth_flow' tool",
                    "2. Complete OAuth setup",
                    "3. Retry this tool",
                ],
                "development_mode": True,
            }
        # ✅ Real API call preserved
        return await make_real_nest_api_call(state)
    except Exception as e:
        return {
            "error": f"Real API call failed: {e}",
            "development_note": "This is a real implementation, not a mock",
            "next_steps": ["Check authentication", "Verify API credentials"],
        }
```
---
## 🛡️ RULE #7: PLATFORM-SPECIFIC SYNTAX ONLY
### **The Linux Syntax Disaster**
**Platform Context**: This repository runs on Windows with PowerShell, not Linux/bash.
**❌ ABSOLUTELY FORBIDDEN**: Using Linux/bash syntax in PowerShell scripts or chat windows
**Common Linux Syntax Mistakes**:
- ❌ `&&` command chaining (not valid in Windows PowerShell 5.1; only PowerShell 7+ accepts it)
- ❌ Pipes that assume text streams (PowerShell's `|` passes objects, not raw text)
- ❌ `$(command)` with bash command-substitution semantics
- ❌ `#!/bin/bash` shebangs
- ❌ Linux-style path separators `/` in Windows contexts
**✅ CORRECT PowerShell Syntax**:
```powershell
# ❌ WRONG - Linux syntax in PowerShell
python -m pytest && python -m black src

# ✅ RIGHT - PowerShell syntax
python -m pytest
python -m black src

# ❌ WRONG - Linux command chaining
pip install -e . && python dev.py test

# ✅ RIGHT - Separate PowerShell commands
pip install -e .
python dev.py test
```
**Why This Matters**:
- 🔥 **Breaks PowerShell execution** - `&&` is not valid Windows PowerShell 5.1 syntax
- 🔥 **Causes command failures** - Users can't copy/paste your commands
- 🔥 **Platform confusion** - This is a Windows project, not Linux
- 🔥 **Poor user experience** - Commands don't work as written
**Examples of Linux Syntax Contamination**:
```powershell
# ❌ WRONG - These will FAIL in PowerShell
python -m pytest && python -m black src
pip install -e . && python dev.py test
echo "test" && python -c "print('hello')"

# ✅ RIGHT - These work in PowerShell
python -m pytest
python -m black src
pip install -e .
python dev.py test
echo "test"
python -c "print('hello')"
```
**Chat Window Rules**:
- 📌 **NEVER use `&&`** in chat responses
- 📌 **Use separate lines** for separate commands
- 📌 **Test commands** before suggesting them
- 📌 **Specify PowerShell** when relevant
**Command Formatting Guidelines**:
```powershell
# ✅ GOOD: Multi-line format
python -m pytest src/
python -m black src/

# ✅ GOOD: Clear separation
# First run the tests:
python -m pytest
# Then format the code:
python -m black src/

# ❌ BAD: Linux-style chaining
python -m pytest && python -m black src/
```
**Repository Rulebook Addition**:
- All development commands must use PowerShell-compatible syntax
- No Linux/bash syntax allowed in any documentation or code
- Chat responses must use proper PowerShell command formatting
- Test all suggested commands on Windows before sharing
---
## 🛡️ RULE #8: GITHUB IS YOUR SAFETY NET
**Before Major Changes**:
```bash
# Always commit working state before AI suggestions
git add .
git commit -m "Working state before AI refactoring - has real functionality"
git branch backup-before-ai-changes
```
**After Disaster Detection**:
```bash
# Immediate recovery from AI disasters
git status # See what was destroyed
git checkout -- . # Recover working files
git checkout backup-before-ai-changes # Nuclear recovery option
```
**Daily Backup Strategy**:
```bash
# End of each development session
git add .
git commit -m "End of day - real functionality preserved"
git push origin main
```
### **Disaster Detection Checklist**
**🚨 IMMEDIATE RED FLAGS**:
- [ ] AI suggests "replacing with mocks for now"
- [ ] AI wants to "simplify the architecture"
- [ ] AI proposes "inlining everything to one file"
- [ ] AI suggests "removing complex dependencies"
- [ ] You see `return {"mock": "data"}` in tool functions
**✅ RECOVERY ACTION**:
1. **STOP immediately**
2. **Check git status**
3. **Restore from backup if ANY real functionality was removed**
4. **Fix the root cause instead of accepting simplification**
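The `return {"mock": "data"}` red flag can be caught automatically before a commit. A rough sketch of a scanner - the patterns are illustrative heuristics, and `scan_for_mocks` is a hypothetical helper, not part of the repository:

```python
import re
from pathlib import Path
from typing import List

# Patterns that usually mean real functionality was replaced with a mock.
DANGER_PATTERNS = [
    r'"mock"\s*:',
    r"mock_placeholder",
    r'return\s*\{\s*"status"\s*:\s*"mock"',
]

def scan_for_mocks(root: str) -> List[str]:
    """Return 'file:line' hits for suspicious mock markers under root."""
    hits: List[str] = []
    for path in Path(root).rglob("*.py"):
        lines = path.read_text(encoding="utf-8").splitlines()
        for lineno, line in enumerate(lines, 1):
            if any(re.search(pattern, line) for pattern in DANGER_PATTERNS):
                hits.append(f"{path}:{lineno}")
    return hits
```

Wired into a pre-commit hook (failing when the list is non-empty), a check like this would have caught the mock-replacement disaster before it reached the repository.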
---
## 🎯 RULE #9: REAL FUNCTIONALITY ALWAYS WINS
### **The Value Hierarchy**
**Priority Order (Never Compromise)**:
1. ✅ **Real API integration** > Mock responses
2. ✅ **Actual device control** > Placeholder functions
3. ✅ **Production error handling** > "TODO: implement"
4. ✅ **Comprehensive tools** > Simplified versions
5. ✅ **Modular architecture** > Monolithic files
### **Development Mantras**
**When AI Suggests Simplification**:
- 🚫 **"NO MOCKS"** - Fix the problem, don't fake the solution
- 🚫 **"REAL API ONLY"** - Connect to actual services/devices, not test data
- 🚫 **"NO DELETIONS"** - Add functionality, never remove it
- 🚫 **"NO INLINING"** - Preserve project structure
- 🚫 **"NO PLACEHOLDERS"** - Implement real solutions or clearly mark as "(mock)"
- 🚫 **"NO TOOL INFLATION"** - Create cohesive workflows, not atomic fragments
**When Facing Complex Problems**:
- ✅ **"FIX THE ROOT CAUSE"** - Address import/dependency issues directly
- ✅ **"PRESERVE AND ENHANCE"** - Add error handling, keep functionality
- ✅ **"REAL IMPLEMENTATION ONLY"** - No shortcuts or temporary measures
---
## 📚 Lessons from Our FastMCP Success
### **What Made Our Project Successful**
**We NEVER compromised on**:
- ✅ **Real Nest API integration** (zero mocks in final system)
- ✅ **Actual Google Smart Device Management API calls** (not fake data)
- ✅ **Real OAuth 2.0 authentication flow** (not hardcoded tokens)
- ✅ **Genuine device control** (actual alarm silencing, LED control, etc.)
- ✅ **Comprehensive tool set** (24 real tools, not placeholders)
- ✅ **Production error handling** (graceful degradation with real error messages)
- ✅ **Modular architecture** (tools/ folder preserved)
**When we hit problems, we**:
- ✅ **Fixed import errors** instead of removing imports
- ✅ **Added state management** instead of removing state
- ✅ **Enhanced error handling** instead of removing complexity
- ✅ **Preserved all 24 tools** throughout the debugging process
### **The Result**
**From our 3-day debugging journey**:
- 🎉 **24 working tools** with real API integration
- 🎉 **Production-ready code** with comprehensive error handling
- 🎉 **Maintainable architecture** with proper module organization
- 🎉 **Zero mocks** in the final system
**This was only possible because we NEVER accepted functionality removal as a "solution".**
---
## π― Quick Reference: AI Suggestion Evaluation
### **✅ ACCEPT These Suggestions**
- "Let's fix this import error by updating the path"
- "Add error handling for this API call failure"
- "Install the missing dependency"
- "Update the authentication flow to handle expiration"
- "Add logging to help debug the real issue"
- "Make a real HTTP request to the actual API"
- "Use the official SDK for this service"
### **🚨 REJECT These Suggestions Immediately**
- "Let's replace this with a mock for now"
- "Use fake data to test the tool"
- "Return hardcoded values until the API works"
- "Create separate tools for each step"
- "Make individual tools for each database operation"
- "Split this into smaller, more focused tools"
- "Simplify by removing the complex logic"
- "Put everything in one file to avoid imports"
- "Use placeholder data until we fix the API"
- "Delete the tools folder and inline everything"
### **🚫 DANGER PHRASES**
- "temporary mock"
- "fake data for testing"
- "hardcoded response"
- "separate tools for each step"
- "break this down into smaller tools"
- "one tool per operation"
- "placeholder until"
- "mock the response"
- "simplify the architecture"
- "remove for now"
- "inline to fix imports"
**Remember**: AI tools are optimized for "getting something running" rather than "preserving valuable functionality." Your job is to maintain the quality and completeness of the real implementation while leveraging AI to fix specific technical issues.
**NEVER let AI solutions destroy months of real development work!** 🔥🛡️