---
description: Use this rule when asked to test changes and functionality of the product
globs:
alwaysApply: false
---
# Testing Guidelines for MCP Task Manager
**AGENT TRIGGER**: Use this rule when encountering ANY of these scenarios:
- MCP tools not working in Cursor (connection, authentication, or tool failures)
- Railway deployment issues (server not responding, build failures, timeouts)
- Directus authentication problems (token validation, user lookup, permissions)
- API endpoint failures (create_task, list_tasks, or other tool errors)
- Performance issues (slow responses, timeouts, server errors)
- User reports functionality not working
- Need to debug integration problems between components
- Creating new functionality that needs validation
- Deployment verification after code changes
When functionality isn't working, follow this systematic testing approach to diagnose, document, and store reusable tests.
## CRITICAL FILE ORGANIZATION RULE
**🚨 MANDATORY: ALL TEST FILES MUST BE CREATED IN THE ./tests DIRECTORY**
**❌ NEVER CREATE TEST FILES IN THE PROJECT ROOT**
- Don't create `test-*.js` or `*-test.js` files in the project root
- Don't create testing scripts outside the ./tests directory
**✅ ALWAYS USE THE PROPER STRUCTURE:**
```
tests/
├── auth/test-[feature].js
├── api/test-[endpoint].js
├── tools/test-[tool-name].js
├── integration/test-[scenario].js
└── deployment/test-[service].js
```
## Testing Philosophy
**Test-Driven Debugging**: When something breaks, create tests first to isolate the problem, then fix the issue while maintaining the test for future regression prevention.
## When to Create Tests
### Immediate Testing Scenarios
- **MCP Server Connection Issues**: API endpoints not responding
- **Directus Authentication Problems**: Token validation failures
- **Tool Functionality Breaks**: Individual MCP tools not working
- **Railway Deployment Issues**: Server not starting or responding
- **User Authentication Flows**: Multi-user token management problems
- **Data Persistence Issues**: Tasks not saving or retrieving correctly
### Testing Triggers
1. **Bug Reports**: Any functionality not working as expected
2. **Deployment Failures**: Railway deployment not completing successfully
3. **Integration Issues**: Cursor MCP integration problems
4. **Performance Problems**: Slow response times or timeouts
5. **New Feature Development**: Validate new functionality works correctly
## Testing Methodology
### 1. Isolate the Problem
- **Single Component Testing**: Test one piece at a time (auth, API call, tool function)
- **Minimal Reproduction**: Create the smallest possible test case that demonstrates the issue
- **Environment Verification**: Test in both development and production environments
### 2. Test Categories
#### API Connectivity Tests
```javascript
// Test basic server connectivity
// Test MCP-RPC protocol compliance
// Test Railway deployment health
```
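A minimal connectivity check along these lines might look as follows. The `/health` path and the `MCP_SERVER_URL` environment variable are assumptions, so adjust them to match the actual deployment (requires Node 18+ for global `fetch`):

```javascript
// tests/deployment/test-railway-health.js (sketch -- endpoint path is an assumption)
const BASE_URL = process.env.MCP_SERVER_URL || "http://localhost:3000";

// Build the health-check URL separately so the logic is testable offline.
function healthUrl(base) {
  return new URL("/health", base).toString();
}

// Perform the live check against the server.
async function checkHealth() {
  const res = await fetch(healthUrl(BASE_URL));
  if (!res.ok) throw new Error(`Health check failed: HTTP ${res.status}`);
  console.log("Server responding:", res.status);
}

// Only hit the network when explicitly requested.
if (process.env.RUN_LIVE === "1") {
  checkHealth().catch((err) => {
    console.error(err.message);
    process.exit(1);
  });
}

module.exports = { healthUrl };
```

Run with `RUN_LIVE=1 MCP_SERVER_URL=https://... node tests/deployment/test-railway-health.js`.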
#### Authentication Tests
```javascript
// Test Directus token validation
// Test different auth methods (Bearer, X-Directus-Token, query param)
// Test user lookup and permissions
```
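The three auth methods above can be exercised with small request builders like these (the `X-Directus-Token` header and `access_token` query parameter follow common Directus conventions; verify them against your server version):

```javascript
// tests/auth/test-auth-methods.js (sketch -- request builders for each auth method)

// Method 1: standard Bearer token in the Authorization header.
function bearerRequest(token) {
  return { headers: { Authorization: `Bearer ${token}` } };
}

// Method 2: Directus-specific token header.
function directusHeaderRequest(token) {
  return { headers: { "X-Directus-Token": token } };
}

// Method 3: token passed as a query parameter.
function queryParamUrl(baseUrl, token) {
  const url = new URL(baseUrl);
  url.searchParams.set("access_token", token);
  return url.toString();
}

module.exports = { bearerRequest, directusHeaderRequest, queryParamUrl };
```

A test would send the same request with each builder and confirm all three succeed for a valid token and fail cleanly for an invalid one.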
#### Tool Functionality Tests
```javascript
// Test each MCP tool individually (create_task, list_tasks, etc.)
// Test with valid and invalid parameters
// Test error handling and edge cases
```
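Individual tools are typically invoked through the server's JSON-RPC endpoint. This sketch assumes a `tools/call` method and an `/mcp` path, which may differ in this project:

```javascript
// tests/tools/test-create-task-tool.js (sketch -- endpoint path and method name are assumptions)
const BASE_URL = process.env.MCP_SERVER_URL || "http://localhost:3000";
const TOKEN = process.env.DIRECTUS_TOKEN || "";

let nextId = 1;

// Assemble a JSON-RPC 2.0 tool-call payload.
function buildToolCall(name, args) {
  return {
    jsonrpc: "2.0",
    id: nextId++,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// POST the payload to the MCP endpoint with Bearer auth.
async function callTool(name, args) {
  const res = await fetch(`${BASE_URL}/mcp`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${TOKEN}`,
    },
    body: JSON.stringify(buildToolCall(name, args)),
  });
  return res.json();
}

// Only hit the network when explicitly requested.
if (process.env.RUN_LIVE === "1") {
  callTool("create_task", { title: "Smoke test task" })
    .then((r) => console.log(JSON.stringify(r, null, 2)))
    .catch((e) => { console.error(e.message); process.exit(1); });
}

module.exports = { buildToolCall };
```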
#### Integration Tests
```javascript
// Test Cursor MCP integration end-to-end
// Test multi-user scenarios
// Test working directory context detection
```
### 3. Test Structure Template
Every test file should follow this structure:
```javascript
/**
* Test: [Brief description of what this tests]
* Purpose: [Why this test exists and what problem it solves]
* Scope: [What components/functionality this covers]
* Environment: [Local/Production/Both]
* Dependencies: [Required services, tokens, or setup]
*
* Usage: node tests/[category]/test-[name].js
* Expected: [What successful test output should look like]
*
* Last Updated: [Date]
* Related Issues: [Link to GitHub issues if applicable]
*/
// Test implementation here...
```
## STRICT File Organization in ./tests Directory
### MANDATORY Directory Structure
```
tests/
├── README.md                      # Testing documentation and index
├── auth/                          # Authentication related tests
│   ├── test-auth-bearer.js        # Bearer token validation
│   ├── test-auth-directus.js      # Directus auth methods
│   ├── test-token-validation.js   # Token validation & permissions
│   └── test-user-lookup.js        # User information retrieval
├── api/                           # API endpoint tests
│   ├── test-create-task.js        # Task creation API
│   ├── test-list-tasks.js         # Task listing API
│   ├── test-search-tasks.js       # Task search API
│   ├── test-update-task.js        # Task update API
│   ├── test-get-task.js           # Task retrieval API
│   └── test-server-health.js      # Server health checks
├── integration/                   # End-to-end integration tests
│   ├── test-cursor-mcp.js         # Cursor MCP integration
│   ├── test-multi-user.js         # Multi-user scenarios
│   └── test-workflow.js           # Complete task workflows
├── deployment/                    # Railway deployment tests
│   ├── test-railway-health.js     # Railway server status
│   ├── test-build-process.js      # Build verification
│   └── test-environment.js        # Environment variables
└── tools/                         # Individual MCP tool tests
    ├── test-create-task-tool.js   # create_task MCP tool
    ├── test-list-tasks-tool.js    # list_tasks MCP tool
    ├── test-search-tasks-tool.js  # search_tasks MCP tool
    ├── test-update-task-tool.js   # update_task MCP tool
    ├── test-get-task-tool.js      # get_task MCP tool
    └── test-help-tool.js          # help MCP tool
```
### File Naming Convention (MANDATORY)
- **Format**: `test-[component]-[specific-feature].js`
- **Location**: `tests/[category]/test-[name].js`
- **Examples**:
- `tests/auth/test-bearer-token.js`
- `tests/api/test-create-task.js`
- `tests/tools/test-create-task-tool.js`
- `tests/integration/test-cursor-workflow.js`
### File Creation Rules
1. **Always Check Category First**: Determine which category the test belongs to
2. **Create Directory if Missing**: Ensure the category directory exists
3. **Use Descriptive Names**: Make the test purpose clear from filename
4. **Follow Naming Convention**: Stick to `test-[component]-[feature].js` format
5. **Update README**: Add new tests to the index immediately
## Documentation Requirements
### Test File Headers (MANDATORY)
Every test file MUST include the header comment defined in the Test Structure Template above (Test, Purpose, Scope, Environment, Dependencies, Usage, Expected, Last Updated, Related Issues).
### README.md in Tests Directory (MANDATORY)
Must contain:
1. **Test Index**: Complete list of all test files with descriptions
2. **Quick Start**: How to run tests by category
3. **Setup Instructions**: Required environment variables, tokens, etc.
4. **Category Descriptions**: What each directory contains
5. **Troubleshooting**: Common test failures and solutions
6. **Test Results Archive**: Links to documented test runs
## Test Execution Workflow
### Step 1: Immediate Testing (When Things Break)
```bash
# Run by category
node tests/auth/test-token-validation.js
node tests/api/test-create-task.js
node tests/tools/test-create-task-tool.js
# Run all tests in a category
for file in tests/auth/*.js; do node "$file"; done
# Document results
echo "$(date): Test results for issue #123" >> tests/test-runs.log
```
### Step 2: Create New Tests (MANDATORY PROCESS)
1. **Identify Category**: auth, api, tools, integration, or deployment
2. **Check Existing Tests**: Avoid duplication
3. **Create in Proper Directory**: `tests/[category]/test-[name].js`
4. **Add Comprehensive Header**: Follow template exactly
5. **Test Both Success and Failure**: Validate error handling
6. **Update tests/README.md**: Add to test index immediately
### Step 3: Document Test Results
- **Success**: Note which tests passed and system state
- **Failure**: Capture error messages, response codes, logs
- **Environment**: Record Node.js version, Railway deployment ID, etc.
- **Resolution**: Document what fixed the issue
- **Update Log**: Add entry to tests/test-runs.log
## Test Storage and Reusability
### Test Artifact Preservation
- **Test Scripts**: Store in appropriate category subdirectory ONLY
- **Test Data**: Include sample payloads and expected responses
- **Environment Configs**: Document required environment variables
- **Results Logs**: Keep in tests/test-runs.log
### Reusability Guidelines
- **Parameterized Tests**: Use environment variables for tokens/URLs
- **Modular Components**: Create reusable helper functions in tests/helpers/
- **Clear Dependencies**: Document all prerequisites in header
- **Version Compatibility**: Note which versions were tested
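The parameterization and helper guidelines above could be sketched as a small shared module (the filename and environment variable names are illustrative):

```javascript
// tests/helpers/env.js (sketch) -- centralizes configuration so each test
// reads tokens and URLs from the environment instead of hardcoding them.

// Validate required env vars up front and fail with a clear message.
function getConfig(env = process.env) {
  const missing = [];
  const baseUrl = env.MCP_SERVER_URL;
  const token = env.DIRECTUS_TOKEN;
  if (!baseUrl) missing.push("MCP_SERVER_URL");
  if (!token) missing.push("DIRECTUS_TOKEN");
  if (missing.length) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
  return { baseUrl, token };
}

// Standard auth headers shared across tests (Bearer method shown).
function authHeaders(token) {
  return {
    "Content-Type": "application/json",
    Authorization: `Bearer ${token}`,
  };
}

module.exports = { getConfig, authHeaders };
```

Tests would then start with `const { getConfig, authHeaders } = require("../helpers/env");` instead of embedding credentials.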
## Test Automation Integration
### Git Hooks Integration
```bash
# Run critical tests before deployment
npm run test:critical
# Run specific categories
npm run test:auth
npm run test:api
npm run test:tools
# Store test results in commit message
git commit -m "Fix: Auth issue - All tests passing (tests/auth/*, tests/api/*)"
```
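The `npm run test:*` commands above assume matching entries in `package.json`; a sketch of what those might look like (script names and the POSIX-shell loops are assumptions):

```json
{
  "scripts": {
    "test:auth": "for f in tests/auth/*.js; do node \"$f\" || exit 1; done",
    "test:api": "for f in tests/api/*.js; do node \"$f\" || exit 1; done",
    "test:tools": "for f in tests/tools/*.js; do node \"$f\" || exit 1; done",
    "test:critical": "npm run test:auth && npm run test:api && npm run test:tools"
  }
}
```

Note the loops rely on a POSIX shell; on Windows, a runner such as a small Node script would be more portable.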
### Railway Deployment Testing
- **Pre-Deploy**: Run `tests/auth/` and `tests/api/` tests
- **Post-Deploy**: Run `tests/deployment/` tests
- **Health Monitoring**: Continuous `tests/deployment/test-railway-health.js`
## Emergency Testing Procedures
### When Production is Down (EXECUTE IN ORDER)
1. **Connectivity**: `node tests/deployment/test-railway-health.js`
2. **Authentication**: `node tests/auth/test-token-validation.js`
3. **Core API**: `node tests/api/test-create-task.js`
4. **MCP Tools**: `node tests/tools/test-create-task-tool.js`
5. **Document Results**: Add to tests/test-runs.log with timestamp
6. **Create Recovery Test**: If new issue found, create test in appropriate category
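The ordered checks above can be wrapped in a small triage script that stops at the first failure (a sketch; `DRY_RUN=1`, the default here, only prints the plan):

```shell
#!/usr/bin/env bash
# Emergency triage runner (sketch): runs the checks in order, logs the first failure.
set -u
DRY_RUN="${DRY_RUN:-1}"
CHECKS=(
  "tests/deployment/test-railway-health.js"
  "tests/auth/test-token-validation.js"
  "tests/api/test-create-task.js"
  "tests/tools/test-create-task-tool.js"
)
for check in "${CHECKS[@]}"; do
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: node $check"
  else
    node "$check" || { echo "$(date -u +%FT%TZ) FAILED: $check" >> tests/test-runs.log; exit 1; }
  fi
done
```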
### When Tests Fail
1. **Isolate Category**: Run individual category directories
2. **Check Environment**: Verify tokens, URLs, network connectivity
3. **Update Tests**: Only if legitimate API changes require updates
4. **Document Changes**: Update tests/README.md with change rationale
## Test Results Documentation
### Test Run Log Format (tests/test-runs.log)
```
Date: 2024-01-15T10:30:00Z
Environment: Production Railway
Issue: MCP task creation failing
Category: auth, api, tools
Tests Run:
- tests/auth/test-token-validation.js: ✅ PASS
- tests/auth/test-bearer-token.js: ✅ PASS
- tests/api/test-create-task.js: ❌ FAIL (500 error)
- tests/tools/test-create-task-tool.js: ❌ FAIL (timeout)
Root Cause: Missing creator field in task creation
Resolution: Added creator field validation
Re-test Results: ✅ ALL PASS
```
### Success Metrics
- **All Critical Tests Pass**: auth/, api/, tools/ categories
- **Response Times**: Under 5 seconds for tool calls
- **Error Handling**: Graceful failure with helpful messages
- **Multi-User**: Different tokens work across tests
## QUALITY ASSURANCE CHECKLIST
Before creating any test, verify:
- [ ] Test file is in appropriate tests/[category]/ directory
- [ ] Filename follows test-[component]-[feature].js convention
- [ ] Header comment is complete and accurate
- [ ] Test purpose is clearly documented
- [ ] Usage instruction shows correct path: tests/[category]/test-[name].js
- [ ] Test is added to tests/README.md index
- [ ] Test includes both success and failure scenarios
- [ ] Test cleans up any resources it creates
- [ ] Test can be run independently
- [ ] Test results are clearly interpretable
This testing framework ensures systematic diagnosis, proper organization, and comprehensive documentation while preventing test file clutter in the project root.