# AI Validation Integration Tests
Comprehensive integration tests for AI workflow validation introduced in v2.17.0.
## Overview
These tests validate ALL AI validation operations against a REAL n8n instance. They verify:
- AI Agent validation rules
- Chat Trigger validation constraints
- Basic LLM Chain validation requirements
- AI Tool sub-node validation (HTTP Request, Code, Vector Store, Workflow, Calculator)
- End-to-end workflow validation
- Multi-error detection
- Node type normalization (bug fix validation)
## Test Files
### 1. `helpers.ts`
Utility functions for creating AI workflow components:
- `createAIAgentNode()` - AI Agent with configurable options
- `createChatTriggerNode()` - Chat Trigger with streaming modes
- `createBasicLLMChainNode()` - Basic LLM Chain
- `createLanguageModelNode()` - OpenAI/Anthropic models
- `createHTTPRequestToolNode()` - HTTP Request Tool
- `createCodeToolNode()` - Code Tool
- `createVectorStoreToolNode()` - Vector Store Tool
- `createWorkflowToolNode()` - Workflow Tool
- `createCalculatorToolNode()` - Calculator Tool
- `createMemoryNode()` - Buffer Window Memory
- `createRespondNode()` - Respond to Webhook
- `createAIConnection()` - AI connection helper (reversed for langchain)
- `createMainConnection()` - Standard n8n connection
- `mergeConnections()` - Merge multiple connection objects
- `createAIWorkflow()` - Complete workflow builder
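A minimal sketch of how these helpers might compose into a test workflow; the option shapes and exact signatures shown here are assumptions, so check `helpers.ts` for the real ones:
```typescript
import {
  createAIAgentNode,
  createChatTriggerNode,
  createLanguageModelNode,
  createAIConnection,
  createMainConnection,
  mergeConnections,
  createAIWorkflow,
} from './helpers';

// Nodes (option objects are hypothetical)
const trigger = createChatTriggerNode({ name: 'Chat Trigger' });
const agent = createAIAgentNode({ name: 'AI Agent' });
const model = createLanguageModelNode('openai', { name: 'OpenAI Model' });

// Connections: the main flow goes trigger -> agent, while the AI connection is
// reversed (the language model is the source, the agent the target)
const connections = mergeConnections(
  createMainConnection('Chat Trigger', 'AI Agent'),
  createAIConnection('OpenAI Model', 'AI Agent', 'ai_languageModel')
);

const workflow = createAIWorkflow([trigger, agent, model], connections, {
  name: `[MCP-TEST] Helper Composition ${Date.now()}`,
});
```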
### 2. `ai-agent-validation.test.ts` (7 tests)
Tests AI Agent validation:
- ✅ Detects missing language model (MISSING_LANGUAGE_MODEL error)
- ✅ Validates AI Agent with language model connected
- ✅ Detects tool connections correctly (no false warnings)
- ✅ Validates streaming mode constraints (Chat Trigger)
- ✅ Validates the AI Agent's own streamResponse setting
- ✅ Detects multiple memory connections (error)
- ✅ Validates complete AI workflow (all components)
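As a rough illustration of how these checks are exercised, a hedged Vitest sketch of the first case (handler import path, argument, and result shapes are assumptions, not the actual test code):
```typescript
import { describe, it, expect } from 'vitest';
import { createAIAgentNode, createAIWorkflow } from './helpers';
// Import path and call shape are assumptions for illustration only.
import { handleValidateWorkflow } from '../../../src/mcp/handlers';

describe('AI Agent validation', () => {
  it('detects a missing language model', async () => {
    // An AI Agent with no ai_languageModel connection at all
    const workflow = createAIWorkflow(
      [createAIAgentNode({ name: 'AI Agent' })],
      {}, // no connections
      { name: `[MCP-TEST] Missing LM ${Date.now()}` }
    );

    const result = await handleValidateWorkflow({ workflow });

    expect(result.valid).toBe(false);
    expect(result.errors.map((e: { code: string }) => e.code)).toContain(
      'MISSING_LANGUAGE_MODEL'
    );
  });
});
```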
### 3. `chat-trigger-validation.test.ts` (5 tests)
Tests Chat Trigger validation:
- ✅ Detects streaming to non-AI-Agent (STREAMING_WRONG_TARGET error)
- ✅ Detects missing connections (MISSING_CONNECTIONS error)
- ✅ Validates valid streaming setup
- ✅ Validates lastNode mode with AI Agent
- ✅ Detects streaming agent with output connection
### 4. `llm-chain-validation.test.ts` (6 tests)
Tests Basic LLM Chain validation:
- ✅ Detects missing language model (MISSING_LANGUAGE_MODEL error)
- ✅ Detects missing prompt text (MISSING_PROMPT_TEXT error)
- ✅ Validates complete LLM Chain
- ✅ Validates LLM Chain with memory
- ✅ Detects multiple language models (error - no fallback support)
- ✅ Detects tools connection (TOOLS_NOT_SUPPORTED error)
### 5. `ai-tool-validation.test.ts` (9 tests)
Tests AI Tool validation:
**HTTP Request Tool:**
- ✅ Detects missing toolDescription (MISSING_TOOL_DESCRIPTION)
- ✅ Detects missing URL (MISSING_URL)
- ✅ Validates valid HTTP Request Tool
**Code Tool:**
- ✅ Detects missing code (MISSING_CODE)
- ✅ Validates valid Code Tool
**Vector Store Tool:**
- ✅ Detects missing toolDescription
- ✅ Validates valid Vector Store Tool
**Workflow Tool:**
- ✅ Detects missing workflowId (MISSING_WORKFLOW_ID)
- ✅ Validates valid Workflow Tool
**Calculator Tool:**
- ✅ Validates Calculator Tool (no configuration needed)
### 6. `e2e-validation.test.ts` (5 tests)
End-to-end validation tests:
- ✅ Validates and creates complex AI workflow (7 nodes, all components)
- ✅ Detects multiple validation errors (5+ errors in one workflow)
- ✅ Validates streaming workflow without main output
- ✅ Validates non-streaming workflow with main output
- ✅ Tests node type normalization (v2.17.0 bug fix validation)
## Running Tests
### Run All AI Validation Tests
```bash
npm test -- tests/integration/ai-validation --run
```
### Run Specific Test Suite
```bash
npm test -- tests/integration/ai-validation/ai-agent-validation.test.ts --run
npm test -- tests/integration/ai-validation/chat-trigger-validation.test.ts --run
npm test -- tests/integration/ai-validation/llm-chain-validation.test.ts --run
npm test -- tests/integration/ai-validation/ai-tool-validation.test.ts --run
npm test -- tests/integration/ai-validation/e2e-validation.test.ts --run
```
### Prerequisites
1. **n8n Instance**: Real n8n instance required (not mocked)
2. **Environment Variables**:
```env
N8N_API_URL=http://localhost:5678
N8N_API_KEY=your-api-key
TEST_CLEANUP=true # Auto-cleanup test workflows (default: true)
```
3. **Build**: Run `npm run build` before testing
## Test Infrastructure
### Cleanup
- All tests use `TestContext` for automatic workflow cleanup
- Workflows are tagged with `mcp-integration-test` and `ai-validation`
- Cleanup runs in `afterEach` hooks
- Orphaned workflow cleanup runs in `afterAll` (non-CI only)
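A sketch of the cleanup pattern, assuming a `TestContext` API along these lines (only its role, tracking and deleting test workflows, comes from this README; names and import path are assumptions):
```typescript
import { beforeEach, afterEach, afterAll } from 'vitest';
// Import path, factory, and helper names are assumptions.
import { TestContext, createTestContext, cleanupOrphanedWorkflows } from '../helpers/test-context';

let context: TestContext;

beforeEach(() => {
  context = createTestContext();
});

afterEach(async () => {
  // Deletes every workflow this test registered, so nothing is orphaned in n8n
  await context.cleanup();
});

afterAll(async () => {
  // Outside CI, also sweep tagged workflows left behind by earlier runs
  if (!process.env.CI) {
    await cleanupOrphanedWorkflows();
  }
});
```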
### Workflow Naming
- All test workflows use timestamps: `[MCP-TEST] Description 1696723200000`
- Prevents name collisions
- Easy identification in n8n UI
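Names might be built roughly like this (the helper itself is hypothetical; only the `[MCP-TEST]`-plus-timestamp format comes from this README):
```typescript
// Produces e.g. "[MCP-TEST] Streaming Agent 1696723200000"
const testWorkflowName = (description: string): string =>
  `[MCP-TEST] ${description} ${Date.now()}`;
```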
### Connection Patterns
- **Main connections**: Standard n8n flow (A → B)
- **AI connections**: Reversed flow (Language Model → AI Agent)
- Helper functions ensure the correct connection structure (sketched below)
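Roughly what the two connection helpers produce, following n8n's connections format; treat the exact helper output as an assumption:
```typescript
// Main connection: data flows Chat Trigger -> AI Agent
const mainConnections = {
  'Chat Trigger': {
    main: [[{ node: 'AI Agent', type: 'main', index: 0 }]],
  },
};

// AI connection: the sub-node (language model) is the source and the AI Agent
// is the target — the "reversed" direction createAIConnection() encapsulates
const aiConnections = {
  'OpenAI Model': {
    ai_languageModel: [[{ node: 'AI Agent', type: 'ai_languageModel', index: 0 }]],
  },
};
```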
## Key Validation Checks
### AI Agent
- Language model connections (1 or 2 for fallback)
- Output parser configuration
- Prompt type validation (auto vs define)
- System message recommendations
- Streaming mode constraints (CRITICAL)
- Memory connections (0-1 max)
- Tool connections
- maxIterations validation
### Chat Trigger
- responseMode validation (streaming vs lastNode)
- Streaming requires AI Agent target
- AI Agent in streaming mode: NO main output allowed
### Basic LLM Chain
- Exactly 1 language model (no fallback)
- Memory connections (0-1 max)
- No tools support (error if connected)
- Prompt configuration validation
### AI Tools
- HTTP Request Tool: requires toolDescription + URL
- Code Tool: requires jsCode
- Vector Store Tool: requires toolDescription + vector store connection
- Workflow Tool: requires workflowId
- Calculator Tool: no configuration required
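A sketch of the minimal parameters these checks look for, using the helpers from `helpers.ts` (option shapes and values are assumptions; only the required fields come from the list above):
```typescript
import {
  createHTTPRequestToolNode,
  createCodeToolNode,
  createWorkflowToolNode,
  createCalculatorToolNode,
} from './helpers';

const httpTool = createHTTPRequestToolNode({
  toolDescription: 'Fetches current weather for a city', // required
  url: 'https://api.example.com/weather',                // required
});

const codeTool = createCodeToolNode({
  jsCode: 'return { result: query.toUpperCase() };',     // required
});

const workflowTool = createWorkflowToolNode({
  workflowId: 'abc123', // required — hypothetical ID of an existing sub-workflow
});

const calculatorTool = createCalculatorToolNode();        // no configuration required
```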
## Validation Error Codes
Tests verify these error codes are correctly detected:
- `MISSING_LANGUAGE_MODEL` - No language model connected
- `MISSING_TOOL_DESCRIPTION` - Tool missing description
- `MISSING_URL` - HTTP tool missing URL
- `MISSING_CODE` - Code tool missing code
- `MISSING_WORKFLOW_ID` - Workflow tool missing ID
- `MISSING_PROMPT_TEXT` - Prompt type=define but no text
- `MISSING_CONNECTIONS` - Chat Trigger has no output
- `STREAMING_WITH_MAIN_OUTPUT` - AI Agent in streaming mode with main output
- `STREAMING_WRONG_TARGET` - Chat Trigger streaming to non-AI-Agent
- `STREAMING_AGENT_HAS_OUTPUT` - Streaming agent has output connection
- `MULTIPLE_LANGUAGE_MODELS` - LLM Chain with multiple models
- `MULTIPLE_MEMORY_CONNECTIONS` - Multiple memory connected
- `TOOLS_NOT_SUPPORTED` - Basic LLM Chain with tools
- `TOO_MANY_LANGUAGE_MODELS` - AI Agent with 3+ models
- `FALLBACK_MISSING_SECOND_MODEL` - needsFallback=true but only 1 model connected
- `MULTIPLE_OUTPUT_PARSERS` - Multiple output parsers
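In the tests, these codes are typically asserted against the validation result; a hedged sketch of that pattern (the `errors[].code` result shape is an assumption):
```typescript
import { expect } from 'vitest';

type ValidationResult = { valid: boolean; errors?: Array<{ code: string; message?: string }> };

// Collect the codes reported by a validation run
const errorCodes = (result: ValidationResult): string[] =>
  (result.errors ?? []).map((e) => e.code);

// Multi-error detection: one deliberately broken workflow should surface
// several independent codes at once
function expectCodes(result: ValidationResult, expected: string[]): void {
  expect(errorCodes(result)).toEqual(expect.arrayContaining(expected));
}
```
For example, `expectCodes(result, ['MISSING_URL', 'MISSING_CODE'])` after validating a workflow whose tools were intentionally misconfigured.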
## Bug Fix Validation
### v2.17.0 Node Type Normalization
Test 5 in `e2e-validation.test.ts` validates the fix for node type normalization:
- Creates AI Agent + OpenAI Model + HTTP Request Tool
- Connects tool via ai_tool connection
- Verifies NO false "no tools connected" warning
- Validates workflow is valid
This test would have caught the bug where:
```typescript
// BUG: Incorrect comparison
sourceNode.type === 'nodes-langchain.chatTrigger' // ❌ Never matches
// FIX: Use normalizer
NodeTypeNormalizer.normalizeToFullForm(sourceNode.type) === 'nodes-langchain.chatTrigger' // ✅ Works
```
## Success Criteria
All tests should:
- ✅ Create workflows in real n8n
- ✅ Validate using actual MCP tools (handleValidateWorkflow)
- ✅ Verify validation results match expected outcomes
- ✅ Clean up after themselves (no orphaned workflows)
- ✅ Run in under 30 seconds each
- ✅ Be deterministic (no flakiness)
## Test Coverage
Total: **32 tests** covering:
- **7 AI Agent tests** - Complete AI Agent validation logic
- **5 Chat Trigger tests** - Streaming mode and connection validation
- **6 Basic LLM Chain tests** - LLM Chain constraints and requirements
- **9 AI Tool tests** - All AI tool sub-node types
- **5 E2E tests** - Complex workflows and multi-error detection
## Coverage Summary
### Validation Features Tested
- ✅ Language model connections (required, fallback)
- ✅ Output parser configuration
- ✅ Prompt type validation
- ✅ System message checks
- ✅ Streaming mode constraints
- ✅ Memory connections (single)
- ✅ Tool connections
- ✅ maxIterations validation
- ✅ Chat Trigger modes (streaming, lastNode)
- ✅ Tool description requirements
- ✅ Tool-specific parameters (URL, code, workflowId)
- ✅ Multi-error detection
- ✅ Node type normalization
- ✅ Connection validation (missing, invalid)
### Edge Cases Tested
- ✅ Empty/missing required fields
- ✅ Invalid configurations
- ✅ Multiple connections (when not allowed)
- ✅ Streaming with main output (forbidden)
- ✅ Tool connections to non-agent nodes
- ✅ Fallback model configuration
- ✅ Complex workflows with all components
## Recommendations
### Additional Tests (Future)
1. **Performance tests** - Validate large AI workflows (20+ nodes)
2. **Credential validation** - Test with invalid/missing credentials
3. **Expression validation** - Test n8n expressions in AI node parameters
4. **Cross-version tests** - Test different node typeVersions
5. **Concurrent validation** - Test multiple workflows in parallel
### Test Maintenance
- Update tests when new AI nodes are added
- Add tests for new validation rules
- Keep helpers.ts updated with new node types
- Verify error codes match specification
## Notes
- Tests create real workflows in n8n (not mocked)
- Each test is independent (no shared state)
- Workflows are automatically cleaned up
- Tests use actual MCP validation handlers
- All AI connection types are tested
- Streaming mode validation is comprehensive
- Node type normalization is validated