---
name: 'AI Executor Integration'
description: 'Tests AI executor in prompt and full modes, validates AI configuration and prompt generation for all tools'
on:
  pull_request:
    paths:
      - 'src/utils/ai-executor.ts'
      - 'src/config/ai-config.ts'
      - 'src/prompts/**'
      - 'src/tools/**'
  schedule:
    - cron: '0 6 * * *'
  workflow_dispatch:
permissions:
  issues: read
  pull-requests: read
safe-outputs:
  create-issue:
    title-prefix: '[ai-executor]'
  add-comment:
tools:
  bash: true
  github:
    toolsets: [issues, pull_requests]
---
# AI Executor Integration
You validate the AI execution subsystem of the MCP server, testing both prompt mode (always available) and full mode (when OpenRouter API key is configured).
## Context
The mcp-adr-analysis-server operates in two modes:
1. **Prompt Mode** (default, no API key): Returns structured prompts that the calling LLM can execute
2. **Full Mode** (with `OPENROUTER_API_KEY`): Sends prompts to OpenRouter.ai and returns AI-generated results
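The mode selection described above can be sketched in shell terms. This is a hypothetical illustration of the decision logic only; the server itself derives the mode in `src/config/ai-config.ts`, and `detect_execution_mode` is not a real helper in the repository:

```shell
# Hypothetical sketch: the server runs in full mode only when an
# OpenRouter key is present, otherwise it falls back to prompt mode.
detect_execution_mode() {
  if [ -n "${OPENROUTER_API_KEY:-}" ]; then
    echo "full"
  else
    echo "prompt"
  fi
}
```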
Key files:
- `src/utils/ai-executor.ts` — Core AI execution logic with caching and retry
- `src/config/ai-config.ts` — AI configuration management (model, temperature, tokens)
- `src/prompts/` — Prompt templates for each tool
## Validation Steps
### Step 0: Install Dependencies and Build
Install build tools required for native module compilation (e.g., tree-sitter), then install npm dependencies with error-tolerant fallbacks, and build the project.
```bash
# Install build tools for native modules (requires sudo)
sudo apt-get update -qq && sudo apt-get install -y build-essential python3 || echo "⚠️ Could not install build tools (may already be present)"
# Install npm dependencies with fallback
if ! npm ci; then
  echo "⚠️ npm ci failed, falling back to npm install..."
  npm install
fi
# Rebuild tree-sitter native bindings (non-blocking)
echo "🔨 Rebuilding tree-sitter native bindings..."
if ! npm rebuild tree-sitter; then
  echo "⚠️ Tree-sitter rebuild failed, but continuing (tests handle graceful fallback)"
fi
# Build the project (required for dist/ artifacts)
npm run build
```
If `npm` is not available or both install commands fail, Steps 1-5 will not be able to run. Report the dependency installation failure as the root cause.
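A minimal pre-flight check can surface that root cause explicitly before any later step runs. This is a hypothetical helper, not part of the repository:

```shell
# Hypothetical helper: fail fast and name the missing tool, so the
# report can cite it as the root cause instead of a downstream error.
require_cmd() {
  if ! command -v "$1" >/dev/null 2>&1; then
    echo "❌ Required command '$1' not found; report this as the root cause" >&2
    return 1
  fi
}
```

For example, `require_cmd npm` at the top of Step 0 turns a confusing cascade of failures into a single actionable message.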
### Step 1: Test Prompt Mode
This must always work, regardless of API key availability:
```bash
EXECUTION_MODE=prompt NODE_ENV=test npm test -- tests/utils/ai-executor.test.ts --verbose
```
Verify that:
- The AI executor initializes without errors
- All tools can generate prompts without an API key
- Prompt output follows the expected structure
### Step 2: Validate AI Configuration
```bash
# --input-type=module lets -e evaluate ESM import syntax
node --input-type=module -e "
import { getAIConfig } from './dist/src/config/ai-config.js';
const config = getAIConfig();
console.log('AI Config loaded successfully');
console.log('Model:', config.model);
console.log('Execution mode:', config.executionMode);
if (!config.model || !config.executionMode) {
  console.error('Invalid AI configuration');
  process.exit(1);
}
console.log('AI configuration valid');
"
```
### Step 3: Test Prompt Generation
Run prompt-related tests to verify all tools can generate valid prompts:
```bash
npm test -- --testPathPattern="prompt.*test" --verbose
```
### Step 4: Test Full Mode (if API key available)
If the `OPENROUTER_API_KEY` secret is available, test full mode:
```bash
EXECUTION_MODE=full OPENROUTER_API_KEY=$OPENROUTER_API_KEY AI_MODEL=anthropic/claude-3-haiku NODE_ENV=test npm test -- tests/utils/ai-executor.test.ts --verbose
```
If the API key is not available, skip this step and note it in the report.
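The skip logic can be sketched as a guard around the command above. The wrapper function is hypothetical; the test invocation inside it is the one from Step 4:

```shell
# Hypothetical guard: run full-mode tests only when the secret is
# present, and report a clean skip otherwise.
run_full_mode_tests() {
  if [ -z "${OPENROUTER_API_KEY:-}" ]; then
    echo "skipped: OPENROUTER_API_KEY not set"
    return 0
  fi
  EXECUTION_MODE=full AI_MODEL=anthropic/claude-3-haiku NODE_ENV=test \
    npm test -- tests/utils/ai-executor.test.ts --verbose
}
```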
### Step 5: Verify Integration Points
Check that the AI executor properly integrates with:
- The caching system (results should be cacheable)
- The knowledge graph (AI results should be recordable)
- Error handling (API failures should fall back to prompt mode gracefully)
## On Failure
If any step fails:
1. **For PR events**: Add a comment to the PR with:
   - Which test(s) failed
   - Whether the failure is in prompt mode (critical) or full mode (may be API-related)
   - Suggested fix
2. **For schedule/dispatch events**: Create an issue:

   **Title**: `[ai-executor] {failure description}`
   **Body**:

   ```markdown
   ## AI Executor Integration Failure
   **Trigger**: {schedule/dispatch}
   **Date**: {date}
   **API Key Available**: {yes/no}
   ### Failure Details
   {which step failed and full error output}
   ### Diagnosis
   {analysis of whether this is a code issue, API issue, or configuration issue}
   ### Recommended Fix
   1. {step 1}
   2. {step 2}
   ---
   _Generated by AI Executor Integration agentic workflow_
   ```
## On Success
- **For PR events**: Add a brief comment confirming AI executor tests passed (note which modes were tested)
- **For schedule/dispatch events**: No issue needed
---
_Replaces: .github/agents/ai-executor-integration.yml_