# VulneraMCP Workflow Documentation
## Overview
VulneraMCP is an AI-powered bug bounty hunting platform that operates as a Model Context Protocol (MCP) server. It provides security testing tools, reconnaissance capabilities, and vulnerability detection through an MCP-compatible interface.
## Architecture Flow
### Visual Architecture Diagram
```mermaid
graph TB
Client[MCP Client<br/>Cursor/Claude Desktop]
Server[MCP Server<br/>src/index.ts]
Tools[Tool Handlers<br/>src/tools/*.ts]
Recon[Recon Tools<br/>recon.*]
Security[Security Tools<br/>security.*]
JS[JS Analysis<br/>js.*]
ZAP[ZAP Integration<br/>zap.*]
DB[Database Tools<br/>db.*]
Render[Render Tools<br/>render.*]
ExtTools[External Tools<br/>subfinder, httpx, amass]
ZAPTool[OWASP ZAP<br/>Scanner]
Browser[Puppeteer<br/>Browser]
Postgres[(PostgreSQL<br/>Findings & Results)]
Redis[(Redis<br/>Cache & Memory)]
Dashboard[Dashboard Server<br/>Express API]
Client -->|JSON-RPC 2.0| Server
Server --> Tools
Tools --> Recon
Tools --> Security
Tools --> JS
Tools --> ZAP
Tools --> DB
Tools --> Render
Recon --> ExtTools
Security --> ExtTools
ZAP --> ZAPTool
Render --> Browser
DB --> Postgres
Recon --> Redis
Security --> Postgres
ZAP --> Postgres
Dashboard --> Postgres
Dashboard -->|Web UI| BrowserUI[Web Browser]
```
### System Components
```
┌─────────────────┐
│   MCP Client    │  (Cursor, Claude Desktop, etc.)
│ (AI Assistant)  │
└────────┬────────┘
         │ JSON-RPC 2.0
         │ (stdin/stdout)
         ▼
┌─────────────────┐
│   MCP Server    │  (src/index.ts)
│  (Main Entry)   │
└────────┬────────┘
         │
         ├──► Tool Registration
         │    ├── recon.*
         │    ├── security.*
         │    ├── js.*
         │    ├── zap.*
         │    ├── db.*
         │    └── ...
         │
         ▼
┌─────────────────┐
│  Tool Handler   │  (src/tools/*.ts)
│  (Execution)    │
└────────┬────────┘
         │
         ├──► External Tools
         │    ├── subfinder, httpx, amass
         │    ├── OWASP ZAP
         │    └── Puppeteer (rendering)
         │
         ├──► Database
         │    ├── PostgreSQL (findings, results)
         │    └── Redis (caching, working memory)
         │
         └──► Dashboard
              └── Express API (dashboard-server.js)
```
## Detailed Workflow
### 1. Server Initialization
**File:** `src/index.ts`
**Steps:**
1. Create MCP Server instance
2. Register all tool modules:
- `registerReconTools()` - Reconnaissance tools
- `registerSecurityTools()` - Security testing
- `registerJsTools()` - JavaScript analysis
- `registerZAPTools()` - ZAP integration
- `registerDatabaseTools()` - Database operations
- `registerRenderTools()` - Screenshot/DOM extraction
- `registerTrainingTools()` - AI training data
- `registerCSRFTools()` - CSRF testing
- Optional: `registerBurpTools()`, `registerCaidoTools()`
3. Initialize connections:
- PostgreSQL connection pool
- Redis connection (optional)
4. Start MCP server (listens on stdin/stdout for JSON-RPC)
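The registration step can be pictured as a central registry that each tool module populates. The sketch below is illustrative (the names `registry`, `registerTool`, and the inline handler are hypothetical, not the actual VulneraMCP source); it only shows the pattern the modules follow:

```typescript
// Minimal sketch of the tool-registration pattern: each module registers
// named handlers in a central registry that the server dispatches against.
type ToolResult = { content: { type: "text"; text: string }[] };
type ToolHandler = (args: Record<string, unknown>) => Promise<ToolResult>;

const registry = new Map<string, ToolHandler>();

function registerTool(name: string, handler: ToolHandler): void {
  registry.set(name, handler);
}

// A recon module would register its tools roughly like this:
function registerReconTools(): void {
  registerTool("recon.subfinder", async (args) => ({
    content: [{ type: "text", text: `subdomains for ${args.domain}` }],
  }));
}

registerReconTools();
```

Because registration only fills a map, modules can register in any order and optional modules (Burp, Caido) can simply be skipped.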
### 2. Tool Request Flow
**File:** `src/mcp/server.ts`
**Process Flow Diagram:**
```mermaid
sequenceDiagram
participant Client
participant Server as MCP Server
participant Tool as Tool Handler
participant Ext as External Tool/DB
Client->>Server: JSON-RPC Request
Note over Client,Server: {"method": "tools/call",<br/>"params": {"name": "recon.subfinder",<br/>"arguments": {...}}}
Server->>Server: Lookup tool by name
Server->>Tool: Execute handler
Tool->>Ext: Run command/query
Ext-->>Tool: Return result
Tool->>Tool: Format result
Tool-->>Server: Return ToolResult
Server->>Server: Format JSON-RPC response
Server-->>Client: JSON-RPC Response
Note over Server,Client: {"result": {"content": [...]}}
```
**Text Flow:**
```
Client Request
      │
      ▼
JSON-RPC 2.0 Message
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "recon.subfinder",
    "arguments": { "domain": "example.com" }
  }
}
      │
      ▼
Server.tool() lookup
      │
      ▼
Tool Handler Execution
      │
      ▼
Result Formatting
      │
      ▼
JSON-RPC Response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [{ "type": "text", "text": "..." }]
  }
}
```
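The lookup-and-dispatch step above can be sketched as a pure function. This is a hedged illustration, not the real `src/mcp/server.ts` code: the `handlers` table and `dispatch` helper are hypothetical, but the request/response envelopes and error codes follow JSON-RPC 2.0 as used in this document.

```typescript
// Sketch of tools/call dispatch: look up the handler by tool name, run it,
// and wrap the outcome in a JSON-RPC 2.0 result or error envelope.
type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: { name: string; arguments: Record<string, unknown> };
};

type JsonRpcResponse =
  | { jsonrpc: "2.0"; id: number; result: { content: { type: string; text: string }[] } }
  | { jsonrpc: "2.0"; id: number; error: { code: number; message: string } };

const handlers: Record<string, (args: Record<string, unknown>) => string> = {
  "recon.subfinder": (args) => `subdomains of ${args.domain}`,
};

function dispatch(req: JsonRpcRequest): JsonRpcResponse {
  const handler = handlers[req.params.name];
  if (!handler) {
    // Unknown tool maps to the JSON-RPC "method not found" code.
    return { jsonrpc: "2.0", id: req.id, error: { code: -32601, message: "Tool not found" } };
  }
  try {
    const text = handler(req.params.arguments);
    return { jsonrpc: "2.0", id: req.id, result: { content: [{ type: "text", text }] } };
  } catch (e) {
    // Handler exceptions become JSON-RPC internal errors.
    return { jsonrpc: "2.0", id: req.id, error: { code: -32603, message: String(e) } };
  }
}
```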
### 3. Tool Categories & Workflows
#### A. Reconnaissance Workflow
**Tools:** `recon.subfinder`, `recon.httpx`, `recon.amass`, `recon.dns`, `recon.full`
**Flow Diagram:**
```mermaid
flowchart LR
Start[recon.full] --> Subfinder[subfinder]
Subfinder --> Parse[Parse Subdomains]
Parse --> Redis[(Save to Redis)]
Parse --> Httpx[httpx Check]
Httpx --> Live[Filter Live Hosts]
Live --> Amass[amass Enum]
Amass --> Aggregate[Aggregate Results]
Aggregate --> DB[(Save to PostgreSQL)]
DB --> End[Return Results]
```
**Detailed Flow:**
```
1. recon.subfinder
   ├──► Execute subfinder command
   ├──► Parse subdomains
   ├──► Save to Redis (working memory)
   └──► Save test result to PostgreSQL

2. recon.httpx
   ├──► Check which hosts are live
   ├──► Get status codes
   └──► Filter active endpoints

3. recon.full
   ├──► Run subfinder → httpx → amass
   ├──► Aggregate results
   └──► Return comprehensive recon data
```
**Data Storage:**
- Redis: Temporary working memory (TTL: 3600s)
- PostgreSQL: Test results table
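The TTL-based working memory can be sketched with an in-memory stand-in for Redis. The `WorkingMemory` class below is hypothetical (the project talks to a real Redis client); it only demonstrates the SETEX-style store-with-expiry semantics, with an injectable clock so expiry is testable:

```typescript
// In-memory stand-in for the Redis working memory: values expire after a
// TTL (default 3600s, matching the recon working-memory TTL above).
type Entry = { value: string; expiresAt: number };

class WorkingMemory {
  private store = new Map<string, Entry>();
  // The clock is injectable so expiry can be simulated without waiting.
  constructor(private now: () => number = Date.now) {}

  // Mirrors Redis SETEX: store a value with a TTL in seconds.
  set(key: string, value: string, ttlSeconds = 3600): void {
    this.store.set(key, { value, expiresAt: this.now() + ttlSeconds * 1000 });
  }

  get(key: string): string | null {
    const entry = this.store.get(key);
    if (!entry) return null;
    if (entry.expiresAt <= this.now()) {
      this.store.delete(key); // lazily expire, like Redis
      return null;
    }
    return entry.value;
  }
}
```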
#### B. Security Testing Workflow
**Tools:** `security.test_xss`, `security.test_sqli`, `security.test_idor`, `security.test_csrf`, etc.
**Flow:**
```
1. security.test_xss
   ├──► Send XSS payloads
   ├──► Analyze response
   ├──► Detect reflected/executed payloads
   └──► Save finding if vulnerable

2. security.test_sqli
   ├──► Test SQL injection payloads
   ├──► Detect error messages/time delays
   ├──► Optionally use sqlmap
   └──► Save finding if vulnerable

3. security.test_csrf
   ├──► Analyze CSRF protection
   ├──► Test bypass techniques
   ├──► Generate PoC HTML
   └──► Save finding if vulnerable
```
**Data Storage:**
- PostgreSQL: `findings` table (if vulnerability found)
- PostgreSQL: `test_results` table (all test attempts)
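The core reflection check in the XSS step can be illustrated with a small heuristic: inject a unique marker payload, then see whether the response echoes it back unescaped. This is a simplified sketch, not the project's `security.test_xss` logic (which also analyzes injection context and execution); the names here are illustrative.

```typescript
// Reflected-XSS heuristic: vulnerable-looking only if the raw payload
// appears verbatim, i.e. the server did not HTML-encode the markup.
function looksReflected(payload: string, responseBody: string): boolean {
  return responseBody.includes(payload);
}

// A unique marker makes accidental matches unlikely.
const payload = '<script>alert("vmcp-7f3a")</script>';
const vulnerableBody = `<p>Results for ${payload}</p>`;
const safeBody =
  '<p>Results for &lt;script&gt;alert(&quot;vmcp-7f3a&quot;)&lt;/script&gt;</p>';
```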
#### C. JavaScript Analysis Workflow
**Tools:** `js.download`, `js.beautify`, `js.find_endpoints`, `js.extract_secrets`, `js.analyze`
**Flow:**
```
1. js.download
   ├──► Fetch JavaScript file from URL
   └──► Return raw source code

2. js.beautify
   ├──► Format minified JavaScript
   └──► Make code readable

3. js.find_endpoints
   ├──► Extract API endpoints, URLs, paths
   ├──► Use regex patterns
   └──► Return discovered endpoints

4. js.extract_secrets
   ├──► Heuristic secret detection
   ├──► Find API keys, tokens, secrets
   └──► Return potential secrets

5. js.analyze (combined)
   ├──► Download → Beautify → Extract endpoints & secrets
   └──► Return comprehensive analysis
```
**Data Storage:**
- Results returned to client (not stored by default)
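The regex-based extraction behind `js.find_endpoints` and `js.extract_secrets` can be sketched as below. The patterns are deliberately simplified examples, not the project's actual rule set:

```typescript
// Pull quoted absolute paths (e.g. "/api/users") out of JavaScript source.
function findEndpoints(source: string): string[] {
  const re = /["'](\/[A-Za-z0-9_\-./]+)["']/g;
  const hits = new Set<string>();
  for (const m of source.matchAll(re)) hits.add(m[1]);
  return [...hits];
}

// Heuristic secret detection: common key names followed by a long literal.
function extractSecrets(source: string): string[] {
  const re = /(?:api[_-]?key|token|secret)["']?\s*[:=]\s*["']([A-Za-z0-9_\-]{16,})["']/gi;
  const hits: string[] = [];
  for (const m of source.matchAll(re)) hits.push(m[1]);
  return hits;
}

const sample = `
  fetch("/api/users").then(r => r.json());
  const apiKey = "AKIA1234567890EXAMPLE";
`;
```

Real secret scanners add entropy checks and provider-specific prefixes; a bare keyword regex like this produces false positives and is only a starting point.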
#### D. ZAP Integration Workflow
**Tools:** `zap.start_spider`, `zap.start_active_scan`, `zap.get_alerts`, `zap.proxy_process`
**Flow:**
```
1. zap.start_spider
   ├──► Start ZAP spider scan
   ├──► Monitor progress
   └──► Return discovered URLs

2. zap.start_active_scan
   ├──► Start active vulnerability scan
   ├──► Monitor progress
   └──► Return scan status

3. zap.get_alerts
   ├──► Query ZAP for security alerts
   ├──► Filter by risk level/URL
   └──► Return vulnerability findings

4. zap.proxy_process
   ├──► Send request through ZAP proxy
   ├──► Analyze with MCP proxy layer
   ├──► Correlate ZAP alerts + custom findings
   └──► Return enhanced findings
```
**Data Storage:**
- ZAP: Stores alerts internally
- PostgreSQL: Findings saved via `db.save_finding`
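The filter-by-risk step in `zap.get_alerts` can be sketched as follows. ZAP reports each alert with a risk of Informational/Low/Medium/High; the record shape and helper below are illustrative, not the ZAP API schema:

```typescript
// Filter ZAP-style alerts by minimum risk (and optional URL prefix),
// returning the highest-risk alerts first.
type ZapAlert = { name: string; risk: "Informational" | "Low" | "Medium" | "High"; url: string };

const riskRank: Record<ZapAlert["risk"], number> = {
  Informational: 0,
  Low: 1,
  Medium: 2,
  High: 3,
};

function filterAlerts(alerts: ZapAlert[], minRisk: ZapAlert["risk"], urlPrefix = ""): ZapAlert[] {
  return alerts
    .filter((a) => riskRank[a.risk] >= riskRank[minRisk] && a.url.startsWith(urlPrefix))
    .sort((a, b) => riskRank[b.risk] - riskRank[a.risk]); // highest risk first
}

const alerts: ZapAlert[] = [
  { name: "X-Content-Type-Options Missing", risk: "Low", url: "https://example.com/" },
  { name: "SQL Injection", risk: "High", url: "https://example.com/search" },
  { name: "Cross Site Scripting", risk: "Medium", url: "https://example.com/q" },
];
```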
#### E. Database Workflow
**Tools:** `db.save_finding`, `db.get_findings`, `db.init`, `db.get_statistics`
**Flow:**
```
1. db.save_finding
   ├──► Insert into findings table
   ├──► Store: target, type, severity, description, payload, response, score
   └──► Return finding ID

2. db.get_findings
   ├──► Query findings table
   ├──► Filter by target, severity, type
   └──► Return paginated results

3. db.get_statistics
   ├──► Aggregate findings data
   ├──► Count by severity
   ├──► Calculate success rates
   └──► Return statistics
```
**Database Schema:**
- `findings` table: Vulnerability findings
- `test_results` table: All test attempts
- `training_data` table: AI training patterns
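The count-by-severity aggregation in `db.get_statistics` is equivalent to a `GROUP BY` over the `findings` table. The in-memory sketch below uses the schema's field names, but the helper itself is hypothetical:

```typescript
// In-memory equivalent of:
//   SELECT severity, COUNT(*) FROM findings GROUP BY severity
type Finding = { target: string; type: string; severity: "low" | "medium" | "high" | "critical" };

function countBySeverity(findings: Finding[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const f of findings) counts[f.severity] = (counts[f.severity] ?? 0) + 1;
  return counts;
}

const rows: Finding[] = [
  { target: "example.com", type: "xss", severity: "medium" },
  { target: "example.com", type: "sqli", severity: "high" },
  { target: "example.com", type: "xss", severity: "medium" },
];
```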
#### F. Rendering Workflow
**Tools:** `render.screenshot`, `render.extract_dom`, `render.extract_forms`, `render.execute_js`
**Flow:**
```
1. render.screenshot
   ├──► Launch Puppeteer browser
   ├──► Navigate to URL
   ├──► Capture screenshot
   └──► Return image data

2. render.extract_dom
   ├──► Load page with Puppeteer
   ├──► Extract DOM structure
   └──► Return accessibility tree

3. render.extract_forms
   ├──► Find all forms on page
   ├──► Extract form fields, actions, methods
   └──► Return form data

4. render.execute_js
   ├──► Execute JavaScript in page context
   └──► Return execution results
```
**Data Storage:**
- Screenshots saved to filesystem (optional)
- Results returned to client
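To show the kind of data `render.extract_forms` returns, here is a stand-in that pulls form actions and methods from static HTML with a regex. This is only an illustration: the real tool evaluates inside a live browser via Puppeteer, which also sees dynamically injected forms that static parsing misses.

```typescript
// Static stand-in for render.extract_forms: collect action/method pairs
// from <form> tags in raw HTML.
type FormInfo = { action: string; method: string };

function extractForms(html: string): FormInfo[] {
  const forms: FormInfo[] = [];
  const re = /<form\b([^>]*)>/gi;
  for (const m of html.matchAll(re)) {
    const attrs = m[1];
    const action = /action=["']([^"']*)["']/i.exec(attrs)?.[1] ?? "";
    // HTML forms default to GET when no method is given.
    const method = (/method=["']([^"']*)["']/i.exec(attrs)?.[1] ?? "GET").toUpperCase();
    forms.push({ action, method });
  }
  return forms;
}

const page = `<form action="/login" method="post"><input name="user"></form>`;
```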
### 4. Dashboard Workflow
**File:** `dashboard-server.js`
**Flow:**
```
Web Browser
      │
      ▼
Express Server (Port 3000)
      │
      ├──► GET /api/statistics
      │    ├──► Query PostgreSQL
      │    └──► Return aggregated stats
      │
      ├──► GET /api/findings
      │    ├──► Query findings table
      │    ├──► Filter & paginate
      │    └──► Return JSON
      │
      ├──► GET /api/test-results
      │    ├──► Query test_results table
      │    └──► Return test history
      │
      └──► GET /
           └──► Serve dashboard HTML
```
**Data Flow:**
- Dashboard reads from PostgreSQL
- Near-real-time updates via API polling
- No direct MCP server connection
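The filter-and-paginate step behind `GET /api/findings` can be sketched as a pure helper. The parameter names mirror the flow above, but `pageFindings` is illustrative, not the `dashboard-server.js` implementation (which filters in SQL rather than in memory):

```typescript
// Filter findings by target/severity, then return one page of results.
type Finding = { id: number; target: string; severity: string };

function pageFindings(
  all: Finding[],
  opts: { target?: string; severity?: string; page?: number; perPage?: number } = {},
): Finding[] {
  const { target, severity, page = 1, perPage = 20 } = opts;
  const filtered = all.filter(
    (f) => (!target || f.target === target) && (!severity || f.severity === severity),
  );
  const start = (page - 1) * perPage;
  return filtered.slice(start, start + perPage);
}

const findings: Finding[] = [
  { id: 1, target: "example.com", severity: "high" },
  { id: 2, target: "example.com", severity: "low" },
  { id: 3, target: "other.com", severity: "high" },
];
```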
### 5. Complete Bug Bounty Workflow Example
**Typical Workflow:**
```mermaid
sequenceDiagram
participant Client as MCP Client
participant Server as MCP Server
participant Recon as Recon Tools
participant JS as JS Analysis
participant Security as Security Tools
participant ZAP as ZAP Integration
participant DB as Database
participant Dashboard as Dashboard
Client->>Server: recon.full(domain)
Server->>Recon: Execute subfinder
Recon->>Recon: Execute httpx
Recon->>DB: Save test results
Server-->>Client: Return subdomains
Client->>Server: js.analyze(url)
Server->>JS: Download & analyze
JS-->>Client: Return endpoints & secrets
Client->>Server: security.test_xss(url)
Server->>Security: Test XSS payloads
Security->>DB: Save finding (if vulnerable)
Server-->>Client: Return test results
Client->>Server: zap.start_spider(url)
Server->>ZAP: Start spider scan
ZAP-->>Server: Return discovered URLs
Client->>Server: zap.get_alerts()
Server->>ZAP: Query alerts
ZAP-->>Server: Return vulnerabilities
Server->>DB: Save findings
Server-->>Client: Return alerts
Client->>Dashboard: View findings
Dashboard->>DB: Query findings
DB-->>Dashboard: Return data
Dashboard-->>Client: Display results
```
**Step-by-Step Flow:**
```
1. RECONNAISSANCE
   ├──► recon.full domain: example.com
   ├──► Discover subdomains (subfinder)
   ├──► Check live hosts (httpx)
   └──► Store results in Redis

2. DISCOVERY
   ├──► js.analyze url: https://example.com/app.js
   ├──► Download JavaScript
   ├──► Extract endpoints
   └──► Find potential secrets

3. TESTING
   ├──► security.test_xss url: https://example.com/search?q=test
   ├──► Send XSS payloads
   ├──► Analyze response
   └──► db.save_finding (if vulnerable)

4. SCANNING
   ├──► zap.start_spider url: https://example.com
   ├──► Crawl website
   ├──► zap.start_active_scan
   ├──► zap.get_alerts
   └──► db.save_finding (for each alert)

5. ANALYSIS
   ├──► db.get_findings target: example.com
   ├──► Review all findings
   └──► Dashboard: http://localhost:3000
```
### 6. Data Persistence
**PostgreSQL Tables:**
1. **findings**
- Stores discovered vulnerabilities
- Fields: id, target, type, severity, description, payload, response, score, timestamp
2. **test_results**
- Stores all test attempts
- Fields: id, target, test_type, success, score, result_data, error_message, timestamp
3. **training_data**
- Stores AI training patterns
- Fields: id, source, vulnerability_type, target_pattern, payload_pattern, success_pattern, score
**Redis (Optional):**
- Working memory: Temporary data (TTL-based)
- Caching: Frequently accessed data
### 7. Error Handling
**Flow:**
```
Tool Execution
      │
      ├──► Success
      │    └──► formatToolResult(true, data)
      │
      └──► Error
           ├──► Catch exception
           ├──► formatToolResult(false, null, error.message)
           └──► Log error (console.error)
```
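A plausible shape for the `formatToolResult` helper referenced above is sketched below. The actual signature and fields in the codebase may differ; this only illustrates the success/failure split:

```typescript
// Wrap tool outcomes in the consistent ToolResult format: success wraps
// the data as text content, failure carries the message and an error flag.
type ToolResult = {
  content: { type: "text"; text: string }[];
  isError?: boolean;
};

function formatToolResult(success: boolean, data: unknown, errorMessage?: string): ToolResult {
  if (success) {
    return { content: [{ type: "text", text: JSON.stringify(data) }] };
  }
  return {
    content: [{ type: "text", text: errorMessage ?? "Unknown error" }],
    isError: true,
  };
}
```

A handler then only needs a try/catch: return `formatToolResult(true, result)` on success, `formatToolResult(false, null, err.message)` in the catch block.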
**Error Types:**
- Tool not found: JSON-RPC error code -32601
- Internal error: JSON-RPC error code -32603
- Validation error: Tool-specific error messages
### 8. Integration Points
**External Tools:**
- **subfinder**: Subdomain discovery
- **httpx**: HTTP probing
- **amass**: DNS enumeration
- **OWASP ZAP**: Vulnerability scanning
- **Puppeteer**: Browser automation
- **PostgreSQL**: Data persistence
- **Redis**: Caching (optional)
**Optional Integrations:**
- **Burp Suite**: Traffic analysis (if available)
- **Caido**: Traffic analysis (if available)
## Key Design Patterns
1. **Tool Registration Pattern**: All tools register themselves with the server
2. **Result Formatting**: Consistent `ToolResult` format across all tools
3. **Non-blocking Initialization**: Database/Redis connections don't block server startup
4. **Optional Dependencies**: Server works without optional tools (Burp, Caido, Redis)
5. **Error Resilience**: Tools handle errors gracefully without crashing server
## Performance Considerations
- **Connection Pooling**: PostgreSQL uses connection pooling
- **Redis Caching**: Frequently accessed data cached
- **Async/Await**: All I/O operations are asynchronous
- **Resource Cleanup**: Browser instances closed on shutdown
## Security Considerations
- **Input Validation**: All tool inputs validated via JSON schemas
- **Command Injection Prevention**: External commands use safe execution
- **Rate Limiting**: Tools respect rate limits (via hunting/rate-limiter.js)
- **Authorization**: Users must have proper authorization before testing
---
**Last Updated:** 2024-11-28