Provides specialized scanning for SQL queries intended for PostgreSQL databases to detect and block injection attacks and dangerous operations.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type
@followed by the MCP server name and your instructions, e.g., "@Shrike Security MCP Serverscan this prompt for injection and redact any PII"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
shrike-mcp
MCP (Model Context Protocol) server for Shrike Security — protect AI agents from prompt injection, jailbreaks, SQL injection, data exfiltration, and malicious file operations.
Installation
Or use with npx:
Quick Start
With Claude Desktop
Add to your Claude Desktop configuration (~/.claude/claude_desktop_config.json):
Without an API key, scans run on the free tier (regex-only layers L1–L4). With an API key, you get the full 9-layer scan pipeline including LLM semantic analysis.
Environment Variables
Variable | Description | Default |
| API key for authenticated scans (enables L7/L8 LLM layers) | none (free tier) |
| URL of the Shrike backend API |
|
| Timeout for scan requests (ms) |
|
| Max requests per minute per customer |
|
| Enable debug logging ( |
|
Available Tools
scan_prompt
Scans user prompts for prompt injection, jailbreak attempts, and malicious content. Supports PII redaction with token-based rehydration.
Parameters:
Parameter | Type | Required | Description |
| string | Yes | The prompt text to scan |
| string | No | Conversation history for context-aware scanning |
| boolean | No | When true, PII is redacted before scanning. Response includes tokens for rehydration. |
Example:
scan_response
Scans LLM-generated responses before showing them to users. Detects system prompt leaks, unexpected PII, toxic language, and topic drift. Rehydrates PII tokens when provided.
Parameters:
Parameter | Type | Required | Description |
| string | Yes | The LLM-generated response to scan |
| string | No | The original prompt (enables PII diff and topic mismatch detection) |
| array | No | PII token map from |
Example:
scan_sql_query
Scans SQL queries for injection attacks and dangerous operations before execution.
Parameters:
Parameter | Type | Required | Description |
| string | Yes | The SQL query to scan |
| string | No | Target database name for context |
| boolean | No | Allow DROP/TRUNCATE for migrations (default: false) |
Example:
scan_file_write
Validates file paths and content before write operations. Checks for path traversal, secrets in content, and sensitive file access.
Parameters:
Parameter | Type | Required | Description |
| string | Yes | The target file path |
| string | Yes | The content to write |
| string | No | Write mode: |
Example:
scan_web_search
Scans web search queries for PII exposure, data exfiltration patterns, and blocked domains.
Parameters:
Parameter | Type | Required | Description |
| string | Yes | The search query to scan |
| string[] | No | List of target domains to validate |
Example:
report_bypass
Reports content that bypassed security checks to improve detection via ThreatSense pattern learning.
Parameters:
Parameter | Type | Required | Description |
| string | No | The prompt that bypassed detection |
| string | No | File path for file_write bypasses |
| string | No | File content that should have been blocked |
| string | No | SQL query that bypassed injection detection |
| string | No | Web search query with undetected PII |
| string | No | Type of mutation used (e.g., |
| string | No | Threat category (auto-inferred if not provided) |
| string | No | Additional notes about the bypass |
get_threat_intel
Retrieves current threat intelligence including active detection patterns, threat categories, and statistics.
Parameters:
Parameter | Type | Required | Description |
| string | No | Filter by threat category |
| number | No | Max patterns to return (default: 50) |
Response Format
All scan tools return a sanitized response:
Safe results return:
Security Model
This MCP server implements a fail-closed security model:
Network timeouts result in BLOCK (not allow)
Backend errors result in BLOCK (not allow)
Unknown content types result in BLOCK (not allow)
This prevents bypass attacks via service disruption.
Known Limitations
Free tier is regex-only — No LLM semantic analysis without API key
No offline mode — Requires network access to Shrike backend
Response Intelligence requires original prompt —
original_promptparam is optional but recommended for full L8 analysisRate limits are MCP-side only — Backend has separate per-tier limits
stdio transport only — No HTTP server mode; requires MCP-compatible host
Self-Hosting
To run your own Shrike backend:
Then point the MCP server to your local backend:
License
Apache License 2.0 — See LICENSE for details.
Support
GitHub Issues: https://github.com/Shrike-Security/shrike-mcp/issues
Email: support@shrikesecurity.com
Changelog
v1.0.0 (February 10, 2026)
Initial public release
7 MCP tools for AI agent security
9-layer detection pipeline
PII isolation with token rehydration
Response obfuscation for IP protection