
Shrike Security MCP Server

Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| MCP_DEBUG | No | Enable debug logging (true/false). | false |
| SHRIKE_API_KEY | No | API key for authenticated scans (enables L7/L8 LLM layers). Without an API key, scans run on the free tier (regex-only layers L1–L4). | (none) |
| SHRIKE_BACKEND_URL | No | URL of the Shrike backend API. | https://api.shrikesecurity.com/agent |
| MCP_SCAN_TIMEOUT_MS | No | Timeout for scan requests (ms). | 15000 |
| MCP_RATE_LIMIT_PER_MINUTE | No | Max requests per minute per customer. | 100 |
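
As an illustration, a client wrapper might load these variables roughly as follows. This is a minimal sketch, not the server's actual startup code; the variable names and defaults are taken from the table above, while the function name and return shape are assumptions.

```python
import os

def load_config(env=os.environ):
    """Read Shrike MCP settings from the environment, falling back to the documented defaults."""
    return {
        "debug": env.get("MCP_DEBUG", "false").lower() == "true",
        # No API key => free tier (regex-only layers L1-L4)
        "api_key": env.get("SHRIKE_API_KEY"),
        "backend_url": env.get("SHRIKE_BACKEND_URL", "https://api.shrikesecurity.com/agent"),
        "scan_timeout_ms": int(env.get("MCP_SCAN_TIMEOUT_MS", "15000")),
        "rate_limit_per_minute": int(env.get("MCP_RATE_LIMIT_PER_MINUTE", "100")),
    }
```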

Capabilities

Features and capabilities supported by this server

| Capability | Details |
|------------|---------|
| tools | {} |

Tools

Functions exposed to the LLM to take actions

scan_prompt

Scans text for security threats including PII, prompt injection, jailbreak attempts, and toxicity.

Returns a security assessment with:

  • blocked: true/false - whether the content was blocked

  • threat_type: category of threat detected (prompt_injection, jailbreak, pii_exposure, etc.)

  • severity: critical/high/medium/low

  • confidence: high/medium/low

  • guidance: actionable explanation of what was detected

  • request_id: unique identifier for this scan

If blocked=false, only request_id is returned (content is safe).

When redact_pii=true, PII is redacted client-side before scanning. The response includes:

  • pii_redaction.redacted_content: text with PII replaced by tokens like [EMAIL_1]

  • pii_redaction.tokens: array of {token, original, type} for rehydrating LLM responses

PII never leaves the MCP process when redaction is enabled.
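
The client-side redaction described above can be sketched like this. The `[EMAIL_1]` token format and the `{token, original, type}` entry shape come from the description; the email regex and function name are illustrative assumptions, not the server's actual redactor.

```python
import re

# Hypothetical email-only redactor illustrating the [EMAIL_1]-style token scheme.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_emails(text):
    """Replace each email with a numbered token; return (redacted_text, token_map)."""
    tokens = []
    def repl(match):
        token = f"[EMAIL_{len(tokens) + 1}]"
        tokens.append({"token": token, "original": match.group(0), "type": "email"})
        return token
    return EMAIL_RE.sub(repl, text), tokens
```

The token map stays in the MCP process; only the tokenized text is sent to the backend for scanning.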

report_bypass

Reports content that bypassed security checks to help improve detection.

Supports multiple bypass types:

  • Prompt bypasses: Use 'prompt' field

  • File write bypasses: Use 'filePath' and/or 'fileContent' fields

  • SQL bypasses: Use 'sqlQuery' field

  • Web search bypasses: Use 'searchQuery' field

The bypass will be analyzed and may generate a new detection pattern.
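
A caller could map bypass types to the documented field names along these lines. The field names (`prompt`, `filePath`, `fileContent`, `sqlQuery`, `searchQuery`) are from the list above; the helper and its type labels are hypothetical.

```python
def build_bypass_report(bypass_type, value):
    """Map a bypass type to the field name report_bypass expects (per the documented list)."""
    field_for = {
        "prompt": "prompt",
        "file_path": "filePath",
        "file_content": "fileContent",
        "sql": "sqlQuery",
        "web_search": "searchQuery",
    }
    if bypass_type not in field_for:
        raise ValueError(f"unknown bypass type: {bypass_type}")
    return {field_for[bypass_type]: value}
```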

get_threat_intel

Retrieves current threat intelligence including active detection patterns, threat categories, and statistics.

scan_web_search

Scans a web search query before execution for security issues.

Checks for:

  • PII in search queries (SSN, credit cards, API keys, private keys)

  • Data exfiltration patterns (searching for leaked credentials, Google dorks)

  • Blocked/suspicious domains (paste sites, suspicious TLDs)

Returns:

  • blocked: true/false

  • threat_type: blocked_domain, pii_exposure, etc.

  • severity: critical/high/medium/low

  • confidence: high/medium/low

  • guidance: actionable explanation

  • request_id: unique identifier
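
A rough pre-flight check for the categories above might look like this. The threat-type strings match the documented return values; the specific patterns and paste-site list are illustrative assumptions, and the real scan happens server-side across far more patterns.

```python
import re

# Illustrative patterns only; the actual detection layers are server-side.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
PASTE_SITES = ("pastebin.com", "paste.ee")

def quick_search_check(query):
    """Return (blocked, threat_type) for obvious local hits, else (False, None)."""
    if SSN_RE.search(query):
        return True, "pii_exposure"
    if any(site in query.lower() for site in PASTE_SITES):
        return True, "blocked_domain"
    return False, None
```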

scan_sql_query

Scans a SQL query before execution for security threats.

Checks for:

  • SQL injection patterns (UNION, stacked queries, tautologies, blind injection)

  • Destructive operations (DROP, TRUNCATE, DELETE without WHERE)

  • Privilege escalation (GRANT, CREATE USER)

  • PII extraction (queries on password/SSN/credit card columns)

Set allowDestructive=true to permit DROP/TRUNCATE for migrations.

Returns:

  • blocked: true/false

  • threat_type: sql_injection, etc.

  • severity: critical/high/medium/low

  • confidence: high/medium/low

  • guidance: actionable explanation

  • request_id: unique identifier
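
The destructive-operation category and the allowDestructive opt-in described above can be sketched as follows. The regexes are simplified illustrations of "DROP/TRUNCATE" and "DELETE without WHERE", not the server's actual SQL analysis.

```python
import re

# Simplified patterns for the documented destructive-operation checks.
DROP_OR_TRUNCATE = re.compile(r"^\s*(DROP|TRUNCATE)\b", re.IGNORECASE)
DELETE_NO_WHERE = re.compile(r"^\s*DELETE\s+FROM\s+\S+\s*;?\s*$", re.IGNORECASE)

def is_destructive(sql, allow_destructive=False):
    """Flag DROP/TRUNCATE and WHERE-less DELETE unless the caller opted in (e.g. migrations)."""
    if allow_destructive:
        return False
    return bool(DROP_OR_TRUNCATE.match(sql) or DELETE_NO_WHERE.match(sql))
```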

scan_file_write

Scans a file write operation before execution for security threats.

Checks:

  • Sensitive file paths (.env, credentials, SSH keys, certificates)

  • Path traversal attacks (../, system directories)

  • PII in content (SSN, credit cards, emails)

  • Secrets in content (API keys, passwords, tokens)

  • Malicious code patterns (reverse shells, fork bombs)

Returns:

  • blocked: true/false

  • threat_type: path_traversal, secrets_exposure, etc.

  • severity: critical/high/medium/low

  • confidence: high/medium/low

  • guidance: actionable explanation

  • request_id: unique identifier
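
The sensitive-path and traversal checks above could be approximated like this. The file names and system directories are illustrative examples drawn from the list; the helper name and exact heuristics are assumptions.

```python
import posixpath

# Illustrative sensitive-name and system-directory lists for the documented path checks.
SENSITIVE_NAMES = (".env", "credentials", "id_rsa")
SYSTEM_DIRS = ("/etc/", "/usr/", "/boot/")

def path_looks_risky(path):
    """Flag writes that escape via '..', target sensitive files, or land in system directories."""
    normalized = posixpath.normpath(path)
    if ".." in normalized.split("/"):
        return True
    if any(name in posixpath.basename(normalized) for name in SENSITIVE_NAMES):
        return True
    return normalized.startswith(SYSTEM_DIRS)
```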

scan_response

Scans an LLM-generated response before showing it to the user.

Detects:

  • System prompt leaks (LLM revealing its instructions)

  • Unexpected PII in output (PII not present in the original prompt)

  • Toxic or hostile language in generated content

  • Topic drift (response diverges from prompt intent)

  • Policy violations in generated content

Provide the original_prompt for best results — it enables PII diff analysis and topic mismatch detection.

When pii_tokens is provided (from scan_prompt with redact_pii=true), the response is rehydrated after scanning. Tokens like [EMAIL_1] are replaced with the original values. The rehydrated text is returned as rehydrated_response.

Returns:

  • blocked: true/false

  • threat_type: category of threat detected

  • severity/confidence/guidance: security assessment details

  • rehydrated_response: (when pii_tokens provided and response is safe) text with PII restored

  • request_id: unique identifier
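
The rehydration step described above amounts to substituting each token back for its original value. A minimal sketch, assuming the `{token, original, type}` entry shape documented for pii_redaction.tokens; the function name is hypothetical.

```python
def rehydrate(text, pii_tokens):
    """Replace redaction tokens (e.g. [EMAIL_1]) with their originals after a safe scan."""
    for entry in pii_tokens:
        text = text.replace(entry["token"], entry["original"])
    return text
```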

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Shrike-Security/shrike-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.