Skip to main content
Glama

scan_text

Analyze text for prompt injection, jailbreak attempts, data exfiltration, and social engineering threats using 42+ detection patterns to identify AI security risks.

Instructions

Scan text for prompt injection and security threats.

Analyzes the provided text using ClawGuard Shield's 42+ detection patterns to identify prompt injection attacks, jailbreak attempts, data exfiltration, social engineering, and other AI security threats.

Returns a scan result with:

  • is_clean: whether the text is safe

  • risk_score: threat level from 0 (safe) to 10 (critical)

  • severity: NONE, LOW, MEDIUM, HIGH, or CRITICAL

  • findings: list of detected threats with pattern names and descriptions

  • scan_id: unique identifier for this scan

Args: text: The text to scan for security threats. source: Optional source identifier for tracking (default: "mcp").

Returns: Scan result with clean/dirty status, risk score, and findings.

Input Schema

TableJSON Schema
NameRequiredDescriptionDefault
textYes
sourceNomcp

Output Schema

TableJSON Schema
NameRequiredDescriptionDefault

No arguments

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool does (analyzes text using 42+ detection patterns), what threats it detects, and the structure of the return value. It doesn't mention rate limits, authentication requirements, or performance characteristics, but provides substantial operational context beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and economical. It opens with the core purpose, provides implementation details in the second paragraph, documents the return structure clearly, and ends with parameter explanations. Every sentence earns its place with no redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, no annotations, 0% schema description coverage, but with an output schema present, the description provides excellent completeness. It explains the tool's purpose, detection methodology, return value structure, and parameter semantics. The output schema handles return value documentation, allowing the description to focus on operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate for the schema's lack of parameter documentation. It clearly explains both parameters: 'text' as 'The text to scan for security threats' and 'source' as 'Optional source identifier for tracking (default: "mcp")'. This adds meaningful semantic context beyond the bare schema, though it doesn't elaborate on source format constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Scan text for prompt injection and security threats'), the resource being acted upon ('text'), and distinguishes it from sibling tools like scan_batch (which handles batch processing) and get_patterns (which retrieves detection patterns). The verb 'scan' combined with the detailed threat detection scope makes the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: when scanning individual text inputs for security threats. It implicitly distinguishes from scan_batch (for batch processing) and get_patterns (for pattern retrieval), but doesn't explicitly state when NOT to use it or name alternatives. The guidance is sufficient but not exhaustive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Install Server

Other Tools

Latest Blog Posts

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/joergmichno/clawguard-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server