ShieldAPI MCP
Server Details
Security tools for AI agents: URL safety scanning, prompt injection detection (200+ patterns), email/password breach checks via HIBP, domain & IP reputation analysis, and AI skill supply chain scanning. Free tier (3 calls/day) or pay-per-request with USDC micropayments via x402.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 9 of 9 tools scored.
Each tool has a clearly distinct purpose targeting specific security checks: domain, email, IP, password, password range, prompt injection, URL, full scan, and skill scanning. There is no overlap in functionality, making it easy for an agent to select the correct tool based on the input type and desired check.
All tools follow a consistent verb_noun pattern with the prefix 'shieldapi.check_' or 'shieldapi.scan_' (e.g., check_domain, check_email, scan_skill). The naming is uniform and predictable, using snake_case throughout without any deviations or mixed conventions.
With 9 tools, the server is well-scoped for a security API, covering a broad range of checks (e.g., reputation, breach detection, threat analysis) without being overwhelming. Each tool serves a unique and necessary function, making the count appropriate for the domain.
The tool set provides comprehensive coverage for security checks, including domain, email, IP, password, URL, and specialized scans for prompts and AI skills. A minor gap is the lack of update or delete operations, but this is reasonable for a read-only security assessment API, and agents can work around this by using the appropriate check tools as needed.
Available Tools
9 tools

shieldapi.check_domain · A · Read-only · Idempotent
Check domain reputation: DNS records, blacklists (Spamhaus, SpamCop, SORBS), SPF/DMARC, SSL.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Domain name to check (e.g. example.com) | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish read-only, idempotent, non-destructive safety profile. Description adds valuable behavioral context by naming specific third-party services checked (Spamhaus, SpamCop, SORBS) and record types (SPF/DMARC), clarifying scope beyond generic 'reputation check'. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently structured with action prefix followed by colon-separated specificity. Every term (Spamhaus, SPF, SSL) adds distinct value about inspection scope. No redundancy or unnecessary verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter lookup tool with rich annotations (readOnly, idempotent, openWorld). Description compensates for missing output schema by detailing what aspects of the domain are inspected, though brief mention of return structure (e.g., 'returns reputation score and findings') would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with 'domain' parameter fully documented including example. Description does not add parameter-specific semantics beyond what the schema provides, meeting baseline expectations for well-documented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states specific action ('Check'), resource ('domain reputation'), and enumerates exact inspection points (DNS records, specific blacklists like Spamhaus/SpamCop/SORBS, SPF/DMARC, SSL). Clearly distinguishes from sibling tools like check_email or check_ip by focusing on domain-level infrastructure and email security validation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage guidance through specificity of listed checks (e.g., implies use for email security verification), but lacks explicit when-to-use guidance or contrasts with siblings like check_url or check_ip. No prerequisites or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shieldapi.check_email · A · Read-only · Idempotent
Check if an email address has been exposed in known data breaches via HIBP.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Email address to check | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, non-destructive, idempotent). Description adds valuable behavioral context beyond annotations by disclosing the external dependency 'via HIBP' (Have I Been Pwned service). Does not mention rate limits or response structure, but external service disclosure is meaningful.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence of 12 words with zero waste. Front-loaded with action verb 'Check'. Every token earns its place—no filler, no redundancy, no repetition of annotations or schema details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a single-parameter read-only tool with good annotations. Slight gap: no output schema exists and description doesn't hint at return format (boolean vs breach list vs risk score), though 'Check' implies a status result. Acceptable completeness given the tool's simplicity and clear safety annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with 'email' parameter fully described as 'Email address to check'. Description implies the email is the subject of breach checking but does not add syntax constraints, validation rules, or format details beyond the schema. Baseline 3 appropriate given complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Check' with clear resource 'email address' and scope 'exposed in known data breaches'. It distinguishes from siblings (check_domain, check_ip, etc.) by explicitly targeting email addresses and specifies the external source 'via HIBP'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage through specificity ('email address' vs siblings handling domains/IPs/passwords), but lacks explicit when-to-use guidance or named alternatives. No mention of when NOT to use this versus full_scan or other siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shieldapi.check_ip · A · Read-only · Idempotent
Check IP reputation: blacklists, Tor exit node detection, reverse DNS.
| Name | Required | Description | Default |
|---|---|---|---|
| ip | Yes | IPv4 address to check (e.g. 8.8.8.8) | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent/safe properties, while description adds valuable specifics about what 'reputation' entails (three distinct data sources: blacklists, Tor detection, reverse DNS). Does not mention rate limits or auth requirements, but adds meaningful behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence with colon structure separating general action from specific capabilities. Zero redundant words. Front-loaded with verb and resource. Every element earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool with strong safety annotations, description adequately covers functionality. Lacks explicit return value documentation (no output schema exists), but enumeration of check types provides implicit understanding of result content. Sufficient for tool selection and basic invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with complete description of 'ip' parameter ('IPv4 address to check'). Description mentions IP generally but does not augment parameter semantics beyond what schema already provides. Baseline score appropriate for full schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb ('Check') + resource ('IP reputation') and enumerates exact detection capabilities (blacklists, Tor exit nodes, reverse DNS). Clearly distinguishes from siblings like check_domain or check_email by specifying 'IP' as the target resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Lists specific detection types performed (blacklists, Tor, rDNS) which implies appropriate use cases, but lacks explicit guidance on when to use this targeted check versus siblings like full_scan or scan_skill. No prerequisites or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shieldapi.check_password · A · Read-only · Idempotent
Check if a password hash (SHA-1) has been exposed in known data breaches via HIBP.
| Name | Required | Description | Default |
|---|---|---|---|
| hash | Yes | SHA-1 hash of the password (40 hex chars) | — |
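The `hash` parameter is the full 40-character hex SHA-1 digest of the password, computed client-side. A minimal stdlib-only sketch of producing it (uppercase hex is the HIBP convention; whether this API also accepts lowercase is an assumption left untested here):

```python
import hashlib

def password_to_sha1(password: str) -> str:
    """Return the 40-char hex SHA-1 digest of a password, as check_password expects."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    assert len(digest) == 40  # SHA-1 is always 160 bits = 40 hex chars
    return digest

# Well-known test vector (never an actual password to use):
# password_to_sha1("password") -> "5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8"
```

Note that sending the full hash discloses the exact password fingerprint to the service; the sibling check_password_range tool avoids that via k-anonymity.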
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety (readOnly, non-destructive, idempotent) and external dependency (openWorldHint). The description adds valuable context by identifying 'HIBP' (Have I Been Pwned) as the external data source, clarifying the openWorldHint. However, it omits rate limits or privacy implications of sending full SHA-1 hashes externally.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence of 12 words that immediately states the action, input type, and data source. No redundancy or filler. Perfectly sized for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool with robust annotations, the description adequately explains the operation and data source. Minor gap: lacks description of return values (breach count vs boolean) given the absence of an output schema, though this is somewhat inferable from 'Check if... exposed'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('SHA-1 hash of the password (40 hex chars)'), the schema carries the semantic burden. The description mentions 'SHA-1' and 'hash' but adds no additional format guidance, constraints, or examples beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb 'Check', resource 'password hash (SHA-1)', and scope 'exposed in known data breaches via HIBP'. It clearly distinguishes from siblings like check_domain, check_email, and check_ip by specifying the exact credential type and hashing algorithm.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the resource specificity implies usage (when you have a SHA-1 hash to check), it lacks explicit when-to-use guidance. Critically, it fails to distinguish from sibling 'check_password_range' or warn that this sends full hashes to an external API versus using the k-anonymity range method for privacy.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shieldapi.check_password_range · A · Read-only · Idempotent
Look up a SHA-1 hash prefix in the HIBP k-Anonymity database.
| Name | Required | Description | Default |
|---|---|---|---|
| prefix | Yes | First 5 characters of the SHA-1 password hash | — |
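Under the k-anonymity scheme, the client sends only the first 5 hex characters of the SHA-1 digest and matches the returned suffixes locally, so the full hash never leaves the machine. A sketch of the client-side half, assuming the range response uses the HIBP `SUFFIX:COUNT` line format:

```python
import hashlib

def hash_prefix_suffix(password: str) -> tuple[str, str]:
    # Split the SHA-1 digest into the 5-char prefix (sent) and 35-char suffix (kept local).
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, range_body: str) -> int:
    # range_body: newline-separated "SUFFIX:COUNT" lines for all hashes sharing the prefix.
    for line in range_body.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0  # suffix absent from the range -> not in the breach corpus
```

The privacy property comes from the server seeing only a 5-character prefix shared by many unrelated passwords; the comparison against the local 35-character suffix happens entirely on the client.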
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnly, idempotent, and openWorld traits. The description adds valuable context by identifying the specific external data source (HIBP - Have I Been Pwned) and the privacy protocol (k-Anonymity) being used, which explains why only a prefix is required.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with zero waste. It front-loads the action ('Look up') and immediately qualifies it with the specific technical context (SHA-1, HIBP, k-Anonymity), earning its place with high information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool with rich annotations and full schema coverage, the description is nearly complete. It identifies the external service and protocol. It could improve by briefly noting what data is returned (hash suffixes and counts), but this is not critical given the tool's specialized nature and clear scope.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the 'prefix' parameter as 'First 5 characters of the SHA-1 password hash'. The description mentions 'SHA-1 hash prefix' which aligns with the schema but adds no additional semantic details, syntax constraints, or format examples beyond the schema definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Look up') with a precise resource ('SHA-1 hash prefix') and scope ('HIBP k-Anonymity database'). It clearly distinguishes from the sibling 'check_password' tool by specifying the k-Anonymity range query method vs. a direct password check.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The reference to 'k-Anonymity' implicitly signals this is for privacy-preserving password checks where only a hash prefix is exposed, but there is no explicit 'when to use' guidance or direct comparison to the sibling 'check_password' tool for agents to know which to choose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shieldapi.check_prompt · A · Read-only · Idempotent
Detect prompt injection in text. Analyzes across 4 categories (direct injection, encoding tricks, exfiltration, indirect injection) with 200+ detection patterns. Designed for real-time inline usage before processing untrusted user input. Returns boolean verdict, confidence score (0-1), matched patterns with evidence, and decoded content if encoding obfuscation was detected. Response time <100ms p95.
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The text to analyze for prompt injection | — |
| context | No | Context hint for sensitivity: user-input (default), skill-prompt (higher tolerance), system-prompt (highest sensitivity) | user-input |
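The description enumerates the return fields (boolean verdict, 0-1 confidence, matched patterns with evidence, decoded content) but there is no output schema, so the exact field names below are assumptions for illustration. A sketch of inline gating on such a result:

```python
# Hypothetical result shape based on the fields the description enumerates;
# the key names ("injection_detected", etc.) are assumed, not documented.
result = {
    "injection_detected": True,   # boolean verdict
    "confidence": 0.97,           # score in [0, 1]
    "matched_patterns": [
        {"category": "direct-injection", "evidence": "Ignore all previous instructions"},
    ],
    "decoded_content": None,      # populated only when encoding obfuscation was found
}

def should_block(result: dict, threshold: float = 0.8) -> bool:
    # Inline gate before processing untrusted input, per the "real-time inline" guidance.
    return bool(result["injection_detected"]) and result["confidence"] >= threshold
```

The `context` hint tunes sensitivity server-side; an agent screening raw end-user text would leave it at the `user-input` default, while `system-prompt` applies the strictest checks.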
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations establish the safety profile (readOnly, idempotent, non-destructive). The description adds significant behavioral context beyond these hints: specific detection categories, performance SLA (<100ms p95), and detailed return value structure (boolean verdict, confidence score, matched patterns, decoded content) that compensates for the missing output schema. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four dense sentences cover: purpose, methodology, usage context, and output/performance characteristics. Every sentence earns its place with zero redundancy. Information is front-loaded with the core action, followed by increasingly specific technical details. No filler or tautology.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description comprehensively details the return structure (boolean, confidence, evidence, decoded content) and performance characteristics. Combined with complete annotations and parameter schema, this provides sufficient information for invocation and response handling without gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents both the 'prompt' and 'context' parameters. The description mentions 'text' and 'untrusted user input' which loosely map to the prompt parameter, but does not add syntax details, examples, or semantics beyond what the schema already provides. Baseline 3 is appropriate given the schema's completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb-resource pair ('Detect prompt injection in text') and immediately distinguishes from siblings (check_domain, check_email, etc.) by specifying the exact threat vector (prompt injection) and analysis methodology (4 categories, 200+ patterns). The scope is precisely defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit temporal and contextual guidance ('Designed for real-time inline usage before processing untrusted user input'), establishing when to invoke the tool. While it effectively implies distinction from batch-oriented siblings like full_scan via the 'real-time inline' qualifier, it does not explicitly name alternatives or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shieldapi.check_url · A · Read-only · Idempotent
Check a URL for malware, phishing, and other threats. Uses URLhaus + heuristic analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to check (e.g. https://example.com) | — |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety (readOnly, non-destructive, idempotent) and external dependencies (openWorld). The description adds valuable behavioral context by disclosing the analysis methodology ('URLhaus + heuristic analysis'), which helps the agent understand the data sources and reliability without contradicting the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence establishes function and threat coverage; second sentence discloses implementation details. Information is front-loaded and appropriately sized for a single-parameter lookup tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, 100% schema coverage, clear annotations), the description covers the essential functional context. The only gap is lack of return value description since no output schema exists, though this is partially mitigated by the clear threat-focused purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage and a single parameter, the schema adequately documents the 'url' parameter including an example format. The description references checking 'a URL' but does not add semantic detail beyond what the schema already provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Check' with clear resource 'URL' and threat scope 'malware, phishing, and other threats.' The mention of 'URLhaus' specifically distinguishes this from sibling check_domain/check_ip tools by indicating URL-specific analysis methodology.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the threat types checked are listed (malware, phishing), there is no explicit guidance on when to use this versus check_domain (for host-level only) or full_scan (for comprehensive analysis). The usage is implied by the URL-specific focus but lacks explicit when/when-not direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shieldapi.full_scan · A · Read-only · Idempotent
Run all security checks on a target (URL, domain, IP, or email). Most comprehensive scan.
| Name | Required | Description | Default |
|---|---|---|---|
| target | Yes | Target to scan — URL, domain, IP address, or email | — |
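full_scan accepts any of four target forms in a single parameter. A hypothetical client-side helper (the function name and regex are my own, not part of the API) illustrating how the accepted types differ:

```python
import re

def classify_target(target: str) -> str:
    """Guess which of full_scan's accepted input types a string looks like."""
    if target.startswith(("http://", "https://")):
        return "url"
    if "@" in target:
        return "email"
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", target):
        return "ip"  # naive IPv4 shape check; does not validate octet ranges
    return "domain"
```

Since the tool infers the type server-side, this kind of classification is only needed if the agent wants to fall back to a narrower sibling (check_url, check_ip, etc.) for a quicker single check.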
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true, covering safety and repeatability. The description adds value by specifying this performs 'security' checks specifically and notes the comprehensive scope, but omits behavioral context like scan duration, rate limits, or whether it operates asynchronously given the 'full' nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first sentence front-loads the action and input specification. The second sentence efficiently positions the tool against alternatives. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple single-parameter schema, rich annotations covering safety/idempotency, and lack of output schema, the description adequately covers the essential selection criteria. Minor gap: given 'full scan' implies potentially long-running operation, mentioning duration or async behavior would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'target' parameter, the schema fully documents accepted inputs. The description echoes these types (URL, domain, IP, email) but does not add semantic depth regarding format requirements, validation rules, or examples beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb phrase 'Run all security checks' and identifies the resource as a 'target' with explicit accepted types (URL, domain, IP, email). The phrase 'Most comprehensive scan' clearly distinguishes it from sibling tools like check_domain or check_email which perform narrower checks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description positions the tool effectively against siblings by claiming it is the 'most comprehensive scan' and runs 'all' checks, implying it should be used when broad coverage is needed versus specific checks. However, it does not explicitly state when NOT to use it (e.g., for quick single checks) or name specific alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shieldapi.scan_skill · A · Read-only · Idempotent
Scan an AI agent skill/plugin for security issues across 8 risk categories (Snyk ToxicSkills taxonomy). Checks for prompt injection, malicious code, suspicious downloads, credential handling, secret detection, third-party content, unverifiable dependencies, and financial access patterns. Static analysis only — no code execution. Returns risk score (0-100), severity-ranked findings with file locations, and human-readable summary.
| Name | Required | Description | Default |
|---|---|---|---|
| files | No | Additional code files to analyze (max 20 files) | — |
| skill | No | Raw SKILL.md content or skill name from ClawHub | — |
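Both parameters are optional, though presumably at least one must be supplied for the scan to have input, and `files` is capped at 20 entries. A sketch of an arguments payload (parameter names from the table above; the contents are illustrative):

```python
# Illustrative arguments for shieldapi.scan_skill: inline SKILL.md content
# plus one extra code file that would likely trip the credential-handling check.
arguments = {
    "skill": (
        "---\n"
        "name: demo-skill\n"
        "description: Illustrative SKILL.md content\n"
        "---\n"
        "Fetch the user's calendar when asked.\n"
    ),
    "files": [
        "# helper.py\nimport os\nprint(os.environ.get('API_KEY'))",
    ],
}

assert arguments["skill"] or arguments["files"]  # at least one input (assumed requirement)
assert len(arguments["files"]) <= 20             # documented cap
```

Because the tool performs static analysis only, nothing in `files` is executed; findings come back severity-ranked with file locations and a 0-100 risk score.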
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent status, but the description adds critical behavioral context: 'Static analysis only — no code execution' clarifies safety constraints, and it details the return structure (risk score 0-100, severity-ranked findings, file locations, human-readable summary) which compensates for the missing output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: (1) purpose and taxonomy, (2) enumerated checks, (3) safety constraint, (4) return values. Information is front-loaded and every sentence earns its place. Appropriate length for the complexity of an 8-category security scanner.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description thoroughly documents return values. It explains the risk scoring methodology, finding structure, and summary format. Combined with complete parameter schema coverage and behavioral annotations, the description provides everything needed for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents both parameters (skill, files). The description references 'skill' and 'file locations', implicitly aligning with the parameters, but does not add semantic detail beyond the schema. A baseline score of 3 is appropriate given that the schema carries the full documentation burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states the tool 'Scan[s] an AI agent skill/plugin for security issues across 8 risk categories' — specific verb, specific resource, and scope. It clearly distinguishes from siblings like check_domain or check_email by targeting 'AI agent skill/plugin' specifically and referencing the Snyk ToxicSkills taxonomy.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it lacks explicit 'when not to use' language, the description provides clear contextual boundaries by specifying the unique resource type (skills/plugins vs domains/IPs/passwords) and enumerating the 8 specific risk categories checked. This specificity makes the appropriate usage context unambiguous compared to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
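Before publishing, it can help to sanity-check the claim file locally. The sketch below validates a glama.json document against the two requirements stated above: the `$schema` value and a maintainer email matching your Glama account. The field names follow the example shown; any stricter rules enforced by Glama's verifier are assumptions not covered here.

```python
# Sketch: local sanity check for a /.well-known/glama.json claim file.
# Checks only the two requirements described in the instructions above.
import json

EXPECTED_SCHEMA = "https://glama.ai/mcp/schemas/connector.json"

def validate_glama_json(raw, account_email):
    """Return True if the claim file parses and lists the account email."""
    doc = json.loads(raw)
    if doc.get("$schema") != EXPECTED_SCHEMA:
        return False
    emails = [m.get("email") for m in doc.get("maintainers", [])]
    return account_email in emails

raw = json.dumps({
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
})
print(validate_glama_json(raw, "your-email@example.com"))  # True
```

Serving this file at `https://<your-domain>/.well-known/glama.json` with a matching email is what allows Glama to detect and verify the claim automatically.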
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.