Glama

Server Details

Email safety MCP server. Detects phishing, prompt injection, and CEO fraud for AI agents.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

11 tools
analyze_email_thread
Read-only, Idempotent

Analyze a full email conversation thread for escalating social engineering, scope creep, and manipulation patterns. $0.01/call for <=5 units (4000 tokens each); quote-first for larger threads. Via skyfire-api-key header (Skyfire Buyer API Key). By using this service you accept the Terms of Service. Advisory service only.

Parameters (JSON Schema)

Name | Required | Description
messages | Yes | Thread messages in chronological order (min 2)
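The schema above is small but strict: at least two messages, in chronological order. A sketch of what a client call might look like, assuming hypothetical per-message fields (`from`, `body`) and the standard MCP `tools/call` envelope:

```python
import json

# Hypothetical payload for analyze_email_thread. The per-message
# fields ("from", "body") are assumptions; the schema shown here
# only mandates >= 2 messages in chronological order.
arguments = {
    "messages": [
        {"from": "vendor@examp1e.com", "body": "Please review the attached invoice."},
        {"from": "vendor@examp1e.com", "body": "URGENT: wire payment today or service stops."},
    ]
}

# Wrap in a JSON-RPC 2.0 tools/call request, as an MCP client would.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "analyze_email_thread", "arguments": arguments},
}

assert len(arguments["messages"]) >= 2  # documented schema minimum
print(json.dumps(request, indent=2))
```

The two-message example also illustrates the escalation pattern the tool looks for: a routine request followed by urgency pressure.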
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: pricing details ($0.01/call), token limits (4000 tokens each), authentication requirement (skyfire-api-key header), and legal terms (Terms of Service acceptance). While annotations already indicate read-only, non-destructive, idempotent behavior, the description provides practical implementation details that help the agent use the tool correctly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences that each serve distinct purposes: stating the analytical purpose, providing pricing/technical details, and specifying authentication/legal requirements. While slightly dense, every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only analysis tool with comprehensive annotations and full schema coverage, the description provides good contextual completeness. It covers purpose, practical constraints (pricing, tokens), authentication, and legal terms. The main gap is the lack of output schema information, but the description compensates reasonably well given the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents the 'messages' parameter requirements. The description doesn't add any parameter-specific information beyond what's in the schema, but the baseline score of 3 is appropriate since the schema does the heavy lifting for parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('analyze') and resources ('email conversation thread'), specifying the exact analytical focus ('escalating social engineering, scope creep, and manipulation patterns'). It distinguishes itself from sibling tools like 'check_email_safety' by focusing on thread-level behavioral patterns rather than individual message safety checks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('analyze a full email conversation thread'), but doesn't explicitly state when not to use it or name specific alternatives among the sibling tools. The pricing and token limit information implies usage for thread analysis, but lacks direct comparison guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

assess_message
Read-only, Idempotent

FREE triage tool — send whatever context you have (message content, sender info, URLs, attachments, draft replies, thread messages, image/video URLs) and get back a prioritized list of which security tools to run. No AI call, no charge, instant response. Always call this first to get the best security coverage.

Parameters (JSON Schema)

Name | Required | Description
body | No | Message body content
from | No | Sender email address or identifier
urls | No | URLs to check (alternative to links)
links | No | URLs found in the message
media | No | Media attachments for non-email platforms
sender | No | Sender identifier for non-email platforms
draftTo | No | Draft reply recipient
replyTo | No | Reply-To address if different from sender
subject | No | Message subject line
imageUrl | No | Direct image URL to check for AI generation
messages | No | Array of thread messages (2+ for thread analysis)
platform | No | Message platform (sms, whatsapp, slack, discord, telegram, etc.) — omit for email
videoUrl | No | Direct video URL to check for AI generation
draftBody | No | Draft reply body — include to check for data leakage
imageUrls | No | Multiple image URLs to check
attachments | No | Attachment metadata
knownSender | No | Whether the email sender is known/trusted
contactKnown | No | Whether sender is a known contact
draftSubject | No | Draft reply subject
senderVerified | No | Whether platform has verified the sender
senderDisplayName | No | Sender display name for reputation check
previousCorrespondence | No | Whether there has been prior correspondence
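Since every parameter is optional, a triage call can be as thin or as rich as the available context. A sketch with invented example values:

```python
# Hypothetical arguments for the free assess_message triage call.
# Every parameter is optional; send whatever context is on hand and
# the tool returns a prioritized list of security tools to run.
# All concrete values below are invented examples.
arguments = {
    "from": "ceo@examp1e-corp.com",
    "subject": "Quick favor before the board call",
    "body": "Can you buy five gift cards and text me the codes?",
    "urls": ["https://examp1e-corp.com/login"],
    "knownSender": False,
}

# The 22 documented parameter names, mirroring the table above.
allowed = {
    "body", "from", "urls", "links", "media", "sender", "draftTo",
    "replyTo", "subject", "imageUrl", "messages", "platform",
    "videoUrl", "draftBody", "imageUrls", "attachments", "knownSender",
    "contactKnown", "draftSubject", "senderVerified",
    "senderDisplayName", "previousCorrespondence",
}
assert set(arguments) <= allowed  # any subset is schema-valid
```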
Behavior: 4/5

The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, non-destructive, idempotent operations, the description specifies that this is a 'FREE triage tool' with 'instant response' and 'no AI call, no charge'. It also clarifies the output format ('prioritized list of which security tools to run'), which isn't covered by annotations. No contradiction with annotations exists.

Conciseness: 5/5

The description is perfectly concise and front-loaded. The first sentence establishes the core purpose and scope, the second explains benefits and constraints, and the third provides critical usage guidance. Every sentence earns its place with no wasted words, making it easy for an agent to quickly understand the tool's role.

Completeness: 4/5

Given the tool's complexity (22 parameters, no output schema) and rich annotations, the description provides excellent contextual completeness. It explains the tool's role in the workflow, output format, and operational characteristics. The main gap is the lack of explicit output schema documentation, but the description does specify what the tool returns ('prioritized list of which security tools to run'), which partially compensates.

Parameters: 3/5

With 100% schema description coverage, the input schema already documents all 22 parameters thoroughly. The description provides high-level context about what can be sent ('message content, sender info, URLs, attachments, draft replies, thread messages, image/video URLs') but doesn't add specific parameter semantics beyond what's already in the schema descriptions. This meets the baseline expectation for high schema coverage.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('triage', 'send', 'get back') and resources ('context', 'security tools'). It explicitly distinguishes this as a 'first' step tool that provides a 'prioritized list of which security tools to run', differentiating it from the many specific security checking sibling tools.

Usage Guidelines: 5/5

The description provides explicit usage guidance: 'Always call this first to get the best security coverage' establishes clear precedence over alternatives. It also specifies when to use it ('send whatever context you have') and mentions cost/performance benefits ('No AI call, no charge, instant response') that help the agent decide when this tool is appropriate.

check_attachment_safety
Read-only, Idempotent

Assess email attachments for malware risk based on filename, MIME type, and size BEFORE opening/downloading. $0.01/call via skyfire-api-key header (Skyfire Buyer API Key). By using this service you accept the Terms of Service. Advisory service only.

Parameters (JSON Schema)

Name | Required | Description
attachments | Yes | Attachment metadata to analyze (max 20)
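As a sketch, the attachment metadata might look like this; the field names (`filename`, `mimeType`, `sizeBytes`) are assumptions, since the nested schema isn't shown on this page:

```python
# Hypothetical metadata for check_attachment_safety. The per-item
# field names are assumed; the description says the assessment uses
# filename, MIME type, and size.
attachments = [
    {"filename": "invoice.pdf.exe", "mimeType": "application/x-msdownload", "sizeBytes": 245760},
    {"filename": "report.pdf", "mimeType": "application/pdf", "sizeBytes": 102400},
]

assert 1 <= len(attachments) <= 20  # documented limit
arguments = {"attachments": attachments}
```

The first entry shows the classic double-extension trick this kind of pre-download check is meant to flag.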
Behavior: 4/5

The description adds valuable behavioral context beyond annotations: it discloses the cost ('$0.01/call'), authentication requirement ('via skyfire-api-key header'), legal terms ('accept the Terms of Service'), and advisory nature. Annotations already indicate read-only, open-world, idempotent, and non-destructive characteristics, but the description provides practical implementation details that aren't captured in annotations.

Conciseness: 5/5

The description is efficiently structured in three sentences that each serve distinct purposes: the core functionality, implementation details/cost, and legal/advisory context. There's no wasted language, and the most critical information (what the tool does) appears first.

Completeness: 4/5

For a tool with comprehensive annotations and full schema coverage, the description provides excellent contextual completeness regarding cost, authentication, and advisory nature. The only gap is the lack of output schema, but the description adequately explains the tool's purpose and constraints without needing to detail return values.

Parameters: 3/5

With 100% schema description coverage, the input schema already fully documents the single 'attachments' parameter and its nested properties. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation but doesn't provide additional semantic context about how parameters affect the assessment.

Purpose: 5/5

The description clearly states the specific action ('Assess email attachments for malware risk') and the resource ('email attachments'), with explicit scope ('based on filename, MIME type, and size BEFORE opening/downloading'). It distinguishes itself from siblings like 'check_email_safety' or 'check_message_safety' by focusing specifically on attachments rather than broader email/message analysis.

Usage Guidelines: 5/5

The description provides explicit usage guidance: 'BEFORE opening/downloading' indicates when to use it, and the advisory-only nature ('Advisory service only') clarifies its limitations. While it doesn't name specific alternative tools, it clearly defines the tool's scope (attachment safety assessment), which implicitly distinguishes it from sibling tools that analyze emails, messages, URLs, or other content types.

check_email_safety
Read-only, Idempotent

Analyze an email for phishing, social engineering, prompt injection, and other threats targeting AI agents. Returns verdict, risk score, threats, and recommended actions. $0.01/call via skyfire-api-key header (Skyfire Buyer API Key). By using this service you accept the Terms of Service. Advisory service only.

Parameters (JSON Schema)

Name | Required | Description
body | Yes | Email body content
from | Yes | Sender email address
links | No | URLs found in the email
subject | Yes | Email subject line
attachments | No | Attachment metadata
knownSender | No | Whether the sender is known/trusted
previousCorrespondence | No | Whether there has been previous email exchange
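A minimal sketch of a call with invented values; only `body`, `from`, and `subject` are required:

```python
# Hypothetical check_email_safety arguments. The lookalike-domain
# address and reset link are invented example values.
arguments = {
    "from": "it-support@examp1e.com",
    "subject": "Password expires in 1 hour",
    "body": "Click the link below to keep your account active.",
    "links": ["http://examp1e.com/reset"],
    "knownSender": False,
    "previousCorrespondence": False,
}

required = {"body", "from", "subject"}
assert required <= arguments.keys()  # all required fields present
```

Per the description, the response carries a verdict, a risk score, the threats found, and recommended actions.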
Behavior: 4/5

Annotations indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond annotations: it discloses cost ('$0.01/call via skyfire-api-key header'), service terms ('By using this service you accept the Terms of Service'), and advisory nature ('Advisory service only'), which are not covered by annotations. No contradiction with annotations.

Conciseness: 5/5

The description is front-loaded with the core purpose, followed by return details, cost, and terms in efficient order. Every part adds value: threat types, output components, pricing, and legal context without redundancy. It's appropriately sized for the tool's complexity.

Completeness: 4/5

Given the tool's complexity (7 parameters, no output schema), the description is mostly complete. It explains the purpose, output (verdict, risk score, threats, recommended actions), and external dependencies (cost, API key, terms). However, it lacks details on error handling or response format, which could be useful since there's no output schema.

Parameters: 3/5

Schema description coverage is 100%, so the schema fully documents all 7 parameters. The description does not add any parameter-specific details beyond what the schema provides, such as explaining how 'knownSender' or 'previousCorrespondence' affect analysis. The baseline score of 3 is appropriate, as the schema handles parameter semantics.

Purpose: 5/5

The description clearly states the tool's purpose: 'Analyze an email for phishing, social engineering, prompt injection, and other threats targeting AI agents.' It specifies the verb ('analyze'), resource ('an email'), and scope of threats, distinguishing it from siblings like check_attachment_safety or check_url_safety by focusing on comprehensive email threat analysis.

Usage Guidelines: 4/5

The description provides clear context for usage: analyzing emails for threats targeting AI agents. It implies when to use it (for email safety checks) but does not explicitly state when not to use it or name alternatives among siblings, such as check_message_safety for non-email messages or check_sender_reputation for sender-specific checks.

check_media_authenticity
Read-only, Idempotent

Analyze an image or short video to assess whether it is AI-generated, deepfaked, or authentic. Uses multi-layer analysis including metadata forensics, error level analysis, ML-based AI detection, and noise pattern analysis. Returns a confidence-scored verdict with per-layer breakdown. $0.04/image (4 units x $0.01), $0.10/video (10 units x $0.01) via skyfire-api-key header. Results are best-guess estimates, not definitive. By using this service you accept the Terms of Service. Advisory service only.

Parameters (JSON Schema)

Name | Required | Description
mediaUrl | Yes | URL of the image or video to analyze
mediaType | No | Type of media (auto-detected if omitted)
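A sketch of the smallest valid call, plus the per-media cost arithmetic the description quotes; the URL is an invented example:

```python
# Hypothetical check_media_authenticity call. Only mediaUrl is
# required; mediaType is auto-detected when omitted.
arguments = {"mediaUrl": "https://examp1e.com/ceo-announcement.mp4"}

# Unit pricing quoted in the description: $0.01 per unit,
# 4 units per image and 10 per video.
UNIT_PRICE = 0.01
cost = {"image": 4 * UNIT_PRICE, "video": 10 * UNIT_PRICE}
assert round(cost["image"], 2) == 0.04
assert round(cost["video"], 2) == 0.10
```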
Behavior: 4/5

Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond this: it discloses cost details ($0.04/image, $0.10/video), authentication method (skyfire-api-key header), and limitations (results are best-guess estimates, advisory service only). This enhances transparency without contradicting annotations.

Conciseness: 4/5

The description is front-loaded with the core purpose, followed by technical and operational details. It is appropriately sized but includes some verbose elements (e.g., the pricing breakdown and terms-of-service notice) that could be streamlined without losing essential information.

Completeness: 4/5

Given the tool's complexity (media analysis with multiple techniques) and lack of output schema, the description provides good context: it explains the analysis methods, return format (confidence-scored verdict with per-layer breakdown), and limitations. However, it could better detail error handling or response structure for completeness.

Parameters: 3/5

Schema description coverage is 100%, providing clear documentation for both parameters (mediaUrl and mediaType). The description does not add significant meaning beyond the schema, as it only mentions analyzing 'an image or short video' without elaborating on parameter usage or constraints. The baseline score of 3 is appropriate given high schema coverage.

Purpose: 5/5

The description clearly states the tool's purpose: 'Analyze an image or short video to assess whether it is AI-generated, deepfaked, or authentic.' It specifies the verb ('analyze') and resource ('image or short video'), and distinguishes it from sibling tools (e.g., email or URL safety checks) by focusing on media authenticity analysis.

Usage Guidelines: 3/5

The description implies usage through context (e.g., analyzing media for authenticity) but lacks explicit guidance on when to use this tool versus alternatives. It mentions cost and advisory nature, which provides some operational context, but does not specify scenarios where other tools might be more appropriate.

check_message_safety
Read-only, Idempotent

Analyze non-email messages (SMS, WhatsApp, Instagram DMs, Discord, Slack, Telegram, LinkedIn, Facebook Messenger, iMessage, Signal) for platform-specific threats including smishing, wrong-number scams, OTP interception, impersonation, and crypto fraud. $0.01/call via skyfire-api-key header (Skyfire Buyer API Key). By using this service you accept the Terms of Service. Advisory service only.

Parameters (JSON Schema)

Name | Required | Description
media | No | Media attachments
sender | Yes | Sender identifier — phone number, username, handle, or display name
messages | Yes | Array of messages in chronological order (min 1, max 50)
platform | Yes | Message platform
contactKnown | No | Whether the sender is in the agent's/user's contacts
senderVerified | No | Whether the platform has verified the sender (blue checkmark, business account)
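A sketch of a wrong-number smishing check, with invented values and an assumed plain-string message shape, since the item schema isn't shown here:

```python
# Hypothetical check_message_safety call. sender, messages, and
# platform are required; the plain-string message shape is an
# assumption, and all concrete values are invented.
arguments = {
    "platform": "sms",
    "sender": "+1-202-555-0172",
    "messages": [
        "Hi! Sorry, wrong number. You seem nice though, do you trade crypto?",
    ],
    "contactKnown": False,
}

assert 1 <= len(arguments["messages"]) <= 50  # documented bounds
```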
Behavior: 4/5

The description adds valuable behavioral context beyond annotations: it discloses pricing ('$0.01/call'), authentication requirements ('via skyfire-api-key header'), legal terms ('accept the Terms of Service'), and service limitations ('advisory service only'). While annotations cover safety (readOnly, non-destructive), the description provides practical implementation details and constraints that aren't captured in structured fields.

Conciseness: 4/5

The description is efficiently structured in three sentences: purpose statement, implementation details, and disclaimer. Each sentence serves a distinct function with minimal redundancy. While highly efficient, it could be slightly more front-loaded by moving the pricing/authentication details to a separate section, but overall it's well-organized with zero wasted text.

Completeness: 4/5

For a complex analysis tool with 6 parameters and no output schema, the description provides good contextual completeness. It covers purpose, scope, implementation requirements, and limitations. The main gap is the lack of information about return values or analysis format, which would be helpful since there's no output schema. However, the annotations provide safety context, and the description covers practical constraints well.

Parameters: 3/5

With 100% schema description coverage, the input schema already documents all 6 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema; it doesn't explain how parameters relate to threat analysis or provide usage examples. The baseline score of 3 reflects adequate but not enhanced parameter documentation.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('analyze') and resources ('non-email messages'), listing 10 specific platforms and 5 threat types. It explicitly distinguishes itself from email-focused siblings by stating 'non-email messages' and listing messaging platforms, differentiating it from tools like 'analyze_email_thread' and 'check_email_safety'.

Usage Guidelines: 4/5

The description provides clear context for when to use this tool ('non-email messages' across listed platforms) and mentions it's an 'advisory service only', indicating its limitations. However, it doesn't explicitly state when NOT to use it or directly compare it to sibling tools like 'assess_message' or 'check_sender_reputation', which might handle similar threat analysis.

check_prompt_injection_db
Read-only, Idempotent

FREE — Query a database of known prompt injection attacks observed in the wild on agent social networks. Returns recent injection patterns, payloads, and threat classifications to help agents recognize and avoid manipulation. No charge, no authentication required.

Parameters (JSON Schema)

Name | Required | Description
type | No | Filter by injection type
query | No | Search term or pattern to look for in injection payloads
timeframe | No | How far back to search (default: 30d)
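A sketch of a filtered query; the `type` value is invented (valid injection types aren't listed on this page), and the `timeframe` default comes from the parameter table above:

```python
# Hypothetical query against the free prompt-injection database.
# All three filters are optional; "tool-poisoning" is an invented
# example value for the type filter.
arguments = {
    "type": "tool-poisoning",
    "query": "ignore previous instructions",
}

# timeframe falls back to the documented default when omitted.
timeframe = arguments.get("timeframe", "30d")
assert timeframe == "30d"
```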
Behavior: 4/5

The description adds valuable behavioral context beyond annotations: it specifies 'No charge, no authentication required' (addressing cost and access), and mentions the database contains 'recent injection patterns, payloads, and threat classifications.' While annotations cover safety (readOnlyHint, non-destructive), the description enriches this with operational details about data recency and content types.

Conciseness: 5/5

The description is efficiently structured in two sentences: the first states the tool's purpose and output, and the second adds operational context (cost, authentication). Every sentence adds value without redundancy, making it front-loaded and appropriately sized for the tool's complexity.

Completeness: 4/5

Given the tool's moderate complexity, rich annotations (covering safety and idempotency), and full schema coverage, the description is largely complete. It lacks details on output format (no output schema provided) and potential limitations like rate limits, but adequately covers purpose, usage context, and behavioral traits for a read-only query tool.

Parameters: 3/5

With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description does not add specific parameter semantics beyond implying filtering capabilities ('Filter by injection type,' 'Search term or pattern'), which are already covered in the schema. The baseline score of 3 is appropriate, as the schema carries the burden.

Purpose: 5/5

The description clearly states the tool's purpose with specific verbs ('Query a database') and resources ('known prompt injection attacks'), distinguishing it from sibling tools focused on email, message, URL, or attachment safety. It explicitly identifies the domain of 'prompt injection attacks observed in the wild on agent social networks,' making its scope distinct.

Usage Guidelines: 4/5

The description provides clear context for when to use the tool ('to help agents recognize and avoid manipulation'), but does not explicitly state when not to use it or name alternatives among the sibling tools. It implies usage for threat intelligence purposes without direct comparisons to other safety-check tools.

check_response_safety
Read-only, Idempotent

Check a draft email reply BEFORE sending for data leakage, social engineering compliance, and unauthorized disclosure. $0.01/call via skyfire-api-key header (Skyfire Buyer API Key). By using this service you accept the Terms of Service. Advisory service only.

Parameters (JSON Schema)

Name             Required  Description
draftTo          Yes       Recipient email address
draftBody        Yes       Draft reply body
draftSubject     Yes       Draft reply subject
originalBody     No        Original email body for context
originalFrom     No        Original sender address
originalSubject  No        Original email subject
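As a sketch of how an agent might invoke this tool over the Streamable HTTP transport: the block below only builds the MCP `tools/call` request body and headers; the endpoint URL, Skyfire key, and email values are placeholders, not values from this page.

```python
import json

# Placeholder credential — a real Skyfire Buyer API Key is required per call.
SKYFIRE_API_KEY = "sk-buyer-placeholder"

# MCP JSON-RPC 2.0 request body for the tools/call method.
request_body = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_response_safety",
        "arguments": {
            "draftTo": "vendor@example.com",             # required
            "draftSubject": "Re: Updated bank details",  # required
            "draftBody": "Confirmed, I've updated the payment account.",  # required
            "originalFrom": "vendor@example.com",        # optional context
        },
    },
}

# Billing and auth ride on this header, per the tool description.
headers = {
    "Content-Type": "application/json",
    "skyfire-api-key": SKYFIRE_API_KEY,
}

payload = json.dumps(request_body)
```

Running the pre-send check on a draft that confirms a payment-detail change, as above, is exactly the "social engineering compliance" case the description targets.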
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it discloses the cost structure ('$0.01/call via skyfire-api-key header'), authentication requirements ('Skyfire Buyer API Key'), legal terms ('By using this service you accept the Terms of Service'), and service limitations ('Advisory service only'). While annotations cover safety aspects (readOnly, non-destructive), the description provides practical implementation details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and front-loaded: the first sentence establishes the core purpose, followed by essential operational details (cost, authentication, terms, limitation). Every sentence earns its place with no wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (safety checking service with cost and authentication) and rich annotations covering safety aspects, the description provides good contextual completeness. It explains the service nature, cost, and requirements. The main gap is the lack of output schema, so the description doesn't explain what the safety check results look like, but this is partially compensated by the clear purpose statement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all 6 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. The baseline score of 3 is appropriate when the schema does the heavy lifting for parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check a draft email reply BEFORE sending') and the purpose ('for data leakage, social engineering compliance, and unauthorized disclosure'). It distinguishes this tool from siblings by focusing on pre-send safety checking of draft replies rather than general email analysis or other safety checks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool ('BEFORE sending' a draft email reply) and mentions it's an 'Advisory service only.' However, it doesn't explicitly state when NOT to use it or name specific alternative tools from the sibling list for different use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_sender_reputation
Read-only, Idempotent

Verify sender identity and detect Business Email Compromise (BEC), spoofing, and impersonation. Includes live DNS DMARC and RDAP domain age checks at no extra cost. $0.01/call via skyfire-api-key header (Skyfire Buyer API Key). By using this service you accept the Terms of Service. Advisory service only.

Parameters (JSON Schema)

Name          Required  Description
email         Yes       Sender email address
replyTo       No        Reply-To address if different from sender
displayName   Yes       Sender display name
emailSnippet  No        First ~500 chars of email body for context
emailSubject  No        Subject of the email for context
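A minimal sketch of the call arguments, assuming a hypothetical suspicious email where the display name impersonates an executive and the Reply-To diverges from the visible sender (a classic BEC signal); all addresses below are invented:

```python
import json

# Hypothetical inputs: exec-sounding display name, lookalike sender domain,
# and a Reply-To pointing at an unrelated mailbox.
arguments = {
    "email": "jane.doe@examp1e-corp.com",    # required: sender address
    "displayName": "Jane Doe, CEO",          # required: sender display name
    "replyTo": "jane.ceo@freemail.example",  # optional: differs from sender
    "emailSubject": "Urgent wire transfer",  # optional context
    "emailSnippet": "Please process this payment today and keep it confidential.",
}

request_body = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "check_sender_reputation", "arguments": arguments},
}

encoded = json.dumps(request_body)
```

Passing the optional `emailSnippet` and `emailSubject` gives the service text context for the impersonation check alongside the live DMARC and domain-age lookups.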
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a safe, read-only, idempotent operation. The description adds valuable behavioral context beyond annotations: it discloses cost ($0.01/call), authentication method (skyfire-api-key header), live DNS/DMARC/RDAP checks, and legal terms acceptance. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with core functionality, followed by operational details. It's appropriately sized with no redundant sentences, though the legal/terms statement could be slightly more integrated. Each sentence adds distinct value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (security analysis with external API), rich annotations, and full schema coverage, the description is mostly complete. It covers purpose, cost, authentication, and checks performed. The main gap is lack of output format details (no output schema), but annotations provide safety context adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are fully documented in the schema. The description doesn't add any parameter-specific semantics beyond what the schema provides, but it does imply that email and displayName are core inputs for reputation checking, maintaining the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('verify', 'detect') and resources (sender identity, BEC, spoofing, impersonation). It distinguishes from siblings by focusing on sender reputation rather than general email/attachment/URL safety checks, making its scope explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (email security analysis) but doesn't explicitly state when to use this tool versus alternatives like 'check_email_safety' or 'assess_message'. It mentions being an 'advisory service only', which provides some boundary but lacks clear comparative guidance with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_url_safety
Read-only, Idempotent

Analyze one or more URLs for phishing, malware, redirects, and spoofing. Returns per-URL and overall verdicts. $0.01/call via skyfire-api-key header (Skyfire Buyer API Key). By using this service you accept the Terms of Service. Advisory service only.

Parameters (JSON Schema)

Name  Required  Description
urls  Yes       List of URLs to analyze (max 20)
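Since the schema caps each call at 20 URLs and every call costs $0.01, a client may want to validate the batch size before sending. A sketch (the URLs are invented examples, and only the request body is built here):

```python
import json

urls = [
    "https://login.examp1e-bank.com/verify",  # hypothetical lookalike domain
    "https://example.com/invoice.pdf",
]

# Schema constraint: between 1 and 20 URLs per call. Guard client-side
# before spending the $0.01 call fee on an invalid batch.
if not 1 <= len(urls) <= 20:
    raise ValueError("check_url_safety accepts between 1 and 20 URLs per call")

request_body = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "check_url_safety", "arguments": {"urls": urls}},
}

encoded = json.dumps(request_body)
```

Batching related links into one call keeps cost at one $0.01 unit and lets the service return both per-URL and overall verdicts together.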
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate it's read-only, non-destructive, idempotent, and open-world. The description adds valuable context beyond this: it discloses cost ('$0.01/call'), authentication requirement ('via skyfire-api-key header'), legal terms ('accept the Terms of Service'), and advisory nature, which are not covered by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: first states purpose and output, second covers cost and authentication, third notes legal and advisory aspects. It is front-loaded with core functionality, though the cost and legal details could be slightly condensed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, no output schema) and rich annotations, the description is fairly complete. It covers purpose, output, cost, authentication, and limitations, but could benefit from more detail on return values or error handling to fully compensate for the lack of output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the schema fully documenting the 'urls' parameter (array of strings, min 1, max 20). The description adds no additional parameter details beyond what the schema provides, so it meets the baseline for high schema coverage without compensating with extra semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Analyze one or more URLs') and resources ('URLs') for specific threats ('phishing, malware, redirects, and spoofing'), distinguishing it from sibling tools like check_email_safety or check_attachment_safety by focusing on URLs rather than other content types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for URL safety analysis but does not explicitly state when to use this tool versus alternatives like check_sender_reputation or check_message_safety. It mentions the service is 'Advisory service only,' which provides some context but lacks clear guidance on specific scenarios or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

submit_feedback
Read-only, Idempotent

FREE — Submit feedback about any Agent Safe tool you used. Helps us improve detection accuracy and tool quality. No charge, no authentication required.

Parameters (JSON Schema)

Name           Required  Description
rating         Yes       Your rating of the tool's output
checkId        No        The checkId returned by the tool you're rating (helps us link feedback to specific analyses)
comment        No        Optional details about your experience: what worked well, what could improve, or what was missed
toolName       No        Which tool you're giving feedback on (e.g. check_email_safety, check_url_safety)
agentPlatform  No        Your agent platform (e.g. claude, cursor, openai, custom); helps us optimize for your environment
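Unlike the paid checks, this call needs no Skyfire header. A sketch of the request body; note the page does not document the `rating` value format, so the `"up"` value below is a guess, and the comment text is invented:

```python
import json

request_body = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "submit_feedback",
        "arguments": {
            "rating": "up",                  # required; exact accepted values are not shown on this page
            "toolName": "check_url_safety",  # optional: which tool is being rated
            "comment": "Caught a lookalike domain my own heuristics missed.",
            "agentPlatform": "claude",       # optional
        },
    },
}

# Free and unauthenticated: no skyfire-api-key header required.
headers = {"Content-Type": "application/json"}

encoded = json.dumps(request_body)
```

Including the `checkId` returned by a prior paid call (omitted here) would let the service tie the feedback to a specific analysis.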
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide comprehensive behavioral hints (readOnly, openWorld, idempotent, non-destructive). The description adds valuable context beyond annotations: it clarifies this is a 'FREE' service with 'no charge, no authentication required,' which provides practical implementation guidance not captured in the structured annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with three tightly focused sentences. Each sentence earns its place: the first states the core action, the second explains the purpose, and the third provides critical implementation details. No wasted words, and the most important information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the comprehensive annotations (covering safety, idempotency, and world model) and 100% schema description coverage, the description provides exactly what's needed. It explains the tool's purpose, usage context, and practical constraints without duplicating information already available in structured fields. For a feedback submission tool with no output schema needed, this is complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all 5 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, but this is acceptable given the comprehensive schema documentation. The baseline score of 3 reflects adequate coverage through the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Submit feedback') and resource ('about any Agent Safe tool you used'), with explicit purpose ('Helps us improve detection accuracy and tool quality'). It distinguishes this tool from all sibling tools, which are safety-checking tools rather than feedback submission tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Submit feedback about any Agent Safe tool you used.' It also specifies important usage conditions: 'No charge, no authentication required,' which tells the agent when it's appropriate to invoke this tool without prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
