Server Details

Free IPv4 lookups against a distributed attacker-observation corpus.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: TunnelMind/scry-mcp
GitHub Stars: 0

Tool Descriptions: A

Average 4.7/5 across 12 of 12 tools scored. Lowest: 3.6/5.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct aspect of the threat intelligence corpus: ASN rollup, campaign detail, IP lookup (single/bulk), country stats, recent activity, aggregate stats, timeseries, tool details, and top-N rankings. There is no functional overlap.

Naming Consistency: 5/5

All tools follow a consistent 'scry_' prefix with a noun/adjective (asn, campaign, campaigns, check, check_bulk, country, recent, stats, timeseries, tool, tools, top). The pattern is uniform and predictable.

Tool Count: 5/5

12 tools is well-scoped for a threat intelligence data source, covering individual lookups, bulk operations, aggregations, time series, and top-N queries without being excessive or sparse.

Completeness: 5/5

The tool surface covers the core domain completely: IP enrichment, bulk lookup, ASN/country rollups, campaigns, tools, recent activity, statistics, and timeseries. Noted exclusions (raw payloads, actor identities) are intentional and documented.

Available Tools

12 tools
scry_asn: A

Roll-up of corpus activity for a single ASN — observation count, distinct source IPs, actor count, scanner count, high-confidence actor count, and per-protocol breakdown.

Parameters (JSON Schema):
  • asn (required)
  • since_ms (optional)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations, so description carries full burden. Discloses default for since_ms and return fields, but does not mention error handling, rate limits, or behavior for missing ASNs. Adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with separate sections for overview, usage, inputs, and returns. Every sentence adds value, no fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description fully specifies the return object. Parameter details are sufficient. Could mention pagination or limits, but appropriate for a simple query tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, but description adds useful context: asn can have spaces (Cymru format) and since_ms defaults to 0 (all-time). This adds value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is a 'roll-up of corpus activity for a single ASN' and lists the returned fields. It differentiates from siblings like scry_country or scry_campaign by focusing on ASN-level data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly provides two use cases: contextualizing an IP's ASN and reputation-scoring. Lacks explicit exclusions or alternatives, but the context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scry_campaign: A

Single campaign detail by id (format: c[0-9a-f]{15}).
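Since invalid ids waste a call, a client can check the documented id pattern first. A minimal sketch, using the `c[0-9a-f]{15}` format from the description; the helper name is hypothetical:

```python
import re

# Campaign ids are "c" followed by 15 lowercase hex characters (16 chars
# total), per the c[0-9a-f]{15} format in the tool description.
CAMPAIGN_ID_RE = re.compile(r"c[0-9a-f]{15}")

def is_valid_campaign_id(campaign_id: str) -> bool:
    """Hypothetical client-side check before invoking scry_campaign."""
    return CAMPAIGN_ID_RE.fullmatch(campaign_id) is not None
```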

Parameters (JSON Schema):
  • id (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but description details input format, output fields, and return shape. It implies a read operation, though does not explicitly state read-only or rate limits. Still highly informative.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise yet thorough: first sentence states purpose, then lists return fields, usage guidance, and parameter details. No wasted words, well organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so description compensates by listing all return fields. Covers input, output, and usage. Complete for a single-object retrieval tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has one parameter with pattern; description adds a readable format and explains it's a 16-char campaign id. With 0% schema coverage, this fully describes the parameter meaningfully.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it returns full profile for a single campaign by ID. Differentiates from sibling scry_campaigns which likely lists campaigns, so it is specific and distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use cases: deep-dive after scry_campaigns, threat report, escalation context. While it doesn't state when not to use, the guidance is strong enough for correct selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scry_campaigns: A

Active threat campaigns — coordinated attacker activity that exceeds the noise floor. ≥5 distinct actors, ≥3 ASNs, ≤5 destination ports, ≥1h history.
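The noise-floor thresholds above can be expressed as a simple predicate. A sketch of applying them client-side (e.g. to sanity-check returned campaigns); the function name and argument shape are hypothetical:

```python
def exceeds_noise_floor(actors: int, asns: int,
                        dest_ports: int, history_hours: float) -> bool:
    """Apply the campaign thresholds from the description:
    >=5 distinct actors, >=3 ASNs, <=5 destination ports, >=1h history."""
    return (actors >= 5
            and asns >= 3
            and dest_ports <= 5
            and history_hours >= 1.0)
```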

Parameters (JSON Schema):
  • limit (optional)
  • include_inactive (optional)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, description carries full burden. It discloses that the endpoint never lists individual member actors and describes constraints on campaign identification. However, it lacks information on ordering, pagination, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with sections for definition, usage, inputs, returns. Slightly verbose but every sentence adds value. Good front-loading.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Provides campaign definition, example scenarios, return fields, and sibling differentiation. No output schema, but return fields are described. Complete for a list tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Description fully explains both parameters: include_inactive defaults to false, limit defaults to 50 with max 200. Schema coverage is 0%, so description compensates completely.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it lists active threat campaigns, defines what a campaign is, and gives examples. It distinguishes from sibling scry_campaign by noting that this endpoint returns summary data while scry_campaign provides full detail.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly lists three use cases: generating briefings, checking if activity is part of a larger operation, and SOC dashboard panels. Also directs to sibling tool for more detail when needed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scry_check: A

Returns Scry's corpus knowledge for a single IPv4 address: when it was first/last observed, observation count, protocols and ports targeted, ASN, country, category (actor/scanner/not_observed), and confidence_bucket (low/medium/high).

Use when an agent needs IP triage, hostility assessment, or risk signaling. Do NOT use for raw payloads (never exposed) or IPv6 (corpus is v4-only at v0.1).
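Because the corpus is v4-only, a client can reject IPv6 before calling. A minimal sketch using the standard `ipaddress` module; the helper name is hypothetical, and the returned dict matches the documented single `ip` parameter:

```python
import ipaddress

def build_scry_check_args(ip: str) -> dict:
    """Validate that `ip` is an IPv4 address and build the scry_check
    tool-call arguments. Raises ValueError for IPv6 (corpus is v4-only)
    or for strings that are not IP addresses at all."""
    addr = ipaddress.ip_address(ip)  # raises ValueError on garbage input
    if addr.version != 4:
        raise ValueError(f"scry_check is IPv4-only, got IPv6: {ip}")
    return {"ip": str(addr)}
```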

Parameters (JSON Schema):
  • ip (required): IPv4 address (e.g. '8.8.8.8')

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses behavioral traits: cost (free), rate limits (60 req/min, 10x burst), latency (<50ms), and special handling of reserved IPs (short-circuited to 'not_observed'). This provides a complete view of the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-organized with clear sections (returns, when to use, inputs, returns, cost, latency). Every sentence adds value, and the most important information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description lists all return fields and their shape. It covers rate limiting, latency, and edge cases (reserved IPs). The usage guidelines provide context relative to sibling tools, making the description complete for a single-param lookup tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a description for the ip parameter. The description adds value by explaining the reserved address short-circuit behavior, going beyond the schema's basic type and example.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns Scry corpus knowledge for a single IPv4 address, listing specific data points (first/last seen, observation count, protocols, ports, ASN, country). It distinguishes from sibling tools like scry_asn and scry_check_bulk by focusing on a single IPv4 address.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 'Use this tool when' and 'Do NOT use this tool when' sections provide clear guidance. It specifies appropriate uses (assessing hostile IP, investigating connections) and exclusions (actor profiles, raw payloads, IPv6), helping the agent choose the right tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scry_check_bulk: A

Look up many IPv4 addresses in one request. Up to 100 IPs per call. Same per-IP shape as scry_check, keyed by IP.
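For inputs larger than the 100-IP limit, the usage guidance suggests splitting into multiple calls. A sketch of the batching step (the helper name is hypothetical; only the 100-per-call limit comes from the description):

```python
def chunk_ips(ips: list, max_per_call: int = 100) -> list:
    """Split an IP list into batches of at most `max_per_call` items
    (the documented scry_check_bulk limit), preserving order."""
    return [ips[i:i + max_per_call] for i in range(0, len(ips), max_per_call)]
```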

Parameters (JSON Schema):
  • ips (required)

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, description carries full burden. Discloses return shape, error handling for invalid IPs, rate limit behavior (counts as one call), cost (free/anonymous), and typical latency (<300ms for 100 IPs).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections for use cases, inputs, returns, and additional notes. Every sentence adds value; no redundancy or unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with 1 parameter and no output schema, the description covers all relevant aspects: input specification, output description, error handling, limits, cost, latency, and usage guidance. Complete for agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, but description adds crucial meaning: specifies parameter is array of IPv4 strings, limits 1-100, describes return structure per IP. Fully compensates for lack of schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it looks up many IPv4 addresses in one request, specifies resource (IPv4 addresses) and action, and distinguishes from sibling scry_check by noting same per-IP shape.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly provides when and when not to use, including alternatives (scry_check for single IP, splitting for >100 IPs). Gives concrete use cases like triaging logs or enriching SIEM alerts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scry_country: A

Roll-up of corpus activity by ISO country code. Same shape as scry_asn.

Parameters (JSON Schema):
  • country (required)
  • since_ms (optional)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose behavioral traits such as destructiveness, authentication needs, or rate limits. It only notes the shape is similar to scry_asn, which is insufficient for a tool without annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with three sentences: purpose, shape reference, and parameter list. It is front-loaded but could be slightly more structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple 2-parameter tool with no output schema, the description explains inputs and basic purpose, but lacks detail on output contents or behavior, relying on the scry_asn sibling for shape reference.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates by specifying the ISO-3166-1 alpha-2 format for 'country' and the default value and meaning of 'since_ms' (all-time).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states "Roll-up of corpus activity by ISO country code" with a specific verb and resource, and distinguishes itself from sibling scry_asn by noting they share the same shape.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions it's "Useful for geo-scoped threat reporting," providing context, but lacks explicit guidance on when not to use it or alternatives beyond referencing scry_asn.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scry_recent: A

Recent observations feed — aggregated by source IP within a time window. Cursor-paginated via since_ms.
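The `since_ms` cursor suggests a standard drain loop: call, collect, and re-call with the returned cursor until it is empty. A sketch under assumptions: `next_cursor_since_ms` is the cursor field named in the review below, while the `items` result key and the `call_tool` callable are hypothetical stand-ins for an MCP client:

```python
def fetch_all_recent(call_tool, max_pages: int = 10) -> list:
    """Drain the scry_recent feed by following the since_ms cursor.
    `call_tool(name, args)` is any callable that invokes the MCP tool
    and returns the parsed response dict."""
    results, since_ms = [], None
    for _ in range(max_pages):  # hard cap so a bad cursor can't loop forever
        args = {} if since_ms is None else {"since_ms": since_ms}
        page = call_tool("scry_recent", args)
        results.extend(page.get("items", []))
        since_ms = page.get("next_cursor_since_ms")
        if not since_ms:  # no cursor means the feed is drained
            break
    return results
```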

Parameters (JSON Schema):
  • limit (optional)
  • country (optional)
  • protocol (optional)
  • since_ms (optional)
  • include_noise (optional)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses pagination via `since_ms` and `next_cursor_since_ms`, time window (default 1h, max 7d), and default values (limit 50, include_noise false). It does not explicitly state read-only/destructive hint, but it's implied. Good coverage, not perfect.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with bullet points and clear sections. Slightly verbose but each sentence adds value. Could be more concise in listing parameters, but overall effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers pagination, filters, defaults, and usage. Lacks description of the output/response format (e.g., fields returned), which is important given no output schema. Adequate but not fully complete for a tool without annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, but description adds meaning for all 5 parameters: explains `since_ms` as cursor, defaults and limits for `since_ms` and `limit`, country as ISO alpha-2, and `include_noise` default. `protocol` is only described as optional filter. Overall compensates well for missing schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides 'Recent observations feed aggregated by source IP within a time window' with cursor pagination and filtering by protocol/country. It differentiates from siblings like scry_check (specific IP) and scry_stats (totals).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 'Use this tool when' and 'Do NOT use' sections with specific scenarios and alternative tool names (scry_check, scry_stats, scry_timeseries). This is exemplary guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scry_stats: A

Returns aggregate Scry corpus telemetry: total observation count, distinct source IPs, first/last observation timestamps, last-24h activity, and per-protocol breakdowns. Useful as a liveness/density check before issuing per-IP queries — lets an agent decide whether the corpus has enough data to be authoritative.

Use this tool when:

  • An agent is planning a multi-step investigation and wants to know if Scry has corpus density worth querying.

  • You want a 'corpus health' signal in a dashboard or report.

Do NOT use this tool when:

  • You want details about a specific IP — use scry_check.

  • You want sensor fleet size or node identities — never exposed at any tier.

Inputs: none. Returns: total_observations, distinct_source_ips, first_seen_ms, last_seen_ms, observations_last_24h, distinct_source_ips_last_24h, by_protocol, as_of_ms. Cost: free, anonymous, rate-limited. Latency: <100ms typical.
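The liveness/density check described above can be a small gate over the documented return fields. A sketch; the field names (`last_seen_ms`, `observations_last_24h`) come from the returns list, but the thresholds and function name are illustrative, not from the server:

```python
def corpus_is_live(stats: dict, now_ms: int,
                   max_staleness_ms: int = 86_400_000,   # 24h, illustrative
                   min_observations_24h: int = 1_000) -> bool:  # illustrative
    """Decide whether the corpus is fresh and dense enough to be worth
    querying per-IP, using scry_stats return fields."""
    fresh = now_ms - stats["last_seen_ms"] <= max_staleness_ms
    dense = stats["observations_last_24h"] >= min_observations_24h
    return fresh and dense
```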

Parameters (JSON Schema): none

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description discloses cost (free, anonymous, rate-limited), latency (typical <100ms from 3 parallel D1 aggregate queries), and non-destructive nature. This fully informs the agent about behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with bullet points and sections, every sentence provides essential information with no redundancy. It is concise yet comprehensive.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description fully covers return fields, cost, latency, and use context. It is complete for a zero-parameter stats tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters with 100% schema coverage. The description adds value by listing the exact return fields, which compensates for the lack of output schema. Baseline 4 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool returns aggregate Scry corpus telemetry including specific metrics (total observations, distinct IPs, timestamps, last-24h activity, per-protocol). It clearly distinguishes from siblings by noting that for specific IP details one should use scry_check instead.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use scenarios (liveness/density check, multi-step investigation, dashboard/report) and when-not-to-use (specific IP queries, sensor fleet size), with clear alternatives named.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scry_timeseries: A

Bucketed observation counts over time. Detect bursts, plot trends, sanity-check whether attacker activity is rising or falling.
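Burst detection over the bucketed counts can be as simple as flagging buckets far above the series mean. A sketch; the thresholding scheme is an illustrative heuristic, not part of the server:

```python
def find_bursts(counts: list, factor: float = 3.0) -> list:
    """Return indices of buckets whose observation count exceeds
    `factor` times the series mean -- a simple burst heuristic over
    scry_timeseries output."""
    if not counts:
        return []
    mean = sum(counts) / len(counts)
    return [i for i, c in enumerate(counts) if c > factor * mean]
```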

Parameters (JSON Schema):
  • bucket (optional)
  • since_ms (optional)
  • until_ms (optional)

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Despite no annotations, the description fully discloses aggregation, constraints (max 30-day range, max 720 buckets), and the return format including bucket structure. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with bullet points and sections, concise yet complete. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and moderate complexity, the description thoroughly covers purpose, usage, parameter details, constraints, and return format. It is self-sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With schema description coverage at 0%, the description compensates fully by explaining each parameter's defaults, options, and constraints (e.g., bucket enum, since_ms/until_ms defaults).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides bucketed observation counts over a time range for detecting bursts and trends. It explicitly distinguishes from sibling tool scry_stats for totals, making its unique purpose unmistakable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit 'Use this tool when' and 'Do NOT use this tool when' sections, with clear contexts (e.g., checking 'is something happening right now?') and specific alternative tools (scry_stats for totals).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scry_tool: A

Single tool detail by 16-char hex id from scry_tools.

Parameters (JSON Schema):
  • id (required)

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but description fully discloses return fields including status 'found'/'not_found', and implies no side effects. Transparent about behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with bullet points for usage, inputs, and returns. Every sentence adds value; no waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a simple lookup tool: covers purpose, input format, return structure, and usage guidelines. No gaps given no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Adds critical context for the single parameter: '16-char hex tool id from scry_tools.' Compensates for 0% schema description coverage by explaining format and source.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it retrieves details for a single tool by ID. Distinguishes from sibling scry_tools (list) by noting 'Same fields as scry_tools list entries.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use scenarios: following up on a tool ID from scry_tools and verifying existence before referencing. Concise and actionable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scry_tools (A)

List detected attack tools — (protocol, payload, path) tuples sent by 3+ distinct source IPs. Aggregate metadata only; never lists member actors.

Parameters (JSON Schema):
- limit (optional)
- protocol (optional)
- since_ms (optional)
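The parameters above map onto a standard MCP tools/call request. The sketch below is an assumption about how an agent would invoke scry_tools; the parameter names come from the schema, but the argument values and the request id are illustrative, not taken from the server's documentation.

```python
import json

# Hypothetical MCP tools/call request for scry_tools.
# Parameter names (protocol, since_ms, limit) match the schema above;
# the values here are example assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "scry_tools",
        "arguments": {
            "protocol": "http",         # filter to one protocol (example value)
            "since_ms": 1700000000000,  # unix-ms lower bound on observations
            "limit": 20,                # page size (stated default in the description)
        },
    },
}
print(json.dumps(request, indent=2))
```

Any MCP client library would serialize an equivalent structure; the point is that all three parameters are optional and filter the aggregate tool list.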
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but description fully discloses return structure (id, protocol, actor_count, etc.), parameter defaults and constraints, and key behavior (never lists actors). No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections: definition, interpretation, use cases, inputs, return format, sibling reference, constraint. Each sentence adds value, no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no annotations or output schema, the description covers purpose, parameters, return format, use cases, and constraints comprehensively, which is adequate for an agent to select and invoke the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, but description compensates by defining each parameter: protocol with examples, since_ms meaning, limit with default and max. Also explains return fields, adding significant value beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Describes listing attack tools defined as (protocol, payload, path) tuples from 3+ distinct IPs, with clear interpretation of actor_count. Distinguishes from sibling scry_tool (detail by id) and other scry_ tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly lists three use cases (threat reports, shared infrastructure hunting, payload pivoting) and when not to use (actors not listed). References alternative scry_tool for detail.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scry_top (A)

Top-N source dimensions over a time window. Useful for situational awareness — 'where is the noise coming from right now?'

Parameters (JSON Schema):
- limit (optional)
- since_ms (optional)
- dimension (optional)
- include_noise (optional)
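A scry_top call and its result handling can be sketched the same way. The request shape follows the MCP tools/call convention; the argument values, and especially the {key, count} row shape used below, are assumptions for illustration, not a documented response schema.

```python
import json

# Hypothetical tools/call request for scry_top.
# Parameter names match the schema above; values are example assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "scry_top",
        "arguments": {
            "dimension": "asn",         # stated default per the description
            "since_ms": 1700000000000,  # unix-ms window start (example)
            "limit": 5,
            "include_noise": False,     # exclude known benign scanners
        },
    },
}
print(json.dumps(request, indent=2))

# Assumed response rows: one {key, count} entry per ranked dimension value.
rows = [{"key": "AS13335", "count": 120}, {"key": "AS14061", "count": 80}]
total = sum(r["count"] for r in rows)
for r in rows:
    # Share of total observed hits per ranked entry.
    print(f'{r["key"]}: {r["count"]} ({100 * r["count"] / total:.0f}%)')
```

Computing each entry's share of the total, as above, is a typical way an agent would turn a top-N response into a situational-awareness summary.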
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Despite no annotations, the description explains key behaviors: sorting order, the effect of include_noise on filtering scanners, and parameter defaults. It does not explicitly state its read-only nature, though the context implies it. Additional detail on rate limits or data freshness would improve the score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with bullet-pointed when-to-use/not-to-use lists and a clear parameter summary. Every sentence adds value; no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers usage, parameters, and return format adequately for a top-N query tool. It does not mention pagination or caching behavior, but given the absence of an output schema, the description is nearly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Description adds significant meaning beyond the schema: defaults (dimension='asn', since='last 24h', limit=20), units for since (unix ms), and the meaning of include_noise. This compensates fully for the 0% schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Top-N source dimensions over a time window' and provides specific use cases. It distinguishes itself from two siblings (scry_check, scry_timeseries) but not from others like scry_asn or scry_country, which may overlap.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly lists when to use (briefing, situational awareness) and when not to use (specific IP, time-series), naming alternatives scry_check and scry_timeseries. This gives an agent clear decision criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
