Glama

Server Details

AI marketing: SEO articles, trend scouting, competitor analysis, social media, lead magnets

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Citedy/citedy-seo-agent
GitHub Stars: 4

Server Listing: SEO Content Autopilot by Citedy

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

58 tools
adapt.generate — Grade: C

Generate social adaptations for an article.

Parameters (JSON Schema)

Name | Required | Description | Default
platforms | Yes | Target social platforms (1-3) |
article_id | Yes | UUID of the article to adapt |
include_ref_link | Yes | Include reference link to original article |

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover basic safety (not read-only, not destructive), but description adds no context about persistence (saved to DB vs ephemeral return), idempotency behavior, or return format—critical gaps for a mutation tool with no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence front-loaded with verb. No redundancy, though extreme brevity verges on under-specification rather than efficient conciseness given tool complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Incomplete given mutation nature and lack of output schema. Fails to describe what is returned (text content? draft IDs? metadata?) or success side effects, leaving agent without crucial invocation context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for all three parameters. Description provides baseline context ('for an article') but adds no additional semantics like platform constraints or article status requirements beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Generate' and target resource 'social adaptations' (social media post variations) with input 'article'. Sibling context (social.*, article.*) makes scope inferable, though 'adaptations' is slightly idiomatic and could explicitly clarify 'social media posts'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use versus 'article.generate' (creates original content) or 'social.publish' (publishes to platforms). No mention of prerequisite article states or that output is typically fed into publish workflows.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
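For concreteness, a client reaches a tool like adapt.generate through a JSON-RPC 2.0 `tools/call` request, per the MCP specification. A minimal sketch of building that envelope (the article UUID and platform values are placeholders, not values from this listing):

```python
import json

def make_tools_call(request_id: int, name: str, arguments: dict) -> dict:
    """Build an MCP tools/call request envelope (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# All three adapt.generate parameters are required by the schema above.
request = make_tools_call(1, "adapt.generate", {
    "platforms": ["linkedin", "x"],  # 1-3 target platforms (illustrative)
    "article_id": "00000000-0000-0000-0000-000000000000",  # placeholder UUID
    "include_ref_link": True,
})

payload = json.dumps(request)  # what actually travels over Streamable HTTP
```

Because the tool publishes no output schema, the shape of the response content is not knowable from the listing alone.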

agent.health — Grade: A
Read-only, Idempotent

Return infrastructure health checks for agent platform.

Parameters (JSON Schema): none

Behavior 3/5
Annotations cover the safety profile (readOnly, idempotent, non-destructive). Description adds domain context ('infrastructure', 'agent platform') but omits details about what specific health metrics are returned, caching behavior, or potential latency of the checks.

Conciseness 5/5
Single front-loaded sentence of seven words. No redundancy, filler, or schema repetition. Every word earns its place.

Completeness 4/5
Appropriate for a simple, parameter-less health check endpoint with full annotation coverage. While it adequately describes the return concept, it could minimally benefit from indicating the response structure (e.g., JSON with status indicators) given the absence of an output schema.

Parameters 4/5
Input schema has zero parameters; per the scoring rules, zero parameters establishes a baseline score of 4. The description appropriately makes no mention of parameters since none exist.

Purpose 5/5
Specific verb 'Return' + resource 'infrastructure health checks' + scope 'agent platform'. Clearly distinguishes from siblings 'agent.status' (operational status) and 'agent.me' (agent identity) by specifying the infrastructure focus.

Usage Guidelines 3/5
No explicit when/when-not guidance or named alternatives. However, the specific phrasing 'infrastructure health checks' provides clear implied usage context for when to select this over agent.status.

agent.me — Grade: A
Read-only, Idempotent

Return agent profile, balances and limits.

Parameters (JSON Schema): none

Behavior 3/5
Annotations declare readOnly/idempotent/destructive hints, so the description's burden is lower. It adds value by specifying the three data domains returned (profile, balances, limits) but omits rate limits, caching behavior, or authentication requirements.

Conciseness 5/5
Single sentence, zero redundancy. Front-loaded with immediate value. Appropriate length for a parameterless introspection endpoint.

Completeness 4/5
Given rich annotations (readOnly, idempotent) and zero parameters, the description adequately covers the tool's scope by naming the three returned data categories. Would benefit from output format hints absent an output schema, but sufficient for selection.

Parameters 4/5
Input schema has zero parameters; per the rubric, 0 params = baseline 4. The description appropriately implies no configuration is needed by omitting parameter references entirely.

Purpose 5/5
Specific verb 'Return' with clear resource enumeration (profile, balances, limits). Effectively distinguishes from siblings agent.health and agent.status, which imply operational metrics rather than identity/financial data.

Usage Guidelines 2/5
No explicit when-to-use guidance or differentiation from agent.health/agent.status. While the return values imply usage context, the description lacks explicit guidance on selecting this tool over operational-status alternatives.

agent.status — Grade: A
Read-only, Idempotent

Return actionable operational status snapshot (credits, socials, schedule, knowledge, content).

Parameters (JSON Schema): none

Behavior 4/5
Annotations already declare the readOnly/idempotent safety profile. The description adds value by specifying the five data categories returned in the snapshot, though it omits details like cache behavior or response format.

Conciseness 5/5
Single sentence, front-loaded with an action verb. The parenthetical list efficiently communicates five data domains with zero waste.

Completeness 4/5
Given zero parameters and rich annotations, the description is nearly complete. It compensates for the missing output schema by listing content categories, though it could clarify response structure/format.

Parameters 4/5
Input schema has zero parameters, triggering the baseline score of 4. The description appropriately omits parameter discussion as there are none to document.

Purpose 5/5
Uses the specific verb 'Return' with a clear resource ('operational status snapshot') and distinguishes from siblings agent.health (system health) and agent.me (profile) by listing the specific domains covered: credits, socials, schedule, knowledge, content.

Usage Guidelines 3/5
Provides implied usage through the parenthetical list of returned data categories, but lacks explicit when-to-use guidance or named alternatives (e.g., when to prefer agent.health vs agent.status).

article.delete — Grade: A

Permanently delete an article and its associated storage files.

Parameters (JSON Schema)

Name | Required | Description | Default
id | Yes | Unique identifier (UUID) |

Behavior 4/5
With no annotations provided, the description carries the full burden and successfully signals destructive behavior through 'permanently delete' and discloses cascade effects via 'associated storage files.' It lacks explicit warnings about irreversibility or permission requirements, preventing a perfect score.

Conciseness 5/5
The single sentence is perfectly efficient: 'permanently' establishes destructiveness, 'delete' states the action, 'article' identifies the resource, and 'associated storage files' clarifies scope. No words are wasted.

Completeness 4/5
For a single-parameter destructive operation without an output schema, the description adequately covers the core behavior and side effects. It could be improved by explicitly warning that deletion is irreversible or clarifying the relationship to unpublishing, but it meets the minimum viable threshold for this complexity level.

Parameters 3/5
The input schema has 100% description coverage, with the 'id' parameter fully documented as a UUID. The description adds no parameter-specific guidance, but the baseline score of 3 applies when schema coverage is complete and the description focuses on behavioral semantics rather than parameter syntax.

Purpose 5/5
The description uses the specific verb 'delete' with the resource 'article' and scope 'associated storage files.' The modifier 'permanently' effectively distinguishes this from the sibling tool 'article.unpublish,' signaling this is for irreversible removal versus temporary withdrawal.

Usage Guidelines 3/5
The word 'permanently' implies this should be used for final removal rather than temporary unpublishing, but the description does not explicitly name 'article.unpublish' as an alternative or specify prerequisites (e.g., whether the article must be unpublished first).

article.generate — Grade: A

Generate an SEO-optimized article. By default publishes immediately; set auto_publish=false to create as draft. May take 30-90 seconds.

Parameters (JSON Schema)

Name | Required | Description | Default
mode | Yes | Generation mode: standard or turbo | standard
size | Yes | Article size preset | standard
audio | Yes | Generate audio voice-over for the article |
topic | No | Article topic or title |
persona | No | Writing persona slug from available personas |
language | Yes | Article language code, e.g. en, es, de | en
source_urls | No | Source URLs to base the article on (1-3) |
auto_publish | No | Publish article immediately after generation. When false, article stays as draft (status=generated). Default uses tenant setting. |
enable_search | Yes | Enable web search (turbo mode only) |
illustrations | Yes | Generate illustrations for the article |
disable_competition | Yes | Skip competitor analysis step |
wait_for_completion | Yes | Wait synchronously for article completion before returning |

Behavior 4/5
Annotations indicate mutation (readOnlyHint=false). The description valuably adds timing expectations (30-90s) and explains the state outcome (draft vs published). It does not contradict the annotations and adds practical runtime context beyond the structured hints.

Conciseness 5/5
Three sentences with zero waste: purpose front-loaded, critical side effect (publishing) second, performance characteristic (timing) last. Every sentence earns its place.

Completeness 3/5
With 12 parameters and no output schema, a generation tool warrants more detail. While timing and publishing behavior are covered, the description omits what the tool returns (article ID, content object), failure/retry behavior, and prerequisite requirements (e.g., persona validation). Adequate but not complete for the complexity.

Parameters 3/5
Input schema has 100% coverage, establishing a baseline of 3. The description references auto_publish behavior explicitly, reinforcing the schema, but primarily restates existing schema descriptions rather than adding rich semantic context for the 12 parameters (e.g., implications of mode, size, or wait_for_completion).

Purpose 4/5
States the specific action (Generate) and resource (SEO-optimized article) clearly. The 'SEO-optimized' qualifier adds specificity, though it does not explicitly distinguish this from sibling tools like 'adapt.generate' or 'gaps.generate'.

Usage Guidelines 3/5
Provides clear usage guidance for the auto_publish parameter (when to set false for drafts) and warns about timing (30-90 seconds), implying when not to use it for instant needs. However, it lacks explicit comparison to sibling tools or alternatives like using article.publish separately.
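As an illustration of how these parameters combine for the draft-first workflow, here is a hypothetical `tools/call` argument set. Only the parameter names and the draft behavior (status=generated when auto_publish is false) come from the schema above; every value is made up for the example:

```python
# Illustrative argument set for article.generate; values are placeholders.
generate_args = {
    "mode": "standard",            # set "turbo" if you need enable_search
    "size": "standard",
    "audio": False,
    "language": "en",
    "topic": "Example topic",
    "auto_publish": False,         # keep as draft; publish later with article.publish
    "enable_search": False,        # turbo mode only
    "illustrations": True,
    "disable_competition": False,
    "wait_for_completion": False,  # return immediately; poll article.get for status
}

request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "article.generate", "arguments": generate_args},
}
```

Setting wait_for_completion to false matters at the 30-90 second generation times quoted above: clients that cannot hold a request open that long can fall back to polling.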

article.get — Grade: A

Poll a queued article job by id. Use the id returned by article.generate to get the current status or the final generated article result.

Parameters (JSON Schema)

Name | Required | Description | Default
id | Yes | Unique identifier (UUID) |

Behavior 3/5
No annotations provided, so the description carries the full burden. It adds valuable context that this involves async 'queued' jobs with state transitions ('current status or final result'). Missing disclosure of polling frequency recommendations, error states, or whether results are ephemeral.

Conciseness 5/5
Two well-crafted sentences. The first front-loads the action (Poll); the second immediately provides the usage dependency. Zero redundancy.

Completeness 4/5
Appropriate for a single-parameter polling tool with no output schema. Covers the essential relationship with article.generate and the async job lifecycle. Lacks only details on retry behavior and timeout expectations.

Parameters 4/5
Schema coverage is 100% with 'Unique identifier (UUID)'. The description adds critical semantic context: this specific ID comes from article.generate, establishing parameter provenance beyond the generic schema description.

Purpose 5/5
Specific verb 'Poll' + resource 'queued article job' + scope 'by id'. Clearly distinguishes from siblings like article.generate (creates), article.delete (removes), and article.list (enumerates) by focusing on status retrieval for async jobs.

Usage Guidelines 4/5
Explicitly states when to use: 'Use the id returned by article.generate'. Provides clear workflow context (polling pattern). Could be improved by mentioning when NOT to use it (e.g., not for non-job IDs) or alternatives for non-async retrieval.
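The polling pattern described above can be sketched as a small loop. This is a sketch only: article.get publishes no output schema, so the 'status' field name, its pending values, and the stubbed transport below are all assumptions to be checked against the server's real responses:

```python
import time

def call_tool(name: str, arguments: dict) -> dict:
    """Stand-in for a real MCP tools/call round trip, so this sketch runs
    offline. A real client would send the request over its session transport.
    Returns a fake completed job."""
    return {"id": arguments["id"], "status": "completed"}

def wait_for_article(job_id: str, interval: float = 5.0, timeout: float = 120.0) -> dict:
    """Poll article.get until the job leaves its assumed pending states."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = call_tool("article.get", {"id": job_id})
        if result.get("status") not in ("queued", "processing"):
            return result  # final result or a terminal error state
        time.sleep(interval)
    raise TimeoutError(f"article job {job_id} still pending after {timeout}s")

article = wait_for_article("00000000-0000-0000-0000-000000000000")  # placeholder id
```

The 120-second default timeout comfortably covers the 30-90 second generation window that article.generate's description advertises.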

article.list — Grade: B
Read-only, Idempotent

List previously generated articles for the current workspace.

Parameters (JSON Schema)

Name | Required | Description | Default
limit | No | Max articles to return, 1-100 |
offset | No | Pagination offset |
status | No | Filter by article status |

Behavior 3/5
Annotations already establish read-only, non-destructive, idempotent behavior. The description adds valuable scope constraints ('current workspace', 'previously generated') but omits behavioral details like pagination defaults, rate limits, or what constitutes a 'generated' article versus a draft.

Conciseness 4/5
Single efficient sentence with no filler. However, the extreme brevity misses the opportunity to front-load pagination behavior or the status-filtering capability, which would help agents understand the tool's retrieval pattern without reading the schema.

Completeness 3/5
Adequate for a simple list operation with good schema coverage and safety annotations, but gaps exist: no mention of the pagination pattern despite the limit/offset params, no valid status values enumerated, and no indication of what fields are returned (full articles vs metadata).

Parameters 3/5
With 100% schema description coverage, the schema fully documents all three parameters (limit, offset, status). The description mentions none of them, earning the baseline score of 3 per rubric guidelines for high-coverage schemas.

Purpose 4/5
The description provides a clear verb ('List'), specific resource ('previously generated articles'), and scope ('current workspace'). However, it fails to distinguish itself from the sibling tool 'article.get' (likely for single-article retrieval), which could cause selection ambiguity.

Usage Guidelines 2/5
No guidance on when to use this listing tool versus 'article.get' for single retrieval, or when to apply the 'status' filter versus retrieving all articles. No alternatives or exclusions mentioned.

article.publish — Grade: A

Publish a draft article. Use after generating with auto_publish=false to trigger the publish pipeline.

Parameters (JSON Schema)

Name | Required | Description | Default
id | Yes | Unique identifier (UUID) |

Behavior 3/5
No annotations provided, so the description carries the full burden. It adds valuable context that this 'trigger[s] the publish pipeline' (indicating async/multi-step behavior), but lacks the safety disclosures (state mutation, irreversibility, auth requirements) that annotations would typically cover.

Conciseness 5/5
Two sentences, zero waste. The first defines the action; the second provides the critical workflow context.

Completeness 4/5
Appropriately complete for a single-parameter state-change tool with no output schema. Covers the essential generate→publish workflow gap. Could improve by mentioning error conditions (e.g., already published) or the relationship to article.unpublish.

Parameters 3/5
Schema has 100% description coverage ('Unique identifier (UUID)'), establishing a baseline of 3. The description implies the ID refers to the draft article from the generation step but doesn't explicitly state 'the ID of the article to publish' or reference the parameter directly.

Purpose 4/5
Clear verb ('Publish') and resource ('draft article'). Implicitly distinguishes itself from article.generate by referencing the auto_publish=false workflow, though it doesn't explicitly contrast with sibling publishing tools (leadmagnet.publish, shorts.publish) or state what 'publish' means (make public/live).

Usage Guidelines 5/5
Excellent explicit guidance: 'Use after generating with auto_publish=false' clearly indicates when to use this tool versus the alternative workflow (auto_publish=true). Also clarifies the sequencing relationship with article.generate.
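The guidance above ('use after generating with auto_publish=false') amounts to a two-step workflow. The second step can be sketched as follows; the transport is stubbed so the snippet runs offline, and the draft id is a placeholder, since the listing does not document which response field of article.generate carries it:

```python
def call_tool(name: str, arguments: dict) -> dict:
    """Stub standing in for a real MCP tools/call request over the session."""
    return {"tool": name, "arguments": arguments, "ok": True}

# draft_id is whatever identifier article.generate returned for the draft
# (status=generated). The value below is a placeholder UUID.
draft_id = "00000000-0000-0000-0000-000000000000"
result = call_tool("article.publish", {"id": draft_id})
```

Keeping publish as a separate, explicit call gives an agent a review point between generation and going live, which is the point of the auto_publish=false path.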

article.unpublish — Grade: A

Unpublish an article (revert to draft status). The article remains accessible for editing but is removed from the public blog.

Parameters (JSON Schema)

Name | Required | Description | Default
id | Yes | Unique identifier (UUID) |

Behavior 4/5
No annotations provided, so the description carries the full burden. It successfully discloses the state transition (published→draft), the visibility impact (removed from public), and persistence (remains accessible). Lacks explicit mention of idempotency, what occurs if the article is already a draft, or the return value structure.

Conciseness 5/5
Two sentences with zero waste. The first front-loads the core action and state outcome; the second efficiently clarifies persistence and visibility scope. No redundant phrases or tangential information.

Completeness 4/5
Appropriately complete for a single-parameter mutation tool without an output schema. Covers the action, state change, and side effects (visibility/persistence). Minor gap regarding error states or behavior when targeting already-unpublished articles, but sufficient for tool selection and basic invocation.

Parameters 3/5
Schema coverage is 100%, with the 'id' parameter fully documented as 'Unique identifier (UUID)'. The description adds no parameter details, but this is acceptable since the schema completely covers the single required parameter. Baseline score applies.

Purpose 5/5
Specific verb 'Unpublish' with the resource 'article' clearly stated. The parenthetical '(revert to draft status)' clarifies the exact state change. The second sentence distinguishes this from 'article.delete' by emphasizing that the article persists and remains editable, and from 'article.publish' by noting removal from public view.

Usage Guidelines 4/5
Implies usage context through 'removed from the public blog' and 'remains accessible for editing,' allowing the inference that this preserves content, unlike deletion. However, it lacks explicit guidance like 'use this instead of article.delete when you want to retain the article for later editing' or mention of when to use it vs. article.publish.

brand.scan — Grade A

Run AI brand visibility scan across major LLM providers (async, poll with brand.scan.get).

Parameters (JSON Schema)
- brandName (required): Brand name to scan across LLM providers
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds critical behavioral context not in annotations: 'async' implies non-immediate execution and the polling workflow explains how to retrieve results. However, it omits mentioning that the operation is non-idempotent (per annotations) or what happens if the brandName is invalid.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, front-loaded sentence with zero waste. The parenthetical efficiently packs usage guidance without cluttering the main action description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers the async initiation pattern and references the polling tool. Lacks description of the return value (presumably a scan/job ID) which would be helpful given no output schema is present.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with 'brandName' fully documented. The description adds no explicit parameter guidance, so it meets the baseline for high-coverage schemas where the schema carries the semantic burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Run' and resource 'AI brand visibility scan' clearly define the action. The phrase 'across major LLM providers' defines the scope, and the explicit reference to sibling tool 'brand.scan.get' distinguishes this initiator from the result-fetching tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The parenthetical '(async, poll with brand.scan.get)' provides explicit workflow guidance, indicating this starts a long-running operation and directing the agent to the specific tool for retrieving results. This prevents misuse and clarifies the async pattern.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

brand.scan.get — Grade A (Read-only, Idempotent)

Get brand scan status and final result payload by scan ID.

Parameters (JSON Schema)
- id (required): Unique identifier (UUID)
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish safety (readOnly/idempotent), but the description adds valuable behavioral context: it reveals this is an async operation ('status') that yields a final payload upon completion, helping the agent understand the polling pattern without explicit output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with action and resource. No redundant words; 'by scan ID' placement efficiently conveys the lookup mechanism. Zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple retrieval tool with 1 parameter and strong annotations. Mentioning both 'status' and 'final result payload' partially compensates for missing output schema by clarifying what data categories are returned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with UUID pattern and description already present. The description maps the parameter concept ('scan ID') to the schema field 'id', but adds no additional formatting guidance or examples beyond what the schema provides (baseline 3).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Get'), resource ('brand scan status and final result payload'), and scope constraint ('by scan ID'). The phrase 'by scan ID' distinguishes this from sibling 'brand.scan' (likely the initiator) by implying this requires an existing identifier.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implicit usage context through 'by scan ID' (indicating you need an ID from a prior operation), but lacks explicit workflow guidance such as 'use this to poll for completion after initiating a scan' or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
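The start-then-poll workflow that brand.scan and brand.scan.get describe can be sketched as follows. This is a runnable illustration under stated assumptions, not the server's actual protocol: FakeMcpClient, the call_tool signature, and the response field names (id, status, result) are invented stand-ins for a real MCP client session.

```python
import time
import uuid

class FakeMcpClient:
    """Hypothetical stand-in for an MCP client session.

    Simulates brand.scan (non-idempotent initiator) and
    brand.scan.get (read-only status poll) so the pattern runs.
    """

    def __init__(self):
        self._jobs = {}

    def call_tool(self, name, args):
        if name == "brand.scan":
            scan_id = str(uuid.uuid4())
            # Simulate a scan that completes after two polls.
            self._jobs[scan_id] = {"polls_left": 2}
            return {"id": scan_id, "status": "running"}
        if name == "brand.scan.get":
            job = self._jobs[args["id"]]
            if job["polls_left"] > 0:
                job["polls_left"] -= 1
                return {"id": args["id"], "status": "running"}
            return {"id": args["id"], "status": "done",
                    "result": {"visibility": "high"}}
        raise ValueError(f"unknown tool: {name}")

def run_brand_scan(client, brand_name, poll_interval=0.01, max_polls=10):
    """Start a scan, then poll brand.scan.get until completion."""
    started = client.call_tool("brand.scan", {"brandName": brand_name})
    for _ in range(max_polls):
        status = client.call_tool("brand.scan.get", {"id": started["id"]})
        if status["status"] == "done":
            return status["result"]
        time.sleep(poll_interval)
    raise TimeoutError("scan did not finish in time")
```

A bounded max_polls plus a sleep interval keeps the loop from polling forever if a scan stalls, which matters since the initiator is non-idempotent and retrying it would start a second scan.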

competitors.discover — Grade C

Discover competitors by keyword set.

Parameters (JSON Schema)
- keywords (required): Keywords to discover competitors for (1-5)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations indicate openWorldHint=true and readOnlyHint=false (external calls with potential side effects), the description adds no behavioral context about what gets written, external data sources used, or latency expectations. The word 'Discover' implies search but doesn't clarify the side effects implied by readOnly=false.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient at 5 words with front-loaded verb. No redundant phrases. However, given the lack of output schema and sibling ambiguity, the brevity may underserve the user; it stops short of being 'complete' enough for a 5.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, yet the description fails to explain return values or result structure. Also omits differentiation from 'competitors.scout' and doesn't explain what writes/storage occur given readOnlyHint=false. Significant gaps for a tool with external dependencies.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline applies. The description mentions 'keyword set' which simply echoes the schema's 'keywords' parameter description without adding syntax details, format examples, or semantic constraints beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a specific verb (Discover) and resource (competitors) along with the input mechanism (keyword set). Clear and direct, though it fails to differentiate from the sibling tool 'competitors.scout' which likely overlaps in functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus 'competitors.scout' or other alternatives. No mention of prerequisites, rate limits, or workflow context. Single sentence contains only functional definition.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

competitors.scout — Grade C

Analyze a competitor domain.

Parameters (JSON Schema)
- mode (required): Analysis mode: fast or ultimate. Default: fast
- topic (optional): Focus topic for competitor analysis
- domain (required): Competitor domain URL to analyze
- language (required): Content language code, e.g. en, es, de. Default: en
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=false, openWorldHint=true, and idempotentHint=false, implying side effects and external network calls. The description mentions none of this, leaving agents unaware that results may be stored, costs incurred, or that retries are unsafe. With annotations present, it adds minimal behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at four words. While not verbose, it is insufficiently informative given the tool's behavioral complexity and side effects. Structure is front-loaded but lacks supporting detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 parameters, implied side effects (readOnly=false), external data access (openWorld=true), and no output schema, the description should explain what data sources are queried and what constitutes the analysis output. It provides none of this context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description only implicitly references the 'domain' parameter but does not add semantic meaning beyond the schema for 'mode', 'topic', or 'language' (e.g., explaining that 'ultimate' mode provides deeper analysis).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a clear verb ('Analyze') and resource ('competitor domain'), but lacks scope specificity. It fails to differentiate from the sibling 'competitors.discover' or explain what type of analysis is performed (e.g., SEO, content, technical).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus 'competitors.discover' or other scout tools (scout.reddit, scout.x). No mention of prerequisites or when the 'fast' vs 'ultimate' modes should be selected.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

gaps.generate — Grade C

Generate SEO/GEO content gap opportunities.

Parameters (JSON Schema)
- competitor_urls (required): Competitor website URLs to analyze (1-5)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With 'openWorldHint=true' and 'readOnlyHint=false' annotations indicating external data mutation, the description adds no behavioral context about what 'generate' entails—analyzing URLs, creating records, latency, or idempotency (confirmed false in annotations but unmentioned). It relies entirely on structured annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

At four words, it is terse to the point of underspecification. While not verbose, the complete absence of a title and the lack of a second sentence explaining output format or side effects leaves it structurally incomplete despite brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks explanation of what 'GEO' (Generative Engine Optimization) means, what constitutes an 'opportunity' object, or the analysis methodology. No output schema exists, yet the description fails to hint at return format (report vs. objects), leaving critical gaps for an AI agent attempting to handle the response.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% ('Competitor website URLs to analyze'), so baseline applies. The description provides no additional parameter semantics, examples, or validation rules beyond the schema's minItems/maxItems constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Generate') and resource ('SEO/GEO content gap opportunities'), clearly indicating the tool analyzes competitor content to identify gaps. However, it fails to explicitly distinguish from sibling tool 'gaps.list' (which likely retrieves existing gaps), leaving ambiguity about creation vs. retrieval workflows.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus siblings like 'gaps.list' or 'competitors.discover'. No prerequisites mentioned (e.g., needing valid competitor URLs) and no warnings about the 'openWorldHint' requirement for external data fetching.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

gaps.list — Grade B (Read-only, Idempotent)

List saved content gaps.

Parameters (JSON Schema)

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already establish this as a read-only, non-destructive, idempotent operation, which the description supports by emphasizing 'saved' content. However, the description adds no details about pagination behavior, response format limits, or what constitutes a 'content gap' in this domain, leaving behavioral gaps despite the safety annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The four-word description is appropriately front-loaded with no redundant or wasted language. Given the simplicity of a parameterless list operation, this length is ideal—it states the action and resource without unnecessary elaboration that would duplicate the annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While sufficient for a basic read-only list tool, the description lacks output format details that would be helpful given the absence of an output schema. It also omits domain context about what 'content gaps' represent and whether the returned list supports filtering or pagination.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters in the input schema, there are no semantic gaps to fill regarding arguments or their meanings. The baseline score applies as the schema provides complete coverage trivially, and no parameter explanation is required in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'List' with the resource 'saved content gaps', clearly indicating a retrieval operation. The inclusion of 'saved' implicitly distinguishes this from the sibling 'gaps.generate' tool, though it does not explicitly name the alternative or explain the differentiation in usage contexts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to select this tool versus alternatives such as 'gaps.generate' or 'schedule.gaps'. There are no stated prerequisites, conditions, or exclusion criteria to help an agent determine when listing is preferable to generating new gaps.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

gsc.report — Grade A

Get Google Search Console performance report: clicks, impressions, CTR, avg position, top queries, top pages, position movers, content opportunities, and article suggestions. Returns connect URL if GSC is not linked. Free (0 credits).

Parameters (JSON Schema)
- force_refresh (optional): Force a fresh GSC data fetch instead of returning cached morning report
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It successfully discloses the auth fallback behavior (returns connect URL when unlinked), cost structure (0 credits), and default caching behavior ('cached morning report' implies the non-forced fetch). Could be improved with error handling or data format details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Every sentence earns its place: first sentence defines scope and return values, second covers auth edge case, third covers cost. No redundant words or generic filler. Efficiently front-loaded with the most critical information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no output schema, the description comprehensively lists report contents (metrics, queries, pages, suggestions) and explains the dual return behavior (report vs. connect URL). Minor gap: doesn't specify response format or structure, though field names imply the content.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for the single 'force_refresh' parameter, establishing baseline 3. The description adds conceptual context by referencing 'cached morning report' which aligns with the parameter's function, but does not add syntax, format, or additional semantic details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' + resource 'Google Search Console performance report' and enumerates exact data points returned (clicks, impressions, CTR, etc.). The explicit 'Google Search Console' branding clearly distinguishes it from sibling SEO tools like 'seo.links.analyze' or 'seo.headings.check'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implicit usage context through cost disclosure ('Free (0 credits)') and auth state handling ('Returns connect URL if GSC is not linked'), but lacks explicit comparison to sibling tools or clear when-to-use/when-not-to-use guidance against alternatives like 'gaps.generate' or 'competitors.scout'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
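The dual return behavior the description documents (a report when GSC is linked, a connect URL when it is not) suggests a simple branch in calling code. A minimal sketch, with the caveat that the field name connect_url, the report payload shape, and the call_tool transport are all assumptions; the listing only states that a connect URL is returned when GSC is not linked and that results are cached unless force_refresh is set.

```python
def fetch_gsc_report(call_tool, force_refresh=False):
    """Call gsc.report and branch on its two documented outcomes."""
    resp = call_tool("gsc.report", {"force_refresh": force_refresh})
    if "connect_url" in resp:
        # GSC account not linked yet: surface the URL to the user.
        return ("needs_auth", resp["connect_url"])
    return ("report", resp)

# Stub transports simulating both branches for illustration.
def unlinked(_name, _args):
    return {"connect_url": "https://example.com/connect/gsc"}

def linked(_name, args):
    return {"clicks": 120, "impressions": 4500,
            "cached": not args["force_refresh"]}
```

For example, fetch_gsc_report(unlinked) yields the needs_auth branch, while fetch_gsc_report(linked, force_refresh=True) bypasses the cached morning report.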

ingest.batch — Grade C

Create ingestion jobs in batch mode for multiple URLs.

Parameters (JSON Schema)
- urls (required): Array of 1-20 source URLs to ingest in batch
- callback_url (optional): Optional HTTPS webhook URL for batch completion
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations indicate this is a non-readonly, non-idempotent, open-world operation, the description fails to explain what ingestion entails (fetching external content?), whether processing is asynchronous (implied by callback_url parameter but not stated), or job lifecycle details. With openWorldHint=true, it should disclose external fetch behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, compact sentence with the action front-loaded. However, extreme brevity leaves gaps in behavioral context. No filler words, though arguably undersized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple 2-parameter schema and lack of output schema, the description covers the basic function but should mention the asynchronous/job-based nature implied by callback_url and openWorldHint. Adequate but minimal for a batch job creation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description mentions 'multiple URLs' which aligns with but doesn't extend beyond the schema's 'Array of 1-20 source URLs'. It adds no semantic context for callback_url (e.g., webhook on completion) or URL format requirements.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a clear verb ('Create') and resource ('ingestion jobs'), and specifies 'batch mode for multiple URLs' which distinguishes it from the likely single-URL sibling 'ingest.create'. However, it doesn't explicitly contrast with siblings or clarify the scope of 'ingestion'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use batch ingestion versus the single-URL 'ingest.create', or prerequisites like URL accessibility. It omits the 20-URL limit mentioned in the schema and doesn't explain when the callback_url should be provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ingest.content.get — Grade A (Read-only, Idempotent)

Fetch extracted content payload for a completed ingestion job.

Parameters (JSON Schema)
- id (required): Unique identifier (UUID)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety (readOnlyHint, destructiveHint), so the description's burden is lighter. It adds value by specifying the return is the 'extracted content payload' (the actual data) rather than job metadata. It does not contradict annotations and doesn't mention size limits or payload format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence (7 words) with action verb front-loaded. No redundant words or repetition of schema/annotation data.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter read tool with complete annotations and schema, the description adequately covers the domain context (content extraction from completed jobs). No output schema exists, but the description hints at the return value type ('payload').

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the 'id' parameter fully documented as a UUID. The description adds no parameter-specific details, but with comprehensive schema coverage, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Fetch' and clearly identifies the resource as 'extracted content payload' for a 'completed ingestion job'. This implicitly distinguishes it from sibling ingest.get (likely job metadata) by specifying content retrieval, though it doesn't explicitly contrast with alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'completed ingestion job' implies a prerequisite state (don't use on running jobs), providing implicit workflow context. However, it lacks explicit guidance on when to use this vs. ingest.get or the full workflow sequence (create → get → content.get).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
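The create → get → content.get sequence inferred above can be made concrete with a runnable sketch. Everything here is an assumption layered on the listing: FakeIngest and its response fields (id, status, content) are invented, and ingest.get itself is only implied as a sibling status tool rather than documented on this page.

```python
import uuid

class FakeIngest:
    """Hypothetical MCP client simulating the ingestion job lifecycle."""

    def __init__(self):
        self._jobs = {}

    def call_tool(self, name, args):
        if name == "ingest.create":
            job_id = str(uuid.uuid4())
            # New job starts processing; completes after one status poll.
            self._jobs[job_id] = {"status": "processing", "polls": 1}
            return {"id": job_id}
        job = self._jobs[args["id"]]
        if name == "ingest.get":
            if job["polls"] > 0:
                job["polls"] -= 1
            else:
                job["status"] = "completed"
            return {"id": args["id"], "status": job["status"]}
        if name == "ingest.content.get":
            # Per the description, only valid for completed jobs.
            if job["status"] != "completed":
                raise RuntimeError("job not completed yet")
            return {"id": args["id"], "content": "extracted text"}
        raise ValueError(f"unknown tool: {name}")

def ingest_url(client, url, max_polls=5):
    """Create a job, poll its status, then fetch the extracted payload."""
    job = client.call_tool("ingest.create", {"url": url})
    for _ in range(max_polls):
        status = client.call_tool("ingest.get", {"id": job["id"]})
        if status["status"] == "completed":
            return client.call_tool("ingest.content.get", {"id": job["id"]})
    raise TimeoutError("ingestion did not complete")
```

Gating content.get on a completed status mirrors the "completed ingestion job" prerequisite in the description; calling it earlier is treated as an error in this sketch.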

ingest.create — Grade B

Create an ingestion job from a source URL.

Parameters (JSON Schema)
- url (required): Source URL to ingest (YouTube, web, PDF, audio)
- callback_url (optional): Optional HTTPS webhook URL for completion notification
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While the description doesn't contradict annotations (correctly implying a write operation with 'Create'), it adds minimal behavioral context beyond the structured hints. It mentions 'job' implying async processing but fails to clarify the non-idempotent nature (duplicate jobs possible), webhook callback behavior, or error handling for inaccessible URLs despite openWorldHint=true indicating external resources.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single-sentence description is extremely compact with no wasted words. However, it may be overly concise for the tool's complexity—it front-loads the action but lacks supporting context that would typically follow for an async job creation endpoint.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this creates async jobs (implied by 'job') with webhook callbacks and returns no output schema documentation, the description is insufficient. It omits critical lifecycle context: what the tool returns (presumably a job ID), processing expectations, polling vs. callback patterns, and relationship to 'ingest.get' for status checking.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
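The create → get → content.get lifecycle the review alludes to can be sketched as a simple polling loop. This is a hedged sketch only: `call_tool`, the job-state names, and the returned field names (`id`, `status`, `error`, `content`) are assumptions, since the server publishes no output schema.

```python
import time

# Hypothetical polling workflow around the ingest.* tools. In practice,
# call_tool would be an MCP client's tool-invocation function; the job
# states ("completed", "failed") and result fields are assumed.
def poll_ingestion(call_tool, url, interval=2.0, timeout=60.0):
    """Create an ingestion job, poll until done, then fetch the payload."""
    job = call_tool("ingest.create", {"url": url})  # assumed to return a job id
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = call_tool("ingest.get", {"id": job["id"]})
        if status["status"] == "completed":
            # Only a completed job has an extracted content payload.
            return call_tool("ingest.content.get", {"id": job["id"]})
        if status["status"] == "failed":
            raise RuntimeError(f"ingestion failed: {status.get('error')}")
        time.sleep(interval)
    raise TimeoutError("ingestion did not complete in time")
```

A callback_url, if supplied, would presumably replace this polling loop with a push notification; the description does not say which pattern is preferred.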

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the JSON schema fully documents both parameters including the callback_url's purpose. The description itself mentions no parameters, but meets the baseline expectation since the schema carries the full semantic load. No additional parameter guidance (e.g., URL format restrictions beyond schema) is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a clear specific verb ('Create') and resource ('ingestion job') and identifies the source input ('URL'). However, it fails to distinguish from the sibling tool 'ingest.batch', leaving ambiguity about when to use single vs. batch ingestion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., 'ingest.batch'), nor does it mention prerequisites, authentication requirements for external URLs, or expected processing time. It offers no 'when-not-to-use' exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ingest.get — Grade B
Read-only · Idempotent

Get ingestion job status/result by ID.

Parameters:
- id (required): Unique identifier (UUID)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, establishing safety. The description adds that the tool retrieves both 'status' and 'result', hinting at job lifecycle data, but lacks details on completion states, error conditions, or caching behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at 7 words with no filler. Front-loaded with verb and resource. Minor deduction for the 'status/result' slash construction, which slightly obscures whether the tool returns status, result, or both distinctly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter read operation with good annotations, the description is minimally adequate. However, it lacks explanation of the ingestion job lifecycle, what status values to expect, or how this relates to 'ingest.create', leaving gaps for agent reasoning.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the UUID parameter fully documented including format constraints. The description mentions 'by ID' which aligns with the schema but adds no additional semantic context (e.g., 'ID returned from ingest.create'). Baseline 3 appropriate given schema completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Get' and resource 'ingestion job status/result', making the basic purpose clear. However, it does not explicitly differentiate from sibling 'ingest.content.get' (which presumably retrieves content rather than job metadata), though 'status/result' implies this distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives, or that it should be called after 'ingest.create' to poll for job completion. No mention of polling patterns or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
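As an illustration of the "use X after Y" guidance this dimension asks for, a revised description for ingest.get might read as follows. The wording is hypothetical, not the vendor's actual text; the sibling tool names come from this listing.

```python
# Illustrative rewrite only: the guidance text is an assumption about good
# practice, not something the server ships today.
IMPROVED_INGEST_GET_DESCRIPTION = (
    "Get ingestion job status/result by ID. Call after ingest.create and "
    "poll until status is 'completed' or 'failed'; once completed, fetch "
    "the extracted payload with ingest.content.get. Read-only and safe "
    "to retry."
)
```

A description in this shape encodes the workflow sequence, the terminal states, and the hand-off to the sibling tool in a single sentence pair.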

leadmagnet.archive — Grade B
Idempotent

Archive a lead magnet by ID.

Parameters:
- id (required): Unique identifier (UUID)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare idempotentHint=true and destructiveHint=false, confirming the operation is safe and reversible. The description aligns with these hints by using 'Archive' instead of 'Delete', but it doesn't add context about reversibility, side effects, or whether archived items can be retrieved via 'leadmagnet.get'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero redundancy. It front-loads the action verb and wastes no words, which is appropriate for a single-parameter operation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input schema (one UUID) and helpful annotations covering safety/idempotency, the description is minimally sufficient. However, it lacks explanation of the archive state machine (e.g., can archived items be unarchived?) and omits output expectations, leaving minor gaps for a state-modification tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the parameter 'id' is fully documented in the schema as 'Unique identifier (UUID)'. The description mentions 'by ID' which acknowledges the parameter but adds no semantic depth beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Archive') and resource ('lead magnet') with clear scope ('by ID'). However, it doesn't clarify the archive semantics—whether this hides the item from public view, moves it to cold storage, or acts as a soft delete—which would help distinguish it from the 'publish' sibling tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'leadmagnet.publish' or if there are prerequisites. The description merely restates the tool's name without workflow context or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

leadmagnet.generate — Grade A

Start lead magnet generation (checklist/swipe/framework) and return polling instructions.

Parameters:
- type (required): Lead magnet type
- niche (optional): Target niche or industry
- topic (required): Lead magnet topic or subject
- language (required): Content language code (default: en)
- platform (required): Target social platform for distribution (default: twitter)
- auto_publish (required): Automatically publish after generation
- generate_images (required): Generate cover images for lead magnet
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=false (write operation) and destructiveHint=false, but the description adds crucial behavioral context: it reveals this is an async operation requiring polling (not immediate completion) and identifies the return value type (polling instructions). This is significant added value beyond the structured annotations, though it omits details like idempotency risks or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero redundancy. It front-loads the action verb 'Start,' immediately states the resource, uses parentheses to concisely enumerate valid options, and ends with the return value pattern. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 7-parameter async generation tool with no output schema, the description provides the minimum viable context by mentioning the polling pattern. However, given the complexity (multiple content types, image generation options, auto-publish flag), it could improve by briefly explaining what the polling instructions contain or the expected lifecycle of the generated resource.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, meaning all parameters are fully documented in the schema (type descriptions, enums, defaults). The description parenthetically repeats the three enum values for 'type,' which provides confirmatory context but doesn't add semantic depth beyond what the schema already defines. Baseline 3 is appropriate given comprehensive schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Start lead magnet generation'), specifies the resource, and lists the supported subtypes (checklist/swipe/framework) in parentheses. The mention of 'return polling instructions' distinguishes this from sibling tools like leadmagnet.get (retrieval) and leadmagnet.publish (distribution), though it doesn't explicitly name those alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies asynchronous usage by mentioning 'polling instructions,' signaling that this initiates a background job rather than returning immediate results. However, it lacks explicit guidance on when to use this versus leadmagnet.get or prerequisites like required account setup, leaving usage somewhat implied rather than explicitly guided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
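A fuller tool definition incorporating this review's suggestions (async lifecycle, sibling cross-references, explicit duplicate-job risk) could look like the sketch below. Field names follow common MCP tool-definition conventions; the description text and the annotation values beyond those shown in this listing are assumptions.

```python
# Hedged sketch of an improved leadmagnet.generate definition. Only the
# tool names and enum values come from this listing; the prose is ours.
LEADMAGNET_GENERATE = {
    "name": "leadmagnet.generate",
    "description": (
        "Start async generation of a lead magnet (checklist, swipe file, "
        "or framework). Returns a job ID plus polling instructions; check "
        "progress with leadmagnet.get. Set auto_publish=false to review "
        "the draft before calling leadmagnet.publish."
    ),
    "annotations": {
        "readOnlyHint": False,
        "destructiveHint": False,
        "idempotentHint": False,  # repeated calls start duplicate jobs
    },
}
```

This keeps the original one-sentence conciseness while adding the lifecycle and sibling hand-offs the Completeness and Usage Guidelines dimensions flag as missing.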

leadmagnet.get — Grade A
Read-only · Idempotent

Fetch lead magnet status/result by ID.

Parameters:
- id (required): Unique identifier (UUID)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true. The description adds 'status/result' hinting at return value semantics (helpful since no output schema exists), but doesn't disclose error behavior, rate limits, or cache behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely compact at 6 words with front-loaded verb 'Fetch'. Zero redundancy, though slightly terse given the tool's null title. Could benefit from one clarifying clause about return value structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple single-parameter read operation. Mentions 'status/result' compensating somewhat for absent output schema. Sibling differentiation is clear from name and description combination.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with 'id' well-described as 'Unique identifier (UUID)'. Description references 'by ID' but adds minimal semantic depth beyond the schema. Baseline 3 appropriate given schema carries the load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Fetch' with clear resource 'lead magnet status/result' and scope 'by ID'. It clearly distinguishes from siblings: generate (create), publish (state change), and archive (removal) by specifying this is a retrieval operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage pattern (retrieve specific item by ID vs generation), but lacks explicit guidance on when to use relative to leadmagnet.generate workflow or alternatives like listing. No explicit 'when-not' or prerequisites stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

leadmagnet.publish — Grade C
Idempotent

Publish a lead magnet by ID.

Parameters:
- id (required): Unique identifier (UUID)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description does not contradict the annotations (idempotent write operation, non-destructive). However, it fails to leverage the description to explain what 'publish' means in this domain (e.g., 'makes the lead magnet publicly accessible', 'changes status from draft to live') or clarify the idempotent behavior that the annotation hints at.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at only 5 words. While there is no extraneous text or repetition, the brevity borders on under-specification. It is appropriately front-loaded with the action verb.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter input and the presence of idempotency/safety annotations, the description is minimally adequate. However, for a state-transition operation with an opposite sibling (`archive`), it should explain the publication semantics and reversibility to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (the `id` parameter is fully documented as 'Unique identifier (UUID)'), the baseline is 3. The description mentions 'by ID' but adds no syntax guidance, examples, or semantic clarification beyond what the schema's pattern and format already provide.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the basic action (publish) and resource (lead magnet) with the ID parameter mentioned. However, it offers no scope to distinguish from sibling operations like `article.publish` or explain the business logic difference between `leadmagnet.publish` and `leadmagnet.archive`/`generate`. It restates the tool name with minimal elaboration.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus `leadmagnet.generate` (create) or `leadmagnet.archive` (likely the reverse operation). There is no mention of prerequisites, such as whether the lead magnet must be in a specific draft state before publishing, or workflow sequencing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
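The publish/archive ambiguity raised above can be made concrete with a tiny state sketch. Everything here is hypothetical: the server documents no lifecycle states, and whether an archived lead magnet can be re-published is exactly the open question the review identifies.

```python
from enum import Enum

# Hypothetical lifecycle inferred from the generate/publish/archive
# siblings; every state name and transition below is an assumption.
class State(Enum):
    DRAFT = "draft"
    PUBLISHED = "published"
    ARCHIVED = "archived"

def apply_action(state: State, action: str) -> State:
    """Model the two tools as idempotent transitions: repeating publish
    or archive is a no-op, matching each tool's idempotentHint."""
    if action == "publish":
        # draft or archived -> published; already-published stays published
        return State.PUBLISHED
    if action == "archive":
        return State.ARCHIVED
    raise ValueError(f"unknown action: {action}")
```

A description that spelled out even this much ("changes status from draft to published; safe to repeat") would lift the Behavior and Usage Guidelines scores.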

personas.list — Grade B
Read-only · Idempotent

List available writing personas.

Parameters: none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover the safety profile (readOnly, idempotent, non-destructive), so the description carries a lower burden. It adds valuable domain context by specifying 'writing' personas, helping the agent understand this relates to content creation. However, it lacks operational details like pagination, caching behavior, or the structure/format of returned persona objects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of a single, efficient sentence with zero redundant words. Every word earns its place—'List' identifies the operation, 'available' indicates these are pre-configured options, and 'writing personas' specifies the domain and resource type.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only listing tool with comprehensive safety annotations, the description is minimally adequate. However, without an output schema, it should ideally clarify what constitutes a 'persona' (e.g., tone, voice parameters) or indicate that results are used with generation tools like article.generate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters and 100% schema coverage (vacuously true), the baseline score applies. The description correctly implies no filtering or configuration is needed, consistent with an empty parameter schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb (List) and identifies the resource precisely as 'writing personas,' distinguishing it from other list operations like products.list or article.list. However, it does not explicitly differentiate functionality from siblings within the same domain (e.g., how this differs from products.list beyond the resource type).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to invoke this tool versus alternatives, nor does it mention prerequisites. Given siblings like adapt.generate and article.generate likely consume these personas, explicit guidance linking this listing tool to those consumption tools would be valuable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

products.create — Grade B

Upload product knowledge document.

Parameters:
- title (required): Product/document title
- content (required): Full product description or document text
- source_url (optional): Original source URL
- source_name (optional): Original source filename
- source_type (optional): How the product was added
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly=false and destructive=false, confirming this is a safe write operation. The description adds domain context by specifying 'product knowledge document' but omits behavioral details like the implications of idempotentHint=false (duplicate creation risk) or post-upload processing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at four words with zero redundancy. However, for a mutation tool with 5 parameters and sibling alternatives, the brevity approaches under-specification despite lacking waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for basic invocation given complete parameter schemas and annotations, but lacks completeness regarding operational behavior (idempotency warnings) and differentiation from similar ingestion tools in the sibling set.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for all 5 parameters including enum values. The description adds no parameter-specific semantics, meeting the baseline expectation when the schema carries the full documentation burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a specific verb (upload) and resource (product knowledge document), clearly indicating the tool creates content. However, it fails to distinguish from sibling tools like `ingest.create` or `ingest.batch` which may perform similar document ingestion functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives (e.g., `ingest.create` for general content), no prerequisites for upload, and no conditions that would prevent usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

products.delete — Grade B
Destructive · Idempotent

Delete product knowledge document by ID.

Parameters:
- id (required): Unique identifier (UUID)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description aligns with annotations (delete implies destructive) and does not contradict them. However, it adds minimal behavioral context beyond the annotations—it fails to mention the idempotent nature of the operation (idempotentHint=true) or what happens when the specified ID does not exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no redundant words. Every element serves to clarify the operation, making it appropriately front-loaded for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single parameter, no output schema, complete annotations), the description is minimally sufficient. It covers the essential operation scope, though it could be improved by noting the idempotent behavior or typical error scenarios.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description mentions 'by ID', which maps to the required parameter, but adds no additional semantic information (such as where to obtain the UUID or validation rules) beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a clear verb ('Delete'), specific resource ('product knowledge document'), and scope ('by ID'). It effectively distinguishes this tool from siblings like 'products.create' or 'article.delete' through the resource name, though it could further clarify the domain-specific meaning of 'product knowledge document'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus alternatives (e.g., archiving), no mention of prerequisites (such as verifying the document exists), and no warnings about the permanent nature of deletion despite the destructiveHint annotation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

products.list — Grade B
Read-only · Idempotent

List product knowledge documents.

Parameters: none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations fully cover safety profile (readOnly, idempotent, non-destructive). Description adds specificity that these are 'knowledge documents' rather than just products, but omits pagination, result limits, or ordering behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at four words. Front-loaded with verb first. No redundancy or wasted sentences, though minimalism comes at the cost of completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for simple read-only operation with strong annotation coverage, but could clarify the relationship between 'product knowledge documents' and the product entities manipulated by other product.* tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters per schema, triggering baseline score of 4. Description correctly implies no filtering parameters are accepted, consistent with the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'List' and resource 'product knowledge documents', but fails to distinguish from sibling 'products.search' which likely offers filtering capabilities this simple list lacks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus products.search or products.create. No mention of prerequisites or expected use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

products.search (Grade B)
Read-only, Idempotent

Search product knowledge documents.

Parameters (JSON Schema):
- limit (optional): Max results, 1-20
- query (required): Semantic search query against product knowledge base
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare this is read-only and safe (destructiveHint=false, readOnlyHint=true), so the description carries a lighter burden. It adds the resource scope ('product knowledge documents') but fails to disclose behavioral traits like the semantic/ranking nature of results, pagination behavior, or what happens when the knowledge base is empty.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is efficiently structured without redundancy, but is arguably too minimal given the tool's semantic complexity and lack of output schema. It wastes no words, yet omits necessary context that should have been included.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Incomplete for a search tool with no output schema. It fails to describe what 'product knowledge documents' contain (features, FAQs, specs?), doesn't clarify the semantic vs keyword distinction revealed in the schema, and provides no guidance on interpreting results or handling the 20-item limit.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage ('Semantic search query...', 'Max results...'), the baseline is 3. The description adds no information about parameter semantics beyond what the schema already documents, but is not expected to given the comprehensive schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the verb 'Search' and resource 'product knowledge documents', distinguishing it from the sibling 'products.list' which presumably returns product entities rather than documents. However, it misses the opportunity to explicitly clarify this distinction or mention the semantic search nature implied by the schema.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'products.list' (for browsing) or 'ingest.content.get' (for direct retrieval by ID). There are no prerequisites, conditions, or exclusions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

schedule.gaps (Grade B)
Read-only, Idempotent

List schedule gaps for upcoming days.

Parameters (JSON Schema):
- days (optional): Number of upcoming days to check, 1-30
- timezone (optional): IANA timezone string, e.g. America/New_York
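Both constraints in the schema above can be checked client-side before the call. A minimal sketch, using the standard-library `zoneinfo` module to verify the IANA identifier (the helper itself is illustrative):

```python
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

def validate_gaps_args(days: int, timezone: str) -> dict:
    """Check schedule.gaps arguments before sending the call.

    The schema bounds `days` to 1-30 and expects an IANA timezone
    identifier such as "America/New_York"; the checks below mirror
    those constraints.
    """
    if not 1 <= days <= 30:
        raise ValueError("days must be between 1 and 30")
    try:
        ZoneInfo(timezone)  # raises for unknown identifiers
    except ZoneInfoNotFoundError:
        raise ValueError(f"not a valid IANA timezone: {timezone}")
    return {"days": days, "timezone": timezone}

print(validate_gaps_args(14, "America/New_York"))
```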
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already disclose readOnly/idempotent/destructive properties, so the safety profile is covered. The description adds the temporal constraint ('upcoming days'), which is useful context. However, it omits what constitutes a 'gap' (duration thresholds? content slots?), return format, or whether gaps are calculated from existing scheduled items.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is efficiently structured with the action verb front-loaded. However, the extreme brevity leaves room for one additional sentence covering return values or usage context without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, yet the description provides no indication of what data structure or content represents a 'gap' (date ranges? time slots? availability windows?). For a read-only tool without return documentation, the description should compensate by describing the conceptual output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the parameters are self-documenting. The description mentions 'upcoming days', which loosely references the 'days' parameter, but adds no syntax guidance, examples, or semantic context beyond what the schema already provides. Baseline 3 is appropriate when the schema carries the load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a clear verb ('List') and resource ('schedule gaps'), and the 'upcoming days' qualifier helps distinguish it from sibling tools like schedule.list (which likely lists scheduled items rather than gaps) and gaps.list (which appears to be a different gap-related function). However, it could better clarify what 'schedule' refers to in this content-marketing domain (e.g., content calendar gaps vs. meeting availability).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention when to prefer schedule.list over schedule.gaps, nor does it indicate prerequisites like requiring a timezone parameter for accurate gap detection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

schedule.list (Grade B)
Read-only, Idempotent

List upcoming article/post/social schedule timeline.

Parameters (JSON Schema):
- to (optional): End date filter (ISO 8601)
- from (optional): Start date filter (ISO 8601)
- type (optional): Filter by schedule entry type
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, non-destructive). Description adds 'upcoming' behavioral constraint implying future-date filtering, but lacks details on pagination, time window semantics, or return structure. With good annotations, this is an acceptable baseline.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, 7 words, action-fronted. However, the slash-separated 'article/post/social' is slightly informal and 'schedule timeline' is slightly redundant. Generally efficient with no fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks output schema description and explicit differentiation from schedule.gaps. However, with comprehensive input schema (100% coverage), clear annotations, and optional parameters only, it meets minimum viability for a read-only listing tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for all three parameters. Description confirms the 'type' enum values (article/post/social) but adds no syntax details, format examples, or semantic relationships between date parameters beyond schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Uses specific verb 'List' and identifies resource as 'schedule timeline' scoped to 'article/post/social'. Distinguishes from article.list by mentioning schedule, though could better differentiate from sibling schedule.gaps.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like schedule.gaps (which likely identifies scheduling conflicts) or article.list. No prerequisites or explicit use cases mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.reddit (Grade B)

Run Reddit scout analysis. Returns processing status — poll scout.reddit.result for results.

Parameters (JSON Schema):
- limit (required): Max results to return, 1-50
- query (required): Search query for Reddit scout
- subreddits (required): Subreddit names to search within
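The initiate-then-poll workflow the description implies can be sketched as below. `call_tool` stands in for whatever tool-call method your MCP client exposes, and the `runId`/`status` field names are assumptions, since the server publishes no output schema:

```python
import time

def run_reddit_scout(call_tool, query, subreddits, limit,
                     poll_interval=2.0, timeout=120.0):
    """Kick off scout.reddit, then poll scout.reddit.result by run ID."""
    started = call_tool("scout.reddit", {
        "query": query, "subreddits": subreddits, "limit": limit,
    })
    run_id = started["runId"]  # assumed field name
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = call_tool("scout.reddit.result", {"runId": run_id})
        if result.get("status") != "processing":  # assumed status value
            return result
        time.sleep(poll_interval)
    raise TimeoutError(f"scout run {run_id} still processing after {timeout}s")
```

A timeout guard like this matters for async tools: without one, an agent that misreads "processing" as terminal can poll forever.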
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate openWorldHint=true and readOnlyHint=false; the description adds valuable context that this initiates a background job returning processing status rather than immediate results. It fails to disclose what kind of analysis is performed (sentiment, keyword trends, engagement metrics) or timeout/rate-limit implications of the external Reddit API.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero redundancy. The second sentence containing the polling instruction is high-value and appropriately placed. The first sentence 'Run Reddit scout analysis' is front-loaded but wastes space on vague verbiage rather than defining the analytical scope.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the async workflow adequately but leaves significant gaps given the tool's complexity. With no output schema provided, it should explain what 'analysis' entails and what data structure scout.reddit.result will eventually return. It mentions the processing status return but not error states or job expiration.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents the query string, subreddit array, and limit integer. The description adds no additional semantic value—no examples of query syntax, whether subreddits should include 'r/' prefix, or guidance on limit selection for different analysis depths. Baseline 3 is appropriate given complete schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool performs 'Reddit scout analysis' which is vague—'scout' is jargon that restates the tool name without clarifying whether this is searching, monitoring, sentiment analysis, or data extraction. While it identifies Reddit as the target resource, it fails to specify the analytical output or value produced.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides critical procedural guidance that this is an asynchronous operation requiring polling of scout.reddit.result for results. However, it lacks differentiation from sibling scout tools (scout.x, competitors.scout) and doesn't explain when Reddit scouting is preferable to other intelligence-gathering tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.reddit.result (Grade A)
Read-only, Idempotent

Get Reddit scout run status and results by run ID.

Parameters (JSON Schema):
- runId (required): Reddit scout run UUID
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and idempotentHint=true, covering safety profile. The description adds that it retrieves 'status and results' but omits behavioral details like what happens if the run is pending, rate limits, or whether results are partial/incremental.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no redundant words. Verb ('Get'), object ('Reddit scout run status and results'), and qualifier ('by run ID') are efficiently positioned without filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple retrieval tool with 1 well-documented parameter and full annotation coverage. Mentioning 'status and results' provides expected output context despite lack of output schema, though explicit relationship to 'scout.reddit' would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the runId parameter fully documented as 'Reddit scout run UUID'. The description mentions 'by run ID' which aligns with but does not significantly expand upon the schema definition, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Get' with clear resource 'Reddit scout run status and results'. The phrase 'by run ID' effectively distinguishes this from sibling 'scout.reddit' (likely initiation) by implying this retrieves existing runs rather than creating them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While 'by run ID' implies this follows a run creation step, the description lacks explicit workflow guidance such as 'Use this after initiating a scout with scout.reddit' or guidance on polling vs final results.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.x (Grade A)

Run X/Twitter scout analysis. Returns processing status — poll scout.x.result for results.

Parameters (JSON Schema):
- mode (required): Scout mode: fast or ultimate. Default: fast
- limit (required): Max results to return, 1-50
- query (required): Search query for X/Twitter scout
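The mode enum and limit bound from the schema above can be mirrored client-side. A minimal sketch (the helper name is hypothetical; the "fast"/"ultimate" values follow the schema as documented in the Parameters note below):

```python
def build_x_scout_args(query: str, limit: int, mode: str = "fast") -> dict:
    """Assemble scout.x arguments.

    Per the schema, `mode` is "fast" or "ultimate" (defaulting to
    "fast") and `limit` must be within 1-50; these checks simply
    mirror those constraints before the call is sent.
    """
    if mode not in ("fast", "ultimate"):
        raise ValueError('mode must be "fast" or "ultimate"')
    if not 1 <= limit <= 50:
        raise ValueError("limit must be between 1 and 50")
    return {"query": query, "limit": limit, "mode": mode}

print(build_x_scout_args("ai marketing trends", 25, mode="ultimate"))
```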
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly=false, openWorld=true (external service), and idempotent=false. Description adds critical behavioral context not in annotations: the async pattern (returns status, requires polling) and the explicit relationship to scout.x.result. Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence states purpose; second states return type and critical polling instruction. Front-loaded with action, no filler words, appropriate length for async initiation tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers async nature (no output schema exists, so 'Returns processing status' is essential). Explains the scout.x.result relationship. Could optionally clarify what 'scout analysis' entails (search/monitoring) but schema handles query specifics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with documented query, mode (fast/ultimate), and limit parameters. Description adds no parameter-specific semantics, but baseline 3 is appropriate given complete schema documentation. No parameter details are mentioned in description text.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific purpose: 'Run X/Twitter scout analysis' uses specific verb (Run) and resource (X/Twitter scout analysis). Effectively distinguishes from sibling scout.reddit (different platform) and scout.x.result (fetch results vs initiate analysis).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance: 'Returns processing status — poll scout.x.result for results' clearly indicates this initiates an async job and names the specific sibling tool to call for completion. Lacks explicit guidance on choosing between scout.x vs scout.reddit (platform selection).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scout.x.result (Grade B)
Read-only, Idempotent

Get X scout run status and results by run ID.

Parameters (JSON Schema):
- runId (required): X scout run UUID
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, confirming safe read behavior. The description adds useful context that both status and results are retrieved (not just results), but omits behavioral details like error handling for invalid UUIDs or result availability timing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with verb and resource. Every word serves a purpose; no redundancy or filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple single-parameter retrieval tool. Mentions both status and results, providing hint about return structure despite lack of output schema. Could strengthen by noting relationship to scout.x initiation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with runId described as 'X scout run UUID'. The description mentions 'by run ID' which aligns with the schema but adds no additional semantic depth regarding the parameter's origin (e.g., returned by scout.x) or format constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (Get) and resource (X scout run status and results) with clear scope (by run ID). The 'X' distinguishes from scout.reddit.result sibling, though it could explicitly clarify this retrieves results of existing runs versus initiating new ones.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use versus alternatives. It does not indicate this should be called after scout.x to poll for results, nor does it contrast with scout.reddit.result beyond the implicit platform reference in the name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

seo.headings.check (Grade B)
Read-only, Idempotent

Analyze heading hierarchy (H1-H6) for a page.

Parameters (JSON Schema):
- url (required): Target page URL to analyze
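The Behavior note below names hierarchy validation, skipped levels, and multiple-H1 detection as plausible checks. A local sketch of that kind of audit, built on the standard-library `html.parser` (not the server's actual implementation):

```python
from html.parser import HTMLParser

HEADINGS = ("h1", "h2", "h3", "h4", "h5", "h6")

class HeadingCollector(HTMLParser):
    """Record the level of every H1-H6 tag in document order."""
    def __init__(self):
        super().__init__()
        self.levels: list[int] = []

    def handle_starttag(self, tag, attrs):
        if tag in HEADINGS:
            self.levels.append(int(tag[1]))

def audit_headings(html: str) -> dict:
    collector = HeadingCollector()
    collector.feed(html)
    levels = collector.levels
    issues = []
    if levels.count(1) != 1:
        issues.append("expected exactly one H1")
    for prev, cur in zip(levels, levels[1:]):
        if cur - prev > 1:
            issues.append(f"skipped level: H{prev} -> H{cur}")
    return {"levels": levels, "issues": issues}

print(audit_headings("<h1>Title</h1><h3>Too deep</h3>"))
```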
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Specifies the H1-H6 scope, which adds context beyond the annotations. However, it fails to disclose what the analysis entails (e.g., hierarchy validation, skipped levels, multiple-H1 detection), despite there being no output schema to provide this detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient at 7 words. Front-loaded with action verb and specific scope. No redundant or wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read-only tool, the description adequately captures intent but misses the opportunity to specify analysis criteria or return structure, given the absence of an output schema. Sufficient but minimal.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for the URL parameter. Description implies the 'page' via the URL parameter but adds no semantic detail beyond the schema's 'Target page URL to analyze'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Analyze' and specific resource 'heading hierarchy (H1-H6)' distinguish this from sibling SEO tools like seo.meta_tags.check and seo.links.analyze. However, it lacks an explicit differentiation statement.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus other SEO analysis tools (e.g., when to check headings vs meta tags) or prerequisites like needing a valid public URL.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

seo.internal_links.plan (Grade B)
Read-only, Idempotent

Plan internal linking opportunities from source URL and target URLs/sitemap.

Parameters (JSON Schema):
- url (required): Source page URL for internal linking analysis
- sitemapUrl (optional): Optional sitemap URL to discover additional pages
- targetUrls (required): Array of target URLs to find linking opportunities
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare the operation is read-only and idempotent, the description adds no behavioral context about what constitutes a 'linking opportunity,' what data is fetched from external URLs (relevant given openWorldHint=true), or what the returned plan contains.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no redundant words. It front-loads the action ('Plan') and succinctly describes the inputs, making it easy to scan.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given there is no output schema, the description should explain what the 'plan' contains (e.g., anchor text suggestions, target pages, priority scores). It fails to describe the return value or behavioral side effects beyond the parameter inputs, leaving significant gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds value by clarifying the directional relationship between parameters: the 'url' is the source and 'targetUrls/sitemapUrl' are destinations for the linking analysis, which aids in correct parameter mapping.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'plans internal linking opportunities' using a specific source and targets. While it identifies the resource and operation, it does not explicitly differentiate from the sibling tool 'seo.links.analyze', which likely analyzes existing links rather than planning new ones.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'seo.links.analyze', nor does it mention prerequisites such as needing a valid sitemap or whether to provide targetUrls or sitemapUrl preferentially.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

seo.links.analyze (Grade B)
Read-only, Idempotent

Analyze internal/external links and link attributes.

Parameters (JSON Schema):
- url (required): Target page URL to analyze
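The internal/external distinction the description draws reduces to comparing each resolved link's host against the page's host. A local sketch using the standard-library `urllib.parse` (illustrative, not the server's implementation):

```python
from urllib.parse import urljoin, urlparse

def classify_links(page_url: str, hrefs: list[str]) -> dict:
    """Split hrefs into internal vs external relative to the page host."""
    page_host = urlparse(page_url).netloc
    buckets = {"internal": [], "external": []}
    for href in hrefs:
        absolute = urljoin(page_url, href)  # resolve relative hrefs
        kind = "internal" if urlparse(absolute).netloc == page_host else "external"
        buckets[kind].append(absolute)
    return buckets

print(classify_links("https://example.com/blog/",
                     ["/pricing", "post-2", "https://other.site/tool"]))
```

Resolving relative hrefs with `urljoin` before comparing hosts is what keeps root-relative and document-relative links from being misclassified.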
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While the description doesn't contradict the helpful annotations (readOnly, idempotent, openWorld), it adds minimal behavioral context beyond them. It doesn't clarify what the 'analysis' entails (e.g., broken link detection, anchor text extraction, attribute parsing) or network-fetching behavior implied by openWorldHint.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely compact at nine words with no filler. It leads with the action verb and immediately specifies scope. However, its brevity borders on under-specification, leaving no room to elaborate on analysis scope or output format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description fails to specify what the analysis returns (a link inventory, broken link report, attribute summary, etc.). For a tool with network behavior (openWorldHint) and single-input complexity, this omission leaves significant gaps in the agent's ability to predict results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description aligns with the schema's 'Target page URL to analyze' but adds no additional semantic value regarding the URL parameter's requirements or format beyond what the schema already provides (uri format, length constraints).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'analyzes internal/external links and link attributes,' specifying a concrete action (analyze) and scope (both link types plus attributes). However, it misses the opportunity to distinguish itself from the sibling `seo.internal_links.plan`, which plans link strategies, whereas this tool audits existing links on a specific URL.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to prefer this over `seo.internal_links.plan` or other SEO audit tools, nor are prerequisites mentioned (e.g., requiring a public URL or specific page state). The agent must infer appropriate usage from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

seo.meta_tags.check (Grade: B)
Read-only · Idempotent

Analyze meta tags for a target page URL.

Parameters (JSON Schema)
  url (required): Target page URL to analyze
Behavior: 3/5

While annotations declare readOnlyHint=true, idempotentHint=true, and openWorldHint=true (covering safety and retryability), the description adds minimal behavioral context. It does not disclose what specific checks are performed (e.g., title length, description presence, viewport configuration), error handling for unreachable URLs, or the nature of the analysis output.
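The checks the assessment speculates about (title length, description presence) might look like the following sketch. This is illustrative only; the thresholds and the helper `check_meta` are assumptions, not this server's implementation.

```python
# Illustrative sketch (not this server's code) of the kind of meta-tag
# checks such a tool might run: title length bounds and description presence.
def check_meta(title, description):
    issues = []
    if not title:
        issues.append("missing <title>")
    elif not 30 <= len(title) <= 60:  # common SEO guidance, assumed here
        issues.append(f"title length {len(title)} outside 30-60 chars")
    if not description:
        issues.append("missing meta description")
    return issues

print(check_meta("Short", None))
```

An agent calling the real tool cannot know which of these checks run; the sketch only shows why the description's silence on return shape matters.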

Conciseness: 5/5

The single sentence 'Analyze meta tags for a target page URL.' is appropriately brief and front-loaded with the action verb. There is no redundancy or waste.

Completeness: 3/5

Given the simple input (1 parameter), 100% schema coverage, and presence of detailed annotations, the description is minimally adequate. However, with no output schema provided, the description should ideally characterize what the analysis returns (validation results vs. extraction), which it fails to do.

Parameters: 3/5

With 100% schema description coverage, the 'url' parameter is fully documented in the schema itself ('Target page URL to analyze'). The description does not add syntax details, format examples, or semantic constraints beyond what the schema provides, warranting the baseline score for high-coverage schemas.

Purpose: 4/5

The verb 'Analyze' and resource 'meta tags' are specific, and specifying 'target page URL' clarifies the scope. It distinguishes from siblings like 'seo.headings.check' or 'seo.links.analyze' by naming the specific SEO element, though it doesn't clarify the difference versus 'seo.schema.validate' or 'seo.og.preview' which overlap conceptually.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus the numerous sibling SEO tools (e.g., 'seo.schema.validate', 'seo.og.preview', 'seo.headings.check'). There are no prerequisites, exclusion criteria, or workflow positioning hints.

seo.og_image.generate (Grade: B)

Generate an Open Graph image artifact and return hosted URL for brand assets.

Parameters (JSON Schema)
  logoUrl (optional): URL to brand logo image
  headline (optional): Main headline text
  template (optional): OG image template style
  brandName (optional): Brand or company name
  subheadline (optional): Secondary headline text
  primaryColor (optional): Brand primary color in hex format #RRGGBB
  brandDescription (optional): Short brand description
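Since every field is optional, a minimal call might send only a few branding values. The payload below is hypothetical, and the regex simply mirrors the schema's documented `#RRGGBB` constraint on `primaryColor`.

```python
import re

# Hypothetical seo.og_image.generate payload; all fields are optional per
# the schema, and primaryColor must match #RRGGBB.
payload = {
    "brandName": "Acme",
    "headline": "Ship faster",
    "primaryColor": "#1A2B3C",
}

HEX_RE = re.compile(r"^#[0-9A-Fa-f]{6}$")
assert HEX_RE.match(payload["primaryColor"]), "primaryColor must be #RRGGBB"
print("payload ok")
```

Client-side validation like this is useful precisely because the tool description does not say what happens on malformed color input.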
Behavior: 3/5

Annotations indicate non-read-only operation (readOnlyHint: false) and non-idempotent (idempotentHint: false). Description adds valuable output context ('return hosted URL') but fails to disclose that multiple calls create multiple hosted images (non-idempotent behavior) or persistence details.

Conciseness: 5/5

Single efficient sentence front-loaded with action verbs. Each phrase serves a distinct purpose: action (Generate), object (OG image artifact), output (hosted URL), context (brand assets). Zero redundancy.

Completeness: 3/5

Covers core generation purpose and return value. However, with 7 parameters and 0 required fields (unusual), description should flag that all attributes are optional or provide minimum viable input guidance. Also omits output format details despite no output schema existing.

Parameters: 3/5

Schema has 100% description coverage with clear parameter definitions. Description adds contextual framing 'for brand assets' linking the parameter group thematically, but adds no syntax guidance beyond schema. Baseline 3 appropriate given complete schema documentation.

Purpose: 4/5

Clear verb 'Generate' with specific resource 'Open Graph image artifact' and output 'hosted URL'. Missing explicit differentiation from sibling tool 'seo.og.preview' which likely validates existing tags rather than generating new images.

Usage Guidelines: 2/5

Provides context 'for brand assets' but lacks explicit guidance on when to invoke vs alternatives (particularly 'seo.og.preview'). No mention of prerequisites like required brand assets or when generation is preferred over using existing images.

seo.og.preview (Grade: A)
Read-only · Idempotent

Extract Open Graph and Twitter card preview metadata.

Parameters (JSON Schema)
  url (required): Target page URL to analyze
Behavior: 4/5

Annotations already declare readOnly and openWorld (external URL fetching). Description adds valuable domain specificity that Open Graph and Twitter Card protocols are being targeted, not just generic metadata extraction.

Conciseness: 5/5

Single sentence with zero waste. Front-loaded with action verb and specific resource type. Every word earns its place.

Completeness: 4/5

Appropriate for a simple single-parameter read-only tool. While output schema is absent, description adequately indicates the tool returns metadata. Safety profile is covered by annotations.

Parameters: 3/5

Schema coverage is 100% with 'url' parameter fully documented as 'Target page URL to analyze'. Description offers no additional parameter details, which is acceptable given the schema's completeness.

Purpose: 5/5

Description uses specific verb 'Extract' with clear resource 'Open Graph and Twitter card preview metadata', distinguishing it from sibling tool 'seo.og_image.generate' (which generates images) and 'seo.meta_tags.check' (general meta tags).

Usage Guidelines: 3/5

No explicit when-to-use or alternative guidance provided. Usage is implied by the description (use when you need social media metadata), but lacks explicit prerequisites or comparisons to similar SEO inspection tools.

seo.robots.check (Grade: B)
Read-only · Idempotent

Check robots.txt availability and parsed directives.

Parameters (JSON Schema)
  url (required): Target page URL to analyze
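A rough sketch of the kind of parsing such a check implies (Sitemap and Disallow directives); the helper `parse_robots` is an assumption for illustration, not the server's code.

```python
# Illustrative sketch of extracting the directives a robots.txt check
# might surface; not this server's actual implementation.
def parse_robots(text):
    out = {"sitemaps": [], "disallow": []}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop trailing comments
        if ":" not in line:
            continue
        key, value = (part.strip() for part in line.split(":", 1))
        if key.lower() == "sitemap":
            out["sitemaps"].append(value)
        elif key.lower() == "disallow" and value:
            out["disallow"].append(value)
    return out

sample = "User-agent: *\nDisallow: /admin\nSitemap: https://example.com/sitemap.xml"
print(parse_robots(sample))
```

Whether the real tool extracts exactly these fields is unknown, which is the Completeness gap noted below.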
Behavior: 3/5

Annotations already declare readOnlyHint=true and openWorldHint=true, establishing that it safely fetches external resources. The description adds context that it checks 'availability' (implying HTTP status validation) and 'parsed directives' (implying content parsing), but lacks detail on error handling, rate limits, or what specific directives are extracted.

Conciseness: 5/5

Single efficient sentence with no filler. Front-loaded with the action verb 'Check', immediately followed by the specific target and scope. Zero wasted words.

Completeness: 3/5

Adequate for a simple 1-parameter read-only tool with complete schema annotations. However, lacking an output schema, the description could better hint at what 'parsed directives' entails (e.g., user-agent rules, disallow patterns) to set expectations for the return value.

Parameters: 3/5

With 100% schema description coverage for the single 'url' parameter, the schema adequately documents inputs. The description adds no parameter-specific guidance beyond what's in the schema, warranting the baseline score of 3 for high-coverage schemas.

Purpose: 4/5

The description clearly states the verb (Check) and resource (robots.txt), specifying the scope covers availability and parsed directives. It implicitly distinguishes from sibling SEO tools like seo.sitemap.check and seo.headings.check by naming the specific target file (robots.txt).

Usage Guidelines: 2/5

No guidance provided on when to use this tool versus other SEO audit tools (e.g., seo.sitemap.check for sitemaps or seo.links.analyze for link analysis). No mention of prerequisites or scenarios where robots.txt checking is particularly valuable.

seo.schema.validate (Grade: A)
Read-only · Idempotent

Validate JSON-LD schema markup for a page.

Parameters (JSON Schema)
  url (required): Target page URL to analyze
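At minimum, JSON-LD validation implies extracting `<script type="application/ld+json">` blocks and checking that they parse. The sketch below shows that syntax-level step only; whether the server also validates against Schema.org or Rich Results rules is unknown.

```python
import json
import re

# Rough sketch of syntax-level JSON-LD validation; the server's actual
# validation rules are undocumented, so this is an assumption.
LD_JSON = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL,
)

def extract_jsonld(html):
    blocks = []
    for raw in LD_JSON.findall(html):
        blocks.append(json.loads(raw))  # raises on malformed JSON-LD
    return blocks

html = '<script type="application/ld+json">{"@type": "Article"}</script>'
print(extract_jsonld(html))
```

The ambiguity between "parses as JSON" and "conforms to Schema.org" is exactly the gap flagged under Usage Guidelines below.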
Behavior: 3/5

Annotations cover safety profile (readOnlyHint, destructiveHint, idempotentHint) and openWorldHint, so description burden is lower. The description adds minimal behavioral context beyond the functional purpose: it does not disclose validation strictness, error format, or that it requires fetching external URLs (though this is implied by the 'url' parameter). No contradictions with annotations.

Conciseness: 5/5

Extremely efficient. Front-loaded with an active verb. No redundancy with tool name or structured metadata. Every word serves a distinct purpose.

Completeness: 4/5

Adequate for a single-parameter read-only validation tool with strong annotations. Lacks description of return value structure or validation criteria, but given no output schema exists and complexity is low, description is sufficient for correct invocation.

Parameters: 3/5

Schema coverage is 100% (url parameter fully described as 'Target page URL to analyze'). Per rubric, high schema coverage establishes baseline 3. Description adds no parameter syntax, format examples, or constraints beyond schema.

Purpose: 5/5

Excellent specificity: 'Validate' (verb) + 'JSON-LD schema markup' (resource/format) + 'for a page' (scope). Clearly distinguishes from sibling SEO tools like seo.meta_tags.check or seo.headings.check by specifying JSON-LD/schema validation specifically.

Usage Guidelines: 2/5

No guidance on when to use versus sibling tools (seo.meta_tags.check, seo.headings.check, etc.) or prerequisites. Does not indicate whether this validates against Schema.org, Google Rich Results, or general JSON-LD syntax. No alternatives mentioned.

seo.sitemap.check (Grade: B)
Read-only · Idempotent

Check sitemap availability and robots sitemap hints.

Parameters (JSON Schema)
  url (required): Target page URL to analyze
Behavior: 3/5

Annotations already establish this is read-only and safe (readOnlyHint=true, destructiveHint=false). The description adds value by specifying what exactly is checked (sitemap availability and robots.txt sitemap hints) but omits details about return format, error behavior, or what constitutes a 'hint'.

Conciseness: 4/5

The description is extremely concise at 7 words. It is front-loaded with the action verb. The term 'hints' is slightly vague (could be 'directives' or 'references'), preventing a 5, but the density of information is appropriate.

Completeness: 3/5

For a single-parameter read-only tool with good annotations, the description is minimally adequate. However, lacking an output schema, it should ideally hint at what information is returned (e.g., discovered sitemap URLs, status codes) rather than just stating what is checked.

Parameters: 3/5

With 100% schema description coverage (the 'url' parameter is fully documented as 'Target page URL to analyze'), the baseline is 3. The description does not add parameter-specific semantics, but given the complete schema, it does not need to.

Purpose: 4/5

The description uses a specific verb 'Check' and clearly identifies the resources: 'sitemap availability' and 'robots sitemap hints' (referring to Sitemap directives in robots.txt). It implicitly distinguishes from the sibling `seo.robots.check` by focusing specifically on sitemap-related checks rather than general robots.txt rules.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like `seo.robots.check` or `seo.headings.check`. It does not mention prerequisites (like needing a valid URL) or when not to use it.

session.create (Grade: C)

Create and start an autopilot session.

Parameters (JSON Schema)
  problems (optional): Pain points or problems to address in content
  languages (optional): Language codes for content generation
  categories (required): Content categories for the autopilot session
  article_size (optional, default: mini): Size preset for generated articles
  interval_minutes (optional): Minutes between autopilot article generation (60-10080)
  disable_competition (optional): Skip competitor analysis during generation
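A hypothetical payload check mirroring the documented constraints: `categories` is required, `interval_minutes` must fall in 60-10080, and `article_size` defaults to `mini`. The helper below is illustrative, not the server's validation.

```python
# Hypothetical client-side check for a session.create payload, mirroring
# the constraints documented in the parameter schema.
def validate_session(payload):
    if not payload.get("categories"):
        raise ValueError("categories is required")
    interval = payload.get("interval_minutes")
    if interval is not None and not 60 <= interval <= 10080:
        raise ValueError("interval_minutes must be in 60-10080")
    payload.setdefault("article_size", "mini")  # documented default
    return payload

session = validate_session({"categories": ["seo"], "interval_minutes": 120})
print(session["article_size"])
```

Note that nothing in the schema or description tells the caller how to stop or list the session afterwards, which is the Completeness gap flagged below.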
Behavior: 3/5

Annotations correctly identify this as a non-destructive write operation (readOnlyHint: false, destructiveHint: false). The description adds that it 'starts' the session, implying immediate activation. However, it fails to disclose that idempotentHint: false means duplicate calls create duplicate sessions, or explain the persistent/recurring nature of the created resource.

Conciseness: 4/5

Single sentence with zero fluff and front-loaded structure. However, it is arguably undersized for the tool's complexity (6 params creating a persistent background process), though conciseness specifically penalizes waste, not brevity.

Completeness: 2/5

Inadequate for a tool that instantiates a recurring background process. No output schema is present, yet the description doesn't explain what the tool returns (presumably a session ID) or how to manage the session later (notably absent: no `session.delete` or `session.list` in siblings). Should clarify the automation lifecycle and side effects (generates articles repeatedly).

Parameters: 3/5

Schema coverage is 100%, establishing baseline 3. The description adds no parameter context, but none is needed given the comprehensive schema. Parameters like `interval_minutes` and `article_size` clearly define the behavior without needing description redundancy.

Purpose: 3/5

States it creates an 'autopilot session', but 'autopilot' is vague jargon that doesn't clearly distinguish this recurring scheduler from the one-off sibling `article.generate`. Uses verb+resource but lacks specificity on what the session actually does (generates articles on a schedule).

Usage Guidelines: 2/5

Provides no guidance on when to use this recurring automation vs the one-off `article.generate` or scheduled tasks in `schedule.gaps`. No mention of prerequisites or lifecycle expectations.

settings.get (Grade: A)
Read-only · Idempotent

Get current agent settings.

Parameters (JSON Schema)
  (no parameters)

Behavior: 3/5

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering the safety profile. The description aligns with these annotations but adds minimal behavioral context beyond the scoping word 'current'. It does not describe the return structure or what specific settings categories are included.

Conciseness: 5/5

Extremely concise four-word description with no redundant information. Every word earns its place by conveying the action and target resource without repetition of metadata already available in the tool name.

Completeness: 4/5

Adequate for the low complexity of this operation (read-only, no inputs, single purpose). While an output schema is absent, the description sufficiently communicates the tool's function given the rich annotations covering behavioral safety.

Parameters: 4/5

Tool accepts zero parameters (empty object input schema). With no parameters requiring semantic explanation, this meets the baseline expectation for descriptions of parameter-free tools.

Purpose: 4/5

States a clear verb ('Get') and resource ('current agent settings'). While brief, it successfully identifies the operation and scope, distinguishing itself from sibling tools which focus on articles, SEO, products, and other domains rather than agent configuration.

Usage Guidelines: 2/5

Provides no guidance on when to invoke this tool versus alternatives, nor any prerequisites. No mention of whether settings should be cached, when they change, or relationship to other configuration tools.

shorts.avatar (Grade: B)

Generate an AI avatar image for shorts and return hosted avatar URL.

Parameters (JSON Schema)
  type (optional): Avatar character archetype
  gender (required): Avatar gender
  origin (required): Avatar ethnicity/appearance
  location (optional): Background setting for the avatar
  age_range (optional): Avatar age range
Behavior: 3/5

Annotations indicate mutation (readOnlyHint: false). Description adds valuable return value info ('hosted avatar URL') but omits idempotency implications (idempotentHint: false) and persistence details.

Conciseness: 5/5

Single sentence, front-loaded with the action verb. No redundancy or waste.

Completeness: 3/5

Adequate for the tool's scope given rich schema annotations, but lacks workflow context explaining how the avatar integrates with the shorts creation pipeline.

Parameters: 3/5

Schema has 100% coverage with clear descriptions. Tool description doesn't repeat parameter details, meeting baseline expectations for high-coverage schemas.

Purpose: 4/5

Clear verb 'Generate' and resource 'AI avatar image', with 'for shorts' distinguishing it from general avatar generators and sibling tools like shorts.generate.

Usage Guidelines: 2/5

Provides domain context ('for shorts') but lacks explicit when-to-use guidance, prerequisites, or comparison to alternatives like shorts.generate.

shorts.generate (Grade: A)

Generate shorts video and return poll instructions until final video URL is ready.

Parameters (JSON Schema)
  prompt (required): Video generation prompt describing scene, style, and action
  duration (required): Video duration in seconds: 5, 10, or 15
  avatar_url (required): URL of AI avatar image hosted on download.citedy.com
  resolution (optional): Video resolution
  speech_text (optional): Text the avatar will speak with lip-sync
  aspect_ratio (optional): Video aspect ratio
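The async contract the description implies (return poll instructions, then poll until the final video URL is ready) can be sketched as a client-side loop. `fetch_status` is a hypothetical stand-in for whatever status endpoint the poll instructions point to.

```python
import time

# Hedged sketch of the async polling pattern the tool description implies.
# `fetch_status` is a hypothetical stand-in; the real poll instructions
# returned by shorts.generate define the actual status call.
def wait_for_video(fetch_status, attempts=5, delay=0.01):
    for _ in range(attempts):
        status = fetch_status()
        if status.get("video_url"):
            return status["video_url"]
        time.sleep(delay)
    raise TimeoutError("video not ready after polling")

# Simulated status responses: two pending checks, then a ready URL.
responses = iter([{}, {}, {"video_url": "https://example.com/short.mp4"}])
print(wait_for_video(lambda: next(responses)))
```

Surfacing the polling contract in the description is what earns this tool its higher Behavior score below.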
Behavior: 4/5

Annotations indicate it's a write operation (readOnlyHint: false) but the description adds critical context that the tool returns polling instructions rather than the final URL immediately, indicating asynchronous behavior essential for correct invocation not captured in annotations.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise single sentence. Every clause delivers essential information about the generation action and the async polling return pattern. No redundancy or verbose filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers the critical async return behavior but lacks operational context such as the relationship to shorts.avatar (prerequisite), error handling during polling, or integration with shorts.get/shorts.publish workflows. Adequate for basic usage given good schema coverage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents all 6 parameters including constraints (e.g., avatar_url must be hosted on download.citedy.com). The description adds no parameter-specific details but doesn't need to given the complete schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool generates shorts videos and specifies the async polling behavior ('return poll instructions until final video URL is ready'). Distinguishes from siblings by domain (shorts vs articles/adaptations) but doesn't explicitly differentiate from shorts.script or shorts.merge.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus siblings like shorts.script (script generation) or shorts.avatar (avatar creation). No mention of prerequisites, such as requiring an avatar created via shorts.avatar first before using this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

shorts.get (Grade B)
Annotations: Read-only, Idempotent

Get shorts generation status/result by ID.

Parameters (JSON Schema)
- id (required): Unique identifier (UUID)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only, idempotent, and non-destructive traits. The description adds that this returns 'status/result,' implying state checking behavior, but omits polling recommendations, possible status values, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely compact single sentence with front-loaded verb. Zero redundancy, though brevity trades off against providing workflow context that would be valuable for an async status-checking tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple 1-parameter read operation with safety annotations present. However, given this likely supports an async generation workflow (shorts.generate → shorts.get), the description lacks expected return structure or state machine details that would aid invocation decisions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage (the 'id' parameter has complete documentation), the baseline is 3. The description adds minimal semantic value beyond the schema, merely noting 'by ID' which mirrors the schema structure.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a clear verb ('Get') and resource ('shorts generation status/result'), distinguishing it from sibling mutation tools like shorts.generate and shorts.publish. However, it could better clarify that this retrieves asynchronous job results versus other potential 'get' patterns.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains no guidance on when to invoke this tool (e.g., polling after shorts.generate) or when to avoid it. No mention of the async workflow or alternatives for retrieving shorts data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
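The async workflow implied by this pair (shorts.generate returns poll instructions, shorts.get is then polled by ID until the video is ready) can be sketched as below. The `call_tool` stub, the status values, and the polling cadence are assumptions for illustration, not documented server behavior.

```python
import time

def call_tool(name: str, args: dict) -> dict:
    """Stand-in for a real MCP client call. This stub simulates a job
    that reports 'processing' twice before returning a final URL."""
    call_tool.polls = getattr(call_tool, "polls", 0)
    if name == "shorts.generate":
        return {"id": "job-123", "status": "processing"}  # poll instructions
    if name == "shorts.get":
        call_tool.polls += 1
        if call_tool.polls < 3:
            return {"id": args["id"], "status": "processing"}
        return {"id": args["id"], "status": "completed",
                "video_url": "https://download.citedy.com/example.mp4"}
    raise ValueError(f"unknown tool {name}")

def generate_short(prompt: str, duration: int, avatar_url: str) -> str:
    """Kick off generation, then poll shorts.get until a video URL is ready."""
    job = call_tool("shorts.generate", {
        "prompt": prompt, "duration": duration, "avatar_url": avatar_url,
    })
    while True:
        result = call_tool("shorts.get", {"id": job["id"]})
        if result["status"] == "completed":
            return result["video_url"]
        time.sleep(0)  # real code would back off between polls

url = generate_short("sunset beach scene", 10,
                     "https://download.citedy.com/avatar.png")
```

A real client would also cap the number of polls and surface failure states, which the descriptions leave unspecified.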

shorts.merge (Grade A)

Merge 2-4 short clips, apply subtitle phrases, and return final video URL.

Parameters (JSON Schema)
- config (optional): Optional subtitle styling configuration
- phrases (required): Subtitle text for each video segment
- video_urls (required): Array of 2-4 video URLs to merge
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate write operation (readOnlyHint: false) and non-destructive (destructiveHint: false). The description adds valuable behavioral context: it creates a composite video, applies subtitle processing, and returns a new URL—clarifying the output format without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient 12-word sentence. Front-loaded with action verb 'Merge', immediately followed by constraints (2-4 clips), processing details (apply subtitle phrases), and return value (final video URL). Zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (video processing with nested styling config) and lack of output schema, the description adequately compensates by stating the return value (final video URL). The 100% schema coverage ensures parameters are self-documenting, making this complete enough for agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. The description adds conceptual meaning by reinforcing the '2-4' constraint and explaining the relationship between video_urls and phrases (applying subtitles to merged segments), which helps the agent understand parameter interaction beyond isolated definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs (Merge, apply, return) and identifies the resource (short clips). It clearly distinguishes from siblings like shorts.generate (creates new) and shorts.get (retrieves existing) by specifying the merge operation on existing clips.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through input constraints ('2-4 short clips'), suggesting use when combining existing videos. However, it lacks explicit when-not guidance or contrast with alternatives like shorts.generate, leaving the agent to infer the appropriate workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

shorts.publish (Grade A)

Publish a generated short video directly to YouTube Shorts and/or Instagram Reels. Does not require an article — derives title/description/hashtags from speech_text via LLM. Instagram Reels costs 5 credits; YouTube Shorts is free. Returns per-platform results and timings.

Parameters (JSON Schema)
- targets (required): 1-2 publish targets; each platform may appear at most once
- video_url (required): HTTPS URL of the generated video (must be on download.citedy.com or Supabase storage)
- speech_text (required): Spoken text / voiceover used for the video; the LLM derives title, description, and hashtags from it
- privacy_status (optional): YouTube privacy status (default: public)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses LLM derivation of metadata from speech_text, credit costs, and return value structure ('per-platform results and timings'). Missing rate limits, authentication requirements, and partial failure behavior for external API calls.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, zero waste. Front-loaded with core action (publish), followed by content prerequisites (no article needed, LLM derivation), then operational costs and return values. Every clause provides distinct, non-redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers a moderately complex tool (external APIs, LLM processing, costing) despite no output schema, by summarizing return values. Schema is fully documented. Minor gap: lacks idempotency, retry behavior, or error handling details for multi-platform publishing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (baseline 3). Description adds crucial semantic context: speech_text is used for LLM-derived metadata (not just 'spoken text'), video_url source constraints (must be on download.citedy.com or Supabase storage), and cost implications for the targets parameter. Adds meaningful value beyond schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Publish' with clear resource 'generated short video' and explicit platforms 'YouTube Shorts and/or Instagram Reels'. Clearly distinguishes from sibling 'shorts.generate' (creation) and 'social.publish' (general social posts vs Shorts/Reels specifically).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit cost guidance ('Instagram Reels costs 5 credits; YouTube Shorts is free') and prerequisite context ('Does not require an article'). Implicitly distinguishes from article-based workflows but lacks explicit comparison to 'social.publish' for when to choose one over the other.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
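The cost guidance the description discloses (Instagram Reels: 5 credits, YouTube Shorts: free) lends itself to a pre-flight estimate. The platform identifiers below are illustrative guesses; the server's actual enum values for targets are not shown in this listing.

```python
# Credit costs per the tool description; platform keys are hypothetical.
CREDIT_COSTS = {"youtube_shorts": 0, "instagram_reels": 5}

def estimate_publish_cost(targets: list[str]) -> int:
    """Sum the credit cost for 1-2 publish targets,
    each platform appearing at most once (per the schema)."""
    if not 1 <= len(targets) <= 2 or len(set(targets)) != len(targets):
        raise ValueError("targets must be 1-2 distinct platforms")
    return sum(CREDIT_COSTS[t] for t in targets)

cost = estimate_publish_cost(["youtube_shorts", "instagram_reels"])
```

An agent could use such an estimate to confirm available credits before a paid Instagram Reels publish.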

shorts.script (Grade B)

Generate short-form video script text (hook/educational/cta styles).

Parameters (JSON Schema)
- style (optional): Script style: hook, educational, or cta
- topic (required): Topic or subject for the video script
- duration (optional): Script length: short or long
- language (optional): Language code, e.g. en, es, de
- product_id (optional): Optional product ID to reference in script
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=false and destructiveHint=false, establishing it as a safe write operation. The description adds minimal behavioral context beyond this—it doesn't disclose that the operation is non-idempotent (per annotations), what format the script returns in, or whether it persists to storage versus returning ephemeral text.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence of seven words that immediately signals the tool's function. While efficient, extreme brevity contributes to the lack of behavioral and contextual detail expected for a multi-parameter generative tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite five parameters including UUID references and language codes, and with no output schema provided, the description omits what the tool returns (text? ID?), how it relates to `shorts.generate` in the pipeline, and expected content length or format constraints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all parameters (style, topic, duration, language, product_id) fully documented. The parenthetical style list in the description merely echoes the schema enum without adding syntax guidance or usage rationale, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (Generate) and resource (short-form video script text) plus available styles (hook/educational/cta). However, it fails to explicitly differentiate from sibling tool `shorts.generate`, which likely produces the final video asset rather than text content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus `shorts.generate` or other content tools, nor does it mention prerequisites (e.g., whether `product_id` requires an existing product) or where the output fits in the video production workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

social.publish (Grade C)

Publish, schedule, cancel, or direct-publish (as-is, no AI adaptation) social content.

Parameters (JSON Schema)
- action (required): Publish action: now, schedule, or cancel
- platform (required): Target social media platform
- accountId (required): UUID of the connected social account
- scheduledAt (optional): ISO 8601 datetime for scheduled publishing
- adaptationId (required): UUID of the social adaptation to publish
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The 'no AI adaptation' note adds valuable behavioral context beyond the annotations (which only indicate mutability). However, the description does not clarify what 'cancel' does (removes from queue vs. deletes content), whether failed publishes retry automatically, or that this operates on already-adapted content (referencing adaptationId).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While compact as a single sentence, the structure is awkward with the four-item list ('Publish, schedule, cancel, or direct-publish') and unclear parenthetical scope. The redundancy between 'Publish' and 'direct-publish' wastes cognitive load without adding clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 5-parameter multi-action tool affecting external platforms (openWorldHint=true), the description adequately covers the action types but leaves gaps in explaining the workflow (e.g., that adaptationId comes from adapt.generate), the scheduling mechanics, or error scenarios. No output schema exists, but the description appropriately focuses on inputs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description maps loosely to the action enum ('cancel' is explicit, 'direct-publish' likely maps to 'now') but does not add semantic details about the ISO 8601 format requirements, UUID patterns, or the relationship between adaptationId and the content being published.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description lists actions 'Publish, schedule, cancel, or direct-publish' which creates ambiguity with the enum values (now, schedule, cancel). It is unclear if 'Publish' and 'direct-publish' are distinct actions or if 'direct-publish' explains 'Publish', creating confusion for a multi-action tool where precise action selection is critical.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the parenthetical '(as-is, no AI adaptation)' hints at workflow context (distinguishing from tools that might adapt content), there is no explicit guidance on when to use 'now' vs 'schedule', when 'cancel' is appropriate, or how this differs from sibling publishing tools like article.publish or shorts.publish.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

webhooks.delete (Grade C)
Annotations: Destructive, Idempotent

Delete a webhook endpoint by ID.

Parameters (JSON Schema)
- id (required): Unique identifier (UUID)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare the operation as destructive and idempotent, the description adds no context about side effects (e.g., what happens to pending deliveries), reversibility, or authentication requirements beyond the structured hints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is efficient and front-loaded with the action verb, containing no redundant words. However, for a destructive operation, this brevity leaves critical behavioral context unexplained.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the destructive nature and presence of related sibling tools (webhooks.deliveries), the description should clarify impact on associated resources or delivery status. It adequately identifies the target resource but omits operational side effects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with the 'id' parameter well-documented as a UUID. The description mentions 'by ID' but adds no semantic clarification (e.g., 'obtained from webhooks.list') or format guidance beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a clear verb (Delete), resource (webhook endpoint), and identifier scope (by ID). However, it does not explicitly differentiate from sibling tools like webhooks.register or webhooks.list within the text itself.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives, prerequisites such as listing webhooks first to obtain an ID, or warnings about the permanent nature of the operation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

webhooks.deliveries (Grade B)
Annotations: Read-only, Idempotent

List webhook delivery attempts.

Parameters (JSON Schema)
- limit (optional): Max results to return, 1-100
- offset (optional): Pagination offset
- status (optional): Filter by delivery status
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations cover safety (readOnly, non-destructive), the description adds minimal behavioral context. It does not explain that results are paginated (despite offset/limit params), what 'dead_lettered' means, data retention policies, or default sorting (likely by time).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four words, zero redundancy, verb-first construction. Every element earns its place by precisely identifying operation and resource. Appropriate for high-schema-coverage tools where structured data handles details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Minimally adequate given 100% schema coverage, but gaps remain due to missing output schema. The description does not hint at return structure (array of attempt objects with timestamps, payloads, HTTP response codes), which would help agents understand the tool's utility without output schema documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the structured documentation fully carries the parameter semantics. The description adds no param-specific context, but the schema comprehensively documents limit ranges, offset purpose, and status enum values, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('List') and clear resource ('webhook delivery attempts') that implicitly distinguishes it from sibling 'webhooks.list' (likely configurations) by specifying 'delivery attempts' (execution history). However, it does not explicitly contrast with siblings or clarify the scope (e.g., time range retained).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'delivery attempts' implies a monitoring/troubleshooting use case for webhook execution history, suggesting when to use it (checking status of sent webhooks). However, it lacks explicit guidance on when to prefer this over 'webhooks.list' or prerequisites like needing a webhook ID.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

webhooks.list (Grade A)
Annotations: Read-only, Idempotent

List webhook endpoints for current agent.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, non-destructive, idempotent). Description adds scoping context ('for current agent') beyond annotations but omits behavioral details like pagination, rate limits, or return structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at 6 words. Every word earns its place: verb, resource, and scope constraint. No redundancy or noise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a zero-parameter read-only list operation with rich annotations. No output schema exists, so description needn't explain return values. Could marginally improve by noting it returns registered webhook configurations, but sufficient for selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present, establishing baseline 4. Schema coverage is 100% (vacuously true for empty schema). Description appropriately makes no parameter claims.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'List' + resource 'webhook endpoints' + scope 'for current agent'. Distinguishes from sibling 'webhooks.deliveries' by specifying 'endpoints' vs delivery logs, and from 'webhooks.register/delete' by operation type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context about scope (current agent only) but lacks explicit guidance on when to use versus siblings like 'webhooks.register' (for creation) or 'webhooks.delete' (for removal). Usage is implied by the naming convention but not stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

webhooks.register (Grade C)

Register webhook endpoint for agent events.

Parameters (JSON Schema)
- url (required): Webhook endpoint URL
- description (optional): Human-readable webhook description
- event_types (optional): Event types to subscribe to
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate non-destructive and non-idempotent behavior (idempotentHint=false is critical here—calling twice creates duplicate registrations), but description fails to disclose what registration entails: whether it returns a webhook ID/secret, if URL verification is synchronous, or whether duplicate URLs are permitted. Missing behavioral context for a stateful mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at 6 words with zero redundancy. However, brevity crosses into under-specification for a complex operation like webhook registration; a second sentence covering return values or lifecycle would improve utility without sacrificing clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a webhook registration tool with 100% input coverage but no output schema, the description inadequately covers the operational contract. It omits what the tool returns (webhook ID? verification token?), how long the registration persists, and what 'agent events' encompasses (session events? article events?). Insufficient for safe invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for url, description, and event_types. The tool description adds no additional parameter semantics, so baseline 3 is appropriate per scoring rules when schema carries full documentation burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb (Register) and resource (webhook endpoint) with scope (agent events). Distinguishes from siblings webhooks.delete, webhooks.list, and webhooks.deliveries by indicating this is a creation operation, though 'agent events' lacks specificity about what events are available.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like webhooks.list (which shows existing registrations) or webhooks.delete. Does not mention prerequisites such as URL verification requirements or authentication setup typically needed for webhook registration.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
