
Server Details

Access the Notra API for managing posts, brand identities, and integrations.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL
Repository: usenotra/notra-mcp
GitHub Stars: 0

Tool Descriptions: B

Average 3.2/5 across 19 of 19 tools scored.

Server Coherence: A

Disambiguation: 4/5

Most tools have distinct purposes targeting specific resources like brand identities, posts, integrations, and schedules, with clear CRUD operations. There is some overlap between 'generate_post' and 'generate_brand_identity', as both queue async generation, but their descriptions clarify the different domains (GitHub activity vs. website scraping).

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with snake_case, such as 'create_github_integration', 'list_brand_identities', and 'update_schedule'. This uniformity makes the tool set predictable and easy to navigate for an agent.

Tool Count: 4/5

With 19 tools, the count is slightly high but reasonable for a content generation platform covering integrations, schedules, brand identities, and posts. It includes core operations without being overly bloated, though it borders on the upper limit of typical scoping.

Completeness: 5/5

The tool set provides comprehensive coverage for the content generation domain, including CRUD operations for brand identities, posts, integrations, and schedules, along with async generation and status polling. No obvious gaps are present, enabling full lifecycle management.

Available Tools

19 tools
create_github_integration: C

Connect a GitHub repository as an integration for content generation

Parameters (JSON Schema):
- repo (required): GitHub repository name
- owner (required): GitHub repository owner (user or organization)
- token (optional): GitHub personal access token for private repos
- branch (optional): Default branch (auto-detected if not set)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'connect' for content generation but fails to specify permissions needed, whether this is a one-time setup or ongoing integration, error handling, or what happens on success (e.g., does it return an integration ID?). This leaves critical behavioral traits undocumented.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and wastes no space, earning a high score for conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete for a tool that likely creates a persistent integration. It doesn't explain the outcome (e.g., what 'connect' means operationally), error cases, or how it fits into the broader 'content generation' context, leaving significant gaps for an agent to understand its use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear parameter descriptions in the schema (e.g., 'GitHub repository name', 'GitHub personal access token for private repos'). The description adds no additional parameter semantics beyond what the schema provides, so it meets the baseline of 3 for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Connect') and resource ('GitHub repository as an integration'), specifying it's for content generation. However, it doesn't differentiate from sibling tools like 'list_integrations' or 'delete_integration' beyond the 'create' action implied by the name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'list_integrations' or 'delete_integration', nor are prerequisites or context for 'content generation' explained. The description lacks explicit when/when-not instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
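Since the server is exposed over Streamable HTTP, an invocation of this tool travels as an MCP `tools/call` JSON-RPC request. Below is a minimal sketch of building such a request body, assuming only the generic MCP wire format; the gateway URL, session handling, and any auth headers are deliberately out of scope:

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request body as used by MCP."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Connect a public repository; `token` is only required for private repos,
# and `branch` is auto-detected when omitted.
body = build_tool_call("create_github_integration", {
    "repo": "notra-mcp",
    "owner": "usenotra",
})
print(body)
```

This also illustrates why the Behavior score matters: nothing in the wire format tells the agent what a successful response contains, so the description is the only place that could say so.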

create_schedule: C

Create a content generation schedule using a cron-style daily, weekly, or monthly trigger

Parameters (JSON Schema):
- name (required): Schedule name (1-120 characters)
- enabled (required): Whether the schedule is active
- targets (required): Repositories the schedule should target
- outputType (required): Type of content to generate
- sourceType (required): Schedule trigger type
- autoPublish (optional): Whether to auto-publish generated content (default false)
- outputConfig (optional): Optional publishing and voice settings
- sourceConfig (required): Cron trigger configuration
- lookbackWindow (optional): Time window for gathering data before generation (default: last_7_days)

Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool creates a schedule but doesn't mention required permissions, whether it's idempotent, what happens on conflicts, rate limits, or the response format. For a creation tool with complex parameters, this leaves significant behavioral gaps.

Conciseness: 5/5

The description is a single, efficient sentence that communicates the core purpose without unnecessary words. It's appropriately sized for the tool's complexity and gets straight to the point about what the tool does.

Completeness: 2/5

For a creation tool with 9 parameters (6 required), no annotations, and no output schema, the description is insufficient. It doesn't address what the tool returns, error conditions, or how it interacts with sibling tools. The 100% schema coverage helps, but the description alone doesn't provide enough context for effective tool selection and invocation.

Parameters: 3/5

Schema description coverage is 100%, providing detailed documentation for all 9 parameters. The description adds minimal value beyond the schema, mentioning 'cron-style daily, weekly, or monthly trigger' which aligns with the 'frequency' enum in sourceConfig.cron but doesn't elaborate further on parameter relationships or usage patterns.

Purpose: 4/5

The description clearly states the action ('Create') and resource ('content generation schedule') with specific trigger types ('cron-style daily, weekly, or monthly'). It doesn't explicitly differentiate from sibling tools like 'create_github_integration' or 'update_schedule', but the purpose is well-defined and not tautological.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like 'update_schedule' or 'generate_post'. It mentions the trigger types but doesn't specify prerequisites, exclusions, or appropriate contexts for scheduling content generation versus other creation tools.
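To make the parameter list concrete, here is a hedged sketch that assembles a create_schedule argument payload. The nested sourceConfig.cron shape with a `frequency` enum is inferred from the review text, and the 'changelog' and 'cron' enum values are purely hypothetical placeholders, not documented values:

```python
def make_schedule_args(name, targets, frequency="weekly", auto_publish=False):
    """Assemble an arguments dict for create_schedule.

    The sourceConfig.cron shape (with a `frequency` enum) is an assumption
    based on the scorecard's mention of it, not a published contract.
    """
    if not 1 <= len(name) <= 120:  # schema constraint: 1-120 characters
        raise ValueError("name must be 1-120 characters")
    if frequency not in ("daily", "weekly", "monthly"):
        raise ValueError("unsupported frequency")
    return {
        "name": name,
        "enabled": True,
        "targets": targets,
        "outputType": "changelog",  # hypothetical enum value
        "sourceType": "cron",       # hypothetical enum value
        "autoPublish": auto_publish,
        "sourceConfig": {"cron": {"frequency": frequency}},
        "lookbackWindow": "last_7_days",  # the documented default
    }

args = make_schedule_args("Weekly changelog", targets=["usenotra/notra-mcp"])
```

An agent would have to guess most of these values from the schema alone, which is exactly the gap the Completeness and Usage Guidelines scores flag.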

delete_brand_identity: A

Delete a brand identity. Returns any schedules or events that were disabled as a result.

Parameters (JSON Schema):
- brandIdentityId (required): The brand identity ID to delete

Behavior: 3/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds value by mentioning the return of 'schedules or events that were disabled as a result', which hints at side effects. However, it lacks details on permissions needed, error conditions, or confirmation steps, leaving gaps for a destructive operation.

Conciseness: 5/5

The description is extremely concise with two sentences that are front-loaded and waste no words. The first sentence states the core action, and the second adds critical behavioral context about side effects, making it efficient and well-structured.

Completeness: 3/5

Given the tool's destructive nature and lack of annotations or output schema, the description is minimally complete. It covers the basic action and a key side effect but omits details like error handling, return format specifics, or safety warnings. This is adequate for a simple deletion tool but could be more comprehensive.

Parameters: 3/5

Schema description coverage is 100%, with the single parameter 'brandIdentityId' documented in the schema. The description does not add any semantic details beyond what the schema provides, such as format examples or validation rules. Baseline score of 3 is appropriate as the schema handles parameter documentation adequately.

Purpose: 5/5

The description clearly states the verb 'Delete' and the resource 'brand identity', making the purpose specific and unambiguous. It distinguishes this tool from sibling tools like 'update_brand_identity' or 'get_brand_identity' by focusing on deletion rather than modification or retrieval.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives, such as 'update_brand_identity' for modifications or 'list_brand_identities' for viewing. It also lacks prerequisites, warnings about irreversible actions, or context for when deletion is appropriate, leaving usage decisions unclear.

delete_integration: B

Delete a GitHub or Linear integration. Returns any schedules or events that were disabled as a result.

Parameters (JSON Schema):
- integrationId (required): The integration ID to delete

Behavior: 3/5

With no annotations, the description carries the full burden. It discloses that deletion disables associated schedules or events and returns them, which is valuable behavioral context. However, it lacks details on permissions needed, whether deletion is reversible, error conditions, or rate limits, leaving gaps for a destructive operation.

Conciseness: 5/5

The description is compact: it front-loads the action and resource, then adds outcome details in a second sentence. Every word earns its place, with no redundancy or fluff, making it highly concise and well-structured for quick comprehension.

Completeness: 3/5

Given the tool's complexity as a destructive operation with no annotations or output schema, the description is moderately complete. It covers the basic action and return behavior but misses critical context like error handling, side effects beyond disabled items, or integration with sibling tools, leaving room for improvement.

Parameters: 3/5

Schema description coverage is 100%, with the parameter 'integrationId' fully documented in the schema. The description adds no additional meaning about the parameter, such as format examples or where to find the ID, so it meets the baseline of 3 without compensating beyond the schema.

Purpose: 4/5

The description clearly states the action ('Delete') and resource ('GitHub or Linear integration'), making the purpose immediately understandable. It distinguishes from some siblings like 'delete_schedule' or 'delete_post' by specifying the integration type, though it doesn't explicitly contrast with 'list_integrations' or 'create_github_integration'.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives like 'list_integrations' for checking existing ones or 'create_github_integration' for setup. The description mentions the outcome but doesn't specify prerequisites, such as needing an existing integration ID, or warn against misuse.

delete_post: C

Delete a post by its ID

Parameters (JSON Schema):
- postId (required): The post ID to delete

Behavior: 2/5

No annotations are provided, so the description carries the full burden. It states the tool deletes a post, implying a destructive mutation, but lacks critical behavioral details: whether deletion is permanent or reversible, required permissions, error handling, or confirmation prompts. This is inadequate for a destructive tool with zero annotation coverage.

Conciseness: 5/5

The description is a single, efficient sentence with zero waste: it directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded for a simple tool.

Completeness: 2/5

Given the tool's destructive nature, lack of annotations, and no output schema, the description is incomplete. It doesn't cover behavioral risks, return values, or error cases, leaving significant gaps for safe agent invocation.

Parameters: 3/5

Schema description coverage is 100%, with the parameter 'postId' documented as 'The post ID to delete'. The description adds no additional meaning beyond this, such as format examples or sourcing guidance. Baseline 3 is appropriate since the schema does the heavy lifting.

Purpose: 4/5

The description clearly states the action ('Delete') and target resource ('a post'), providing specific verb+resource pairing. However, it doesn't differentiate from sibling tools like 'delete_brand_identity' or 'delete_schedule' beyond the resource type, missing explicit sibling distinction.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing post), exclusions, or comparisons to siblings like 'list_posts' or 'update_post' for context.

delete_schedule: C

Delete a content generation schedule by its ID

Parameters (JSON Schema):
- scheduleId (required): The schedule ID to delete

Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates this is a destructive operation ('Delete'), but doesn't specify whether deletion is permanent, requires specific permissions, has side effects, or what happens on success or failure. This leaves significant gaps for a mutation tool.

Conciseness: 5/5

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, making it easy to understand at a glance.

Completeness: 2/5

For a destructive operation with no annotations and no output schema, the description is insufficient. It doesn't explain what 'delete' entails (e.g., permanence, confirmation), error conditions, or return values. Given the complexity and lack of structured data, more behavioral context is needed.

Parameters: 3/5

The schema description coverage is 100%, with the single parameter 'scheduleId' clearly documented in the schema. The description adds minimal value beyond the schema by mentioning 'by its ID', which is already implied. This meets the baseline for high schema coverage.

Purpose: 4/5

The description clearly states the action ('Delete') and target resource ('a content generation schedule by its ID'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'delete_brand_identity' or 'delete_post' beyond specifying the resource type, which is why it doesn't reach a perfect score.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like 'delete_brand_identity' or 'delete_post', nor does it mention prerequisites such as needing an existing schedule ID. It simply states what the tool does without contextual usage information.

generate_brand_identity: A

Queue async brand identity generation from a website URL. Notra will scrape the site and extract brand info. Use get_brand_identity_generation_status to poll for completion.

Parameters (JSON Schema):
- name (optional): Name for the brand identity (1-120 characters)
- websiteUrl (required): Website URL to analyze for brand identity extraction

Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: the operation is async (queued), involves web scraping by Notra, and requires polling a separate tool for completion. However, it doesn't mention potential side effects (e.g., data storage), error handling, rate limits, or authentication needs, which are gaps for a tool with no annotation coverage.

Conciseness: 5/5

The description is highly concise and well-structured: it front-loads the core action ('Queue async brand identity generation') and follows with essential context (scraping, polling). Every sentence earns its place by providing critical information without redundancy or fluff.

Completeness: 3/5

Given the complexity (async operation with scraping), no annotations, and no output schema, the description is moderately complete. It covers the async nature and polling workflow but lacks details on return values, error cases, or what 'brand info' entails. For a tool with no structured safety or output info, more behavioral context would be needed for higher completeness.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents both parameters ('name' and 'websiteUrl') with their types and constraints. The description adds no additional parameter semantics beyond implying 'websiteUrl' is the primary input for scraping. This meets the baseline of 3 when schema coverage is high, but doesn't add extra value like format examples or usage tips.

Purpose: 4/5

The description clearly states the tool's purpose: 'Queue async brand identity generation from a website URL.' It specifies the verb ('Queue async generation'), resource ('brand identity'), and source ('website URL'). However, it doesn't explicitly differentiate from siblings like 'generate_post' or 'get_brand_identity', which would be needed for a perfect score.

Usage Guidelines: 4/5

The description provides clear context for when to use this tool: for initiating brand identity generation from a website URL. It also explicitly mentions a companion tool: 'Use get_brand_identity_generation_status to poll for completion.' This gives good guidance on workflow, though it doesn't specify when NOT to use it or compare to other siblings like 'create' or 'update' tools.
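The queue-then-poll workflow the description prescribes can be sketched as below. `call_tool` is a stand-in for a real MCP client invocation, and the `jobId`, `status`, and `result` field names are assumptions, since no output schema is published for either tool:

```python
import time

def wait_for_brand_identity(call_tool, website_url, poll_interval=5.0, timeout=300.0):
    """Queue async brand identity generation, then poll until it completes.

    `call_tool(name, arguments)` is a placeholder for a real MCP client call;
    the status values ("pending", "completed", "failed") are assumed, not
    documented by the server.
    """
    job = call_tool("generate_brand_identity", {"websiteUrl": website_url})
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = call_tool(
            "get_brand_identity_generation_status", {"jobId": job["jobId"]}
        )
        if status["status"] == "completed":
            return status["result"]
        if status["status"] == "failed":
            raise RuntimeError(f"generation failed: {status}")
        time.sleep(poll_interval)
    raise TimeoutError("brand identity generation did not finish in time")
```

Every name inside the payloads here had to be guessed, which is the practical cost of the missing output schema the Completeness score points at.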

generate_postAInspect

Queue an async post generation job. Notra will analyze your GitHub activity and generate content. Use get_post_generation_status to poll for completion.

ParametersJSON Schema
NameRequiredDescriptionDefault
githubNoGitHub repositories to analyze
dataPointsNoTypes of data to include in generation
contentTypeYesType of content to generate
brandVoiceIdNoBrand voice ID to use for generation
integrationsNoIntegration IDs to use for generation
repositoryIdsNoRepository IDs to include. Deprecated; prefer integrations.github.
selectedItemsNoSpecific items to include in generation
lookbackWindowNoTime window for gathering data (default: last_7_days)
brandIdentityIdNoBrand identity ID to use
linearIntegrationIdsNoLinear integration IDs to include. Deprecated; prefer integrations.linear.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: the operation is async ('queue an async post generation job'), requires polling for completion, and involves analyzing GitHub activity. However, it lacks details on permissions needed, rate limits, error handling, or what the generated content looks like. For a complex tool with 10 parameters and no annotations, this is a moderate but insufficient disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and front-loaded: two sentences that directly state the tool's purpose and usage. Every word earns its place, with no redundancy or fluff. It efficiently communicates the core functionality and workflow in minimal text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (10 parameters, nested objects, no output schema, no annotations), the description is incomplete. It covers the async nature and basic purpose but lacks details on output format, error conditions, authentication needs, or how parameters like 'brandVoiceId' affect generation. For a tool with rich schema but no other structured context, the description should do more to guide effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no parameter-specific information beyond implying GitHub activity analysis. It doesn't explain parameter relationships (e.g., how 'github' interacts with 'integrations') or usage nuances. Given high schema coverage, the baseline is 3, as the description doesn't add meaningful semantic value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Queue an async post generation job. Notra will analyze your GitHub activity and generate content.' It specifies the verb ('queue'), resource ('post generation job'), and scope ('analyze GitHub activity'). However, it doesn't explicitly differentiate from sibling tools like 'generate_brand_identity' or 'update_post', which would be needed for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage: it's for generating content from GitHub activity, and it explicitly mentions 'Use get_post_generation_status to poll for completion,' which is a sibling tool. This gives good guidance on the async workflow. However, it doesn't specify when NOT to use this tool (e.g., vs. 'update_post' for editing existing posts) or mention alternatives, preventing a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
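The queue-then-poll workflow this tool implies can be sketched as follows. This is a minimal illustration, not Notra's actual client: the `call_tool` transport is a mock, and the job ID and status values (`"pending"`, `"completed"`, `"failed"`) are assumptions, since no output schema or annotations document the real response shape.

```python
import time

# Mock transport standing in for an MCP client's tool-call method.
# Notra's real response shapes are undocumented here, so the job ID
# and the status values ("pending"/"completed"/"failed") are assumptions.
_jobs = {}

def call_tool(name, arguments):
    if name == "generate_post":
        job_id = "job_123"
        _jobs[job_id] = iter(["pending", "pending", "completed"])
        return {"jobId": job_id}
    if name == "get_post_generation_status":
        return {"status": next(_jobs[arguments["jobId"]]), "events": []}
    raise ValueError(f"unknown tool: {name}")

def generate_and_wait(arguments, poll_interval=0.01, timeout=5.0):
    """Queue an async post generation job, then poll until it settles."""
    job_id = call_tool("generate_post", arguments)["jobId"]
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = call_tool("get_post_generation_status", {"jobId": job_id})
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")

result = generate_and_wait({"brandIdentityId": "bi_1"})
```

The point of the sketch is the terminal-state check: an agent needs to know which status values mean "stop polling", which is exactly the behavioral detail the descriptions omit.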

get_brand_identity (B)

Get a single brand identity by its ID, including tone, audience, and language settings

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| brandIdentityId | Yes | The brand identity ID to retrieve | |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates this is a read operation ('Get') but doesn't mention authentication requirements, rate limits, error conditions, or what happens if the ID doesn't exist. For a retrieval tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action ('Get a single brand identity by its ID') and adds useful detail about included settings. Every word earns its place with no redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no nested objects) and high schema coverage, the description is adequate but minimal. However, with no output schema and no annotations, it should ideally provide more context about return values or behavioral constraints to be fully complete for agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'brandIdentityId' fully documented in the schema. The description adds no additional parameter details beyond implying retrieval by ID, which the schema already covers. This meets the baseline of 3 when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('a single brand identity by its ID'), and specifies what data is included ('tone, audience, and language settings'). However, it doesn't explicitly distinguish this from the sibling tool 'list_brand_identities' beyond the singular vs. plural naming, which is why it doesn't achieve a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'list_brand_identities' or 'generate_brand_identity'. It mentions retrieving by ID but doesn't specify prerequisites (e.g., needing a valid ID) or exclusions, leaving the agent to infer usage from context alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_brand_identity_generation_status (A)

Check the status of an async brand identity generation job

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| jobId | Yes | The generation job ID to check | |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but only states the basic action without behavioral details. It lacks information on permissions needed, rate limits, error conditions (e.g., invalid jobId), or what the status response includes (e.g., pending, completed, failed). This is a significant gap for a tool that likely returns structured status data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Check the status') without unnecessary words. Every part of the sentence contributes directly to understanding the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete. It does not explain what the tool returns (e.g., status values, timestamps, error messages), which is critical for an agent to interpret results. For a status-checking tool with one parameter, this leaves significant gaps in usability.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'jobId' documented as 'The generation job ID to check'. The description does not add meaning beyond this, such as format examples or source of the jobId. Baseline 3 is appropriate as the schema adequately covers the parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check the status') and target resource ('async brand identity generation job'), distinguishing it from siblings like 'get_brand_identity' (which retrieves the identity itself) and 'generate_brand_identity' (which initiates generation). It uses precise terminology that matches the tool's name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'async brand identity generation job', suggesting it should be used after initiating such a job (e.g., via 'generate_brand_identity'). However, it does not explicitly name alternatives or state when not to use it, such as for checking post generation status (handled by sibling 'get_post_generation_status').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_post (B)

Get a single post by its ID, including full content in HTML and markdown

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| postId | Yes | The post ID to retrieve | |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden but only states what the tool does, not behavioral traits. It doesn't disclose error handling (e.g., invalid ID), authentication needs, rate limits, or whether it's read-only (implied but not explicit). This is inadequate for a tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste—front-loaded with the core action and includes essential details (ID retrieval, content formats). Every word earns its place without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read operation with 1 parameter and 100% schema coverage, the description is minimally adequate but lacks output details (no schema provided) and behavioral context. It covers the basic purpose but doesn't fully compensate for missing annotations or output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'postId' fully documented in the schema. The description adds no additional parameter semantics beyond implying retrieval by ID, so it meets the baseline of 3 where the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('a single post'), specifying retrieval by ID with content formats (HTML and markdown). It distinguishes from sibling 'list_posts' by focusing on a single item, but doesn't explicitly contrast with 'get_post_generation_status' or 'update_post'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing a specific post's full content, but provides no explicit guidance on when to use this versus alternatives like 'list_posts' for multiple posts or 'get_post_generation_status' for status checks. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_post_generation_status (A)

Check the status of an async post generation job. Returns job status and event log.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| jobId | Yes | The generation job ID to check | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool returns 'job status and event log', which adds useful behavioral context beyond the input schema. However, it doesn't mention other traits like whether it's idempotent, has rate limits, requires specific permissions, or how it handles invalid job IDs. For a status-checking tool with no annotations, this is adequate but leaves gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences that are front-loaded and efficient. The first sentence states the purpose, and the second adds behavioral context about the return value. There is no wasted language, and every sentence earns its place by providing essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (status checking for async jobs), no annotations, and no output schema, the description is partially complete. It covers the purpose and return types ('job status and event log'), but lacks details on error handling, response structure, or integration with sibling tools like 'generate_post'. For a tool with no output schema, more detail on return values would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'jobId' parameter fully documented in the schema. The description doesn't add any parameter-specific details beyond what the schema provides (e.g., format examples or constraints). According to the rules, with high schema coverage (>80%), the baseline is 3 even without param info in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Check the status of an async post generation job.' It specifies the verb ('check') and resource ('status of an async post generation job'), distinguishing it from siblings like 'generate_post' or 'get_post'. However, it doesn't explicitly differentiate from 'get_brand_identity_generation_status', which is a similar status-checking tool for a different resource.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning 'async post generation job', suggesting it should be used after initiating such a job (e.g., via 'generate_post'). However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'get_post' (for retrieving completed posts) or 'get_brand_identity_generation_status' (for checking status of brand identity jobs). No exclusions or prerequisites are stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_brand_identities (B)

List all brand identities configured for your organization

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states it's a list operation but doesn't mention whether it returns all identities at once, uses pagination, requires specific permissions, or has rate limits. This leaves significant gaps for a tool that presumably accesses organizational data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It's appropriately sized for a simple list operation and front-loads the essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema), the description is minimally adequate. However, with no annotations and no output schema, it should ideally mention what the return format looks like (e.g., list of objects with IDs/names) or any behavioral constraints. The current description meets basic requirements but leaves room for improvement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the schema already fully documents the lack of inputs. The description appropriately doesn't mention parameters, which aligns with the schema. Baseline for 0 parameters is 4, as there's nothing to compensate for.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List all') and resource ('brand identities configured for your organization'), providing a specific verb+resource combination. However, it doesn't differentiate from sibling tools like 'get_brand_identity' or 'generate_brand_identity', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_brand_identity' (for retrieving a specific identity) or 'generate_brand_identity' (for creating new ones). There's no mention of prerequisites, context, or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_integrations (B)

List all connected integrations (GitHub, Slack, Linear) for your organization

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool lists integrations but doesn't describe return format (e.g., JSON array), pagination, error handling, or authentication requirements. For a tool with zero annotation coverage, this leaves significant gaps in understanding how it behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key action ('List all connected integrations') and provides essential context (examples and scope). There is no wasted verbiage or redundancy, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what the output looks like (e.g., list format, fields), error conditions, or operational constraints. For a tool that returns data, this omission hinders the agent's ability to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately adds no parameter details, focusing instead on the tool's purpose. This meets the baseline for zero-parameter tools, though it doesn't exceed expectations by explaining output semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List all connected integrations') and specifies the resource ('integrations (GitHub, Slack, Linear) for your organization'). It distinguishes from siblings like 'delete_integration' by focusing on listing rather than deletion. However, it doesn't explicitly differentiate from other list tools (e.g., 'list_brand_identities'), which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing admin access), exclusions (e.g., not for filtering), or compare it to other list tools (e.g., 'list_brand_identities'). The agent must infer usage from context alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_posts (C)

List posts from Notra with optional filters for sorting, pagination, status, content type, and brand identity

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| page | No | Page number (default 1) | |
| sort | No | Sort by creation date | |
| limit | No | Items per page (1-100, default 10) | |
| status | No | Filter by status using a comma-separated list | |
| contentType | No | Filter by content type using a comma-separated list | |
| brandIdentityId | No | Filter by brand identity ID using a comma-separated list | |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but only states that it's a list operation with filters. It doesn't disclose critical behaviors: whether this is a read-only operation (implied but not stated), how pagination actually works (it is mentioned only as a filter option), rate limits, authentication needs, or what the output format looks like (especially problematic without an output schema).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('List posts from Notra') followed by the optional features. There's no wasted verbiage, though it could be slightly more structured by separating core function from filter details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a list tool with 6 parameters and no output schema, the description is inadequate. It doesn't explain the return format (e.g., list of post objects, total count), pagination behavior (e.g., how 'page' and 'limit' interact), or error conditions. Without annotations or an output schema, agents lack essential context for proper tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 6 parameters. The description adds minimal value by listing the filter categories (sorting, pagination, status, content type, brand identity) but doesn't provide additional context beyond what's in the schema descriptions. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('posts from Notra'), making the purpose immediately understandable. However, it doesn't distinguish this tool from sibling tools like 'get_post' or 'list_brand_identities' beyond mentioning the resource type, which keeps it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_post' (for single posts) or 'list_brand_identities' (for other resources). It mentions optional filters but doesn't explain when filtering is appropriate or what the default behavior is without filters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
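The interaction the parameter table leaves implicit (comma-separated multi-value filters, a clamped limit range) can be sketched with a small argument builder. The helper itself is hypothetical; only the filter format and the 1-100 limit range come from the documented parameters.

```python
def build_list_posts_args(page=1, limit=10, sort=None, status=None,
                          content_type=None, brand_identity_ids=None):
    """Assemble an arguments dict for the list_posts tool.

    The comma-separated filter format and the 1-100 limit range come
    straight from the parameter table; the clamping here is defensive,
    since the description doesn't say how out-of-range values are handled.
    """
    args = {"page": page, "limit": max(1, min(100, limit))}
    if sort is not None:
        args["sort"] = sort
    # Multi-value filters are serialized as comma-separated strings.
    for key, values in (("status", status),
                        ("contentType", content_type),
                        ("brandIdentityId", brand_identity_ids)):
        if values:
            args[key] = ",".join(values)
    return args

args = build_list_posts_args(limit=250, status=["draft", "published"],
                             brand_identity_ids=["bi_1"])
# limit is clamped to 100; the status list becomes "draft,published"
```

A description that spelled out exactly this serialization and clamping behavior would close most of the Parameters and Completeness gaps noted above.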

list_schedules (B)

List scheduled content generation jobs, optionally filtered by repository IDs

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| repositoryIds | No | Only return schedules targeting these repository IDs | |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the optional filtering capability but doesn't describe important behavioral traits like whether this is a read-only operation (implied but not stated), what the return format looks like, pagination behavior, error conditions, or rate limits. For a list operation with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, consisting of a single sentence that communicates the core purpose and key capability. Every word earns its place, with no redundant information or unnecessary elaboration. The structure immediately conveys what the tool does and its optional filtering feature.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (list operation with optional filtering), no annotations, and no output schema, the description is minimally adequate but incomplete. It covers the basic purpose and filtering capability but lacks information about return values, error handling, and operational constraints. The description should ideally provide more context about what information is returned in the list and any limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds minimal value beyond what the input schema provides. It mentions optional filtering by repository IDs, which aligns with the single 'repositoryIds' parameter documented in the schema with 100% coverage. The description doesn't provide additional context about parameter usage, format expectations, or examples. With complete schema documentation, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('scheduled content generation jobs'), making the purpose immediately understandable. It distinguishes this tool from siblings like 'create_schedule' or 'delete_schedule' by focusing on retrieval rather than modification. However, it doesn't explicitly differentiate from 'list_posts' or 'list_brand_identities' in terms of what type of content is being scheduled.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance by mentioning optional filtering by repository IDs, suggesting this tool is for viewing schedules rather than creating or deleting them. However, it doesn't explicitly state when to use this tool versus alternatives like 'list_posts' or 'list_brand_identities', nor does it provide exclusion criteria or prerequisites for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_brand_identity (C)

Update a brand identity's settings including name, tone, audience, language, and more

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| name | No | Brand identity name (1-120 characters) | |
| audience | No | Target audience description (min 10 chars) | |
| language | No | Content language | |
| isDefault | No | Set as default brand identity | |
| customTone | No | Custom tone description | |
| websiteUrl | No | Website URL | |
| companyName | No | Company name | |
| toneProfile | No | Tone profile preset | |
| brandIdentityId | Yes | The brand identity ID to update | |
| companyDescription | No | Company description (min 10 chars) | |
| customInstructions | No | Custom instructions for content generation | |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states this is an update operation but doesn't mention whether it requires specific permissions, whether changes are reversible, what happens to unspecified fields (partial vs. full updates), rate limits, or what the response looks like. For a mutation tool with 11 parameters and no annotation coverage, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose. It could be slightly more structured by explicitly mentioning the required 'brandIdentityId' parameter, but overall it's appropriately sized with zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with 11 parameters, no annotations, and no output schema, the description is incomplete. It doesn't address behavioral aspects (permissions, side effects, response format) or provide usage context. The high parameter count and mutation nature demand more guidance than what's provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 11 parameters thoroughly. The description lists some example fields (name, tone, audience, language) but doesn't add meaningful semantics beyond what's in the schema descriptions. The 'and more' hint is vague. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Update') and resource ('brand identity's settings'), and lists specific fields that can be updated (name, tone, audience, language, and more). However, it doesn't explicitly differentiate this tool from its sibling 'update_post' or 'update_schedule', which are also update operations on different resources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing brand identity ID), when not to use it (e.g., for creating new brand identities), or refer to sibling tools like 'generate_brand_identity' for creation or 'get_brand_identity' for retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_post (Grade: C)

Update a post's title, markdown content, or publication status

Parameters

| Name | Required | Description |
| --- | --- | --- |
| slug | No | New URL slug (lowercase kebab-case) |
| title | No | New title (1-120 characters) |
| postId | Yes | The post ID to update |
| status | No | Set status to draft or published |
| markdown | No | New markdown content |
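A hedged sketch of an `update_post` call follows, using the constraints stated in the parameter table: the kebab-case check and the example values are illustrative assumptions, since the tool description itself does not specify a slug pattern beyond "lowercase kebab-case".

```python
import re

# Hypothetical arguments payload for the update_post tool.
# Only postId is required; it is not documented whether omitted
# fields are preserved or reset, so send only what should change.
args = {
    "postId": "post_456",                   # required
    "title": "Shipping Faster with Notra",  # 1-120 characters
    "slug": "shipping-faster-with-notra",   # lowercase kebab-case
    "status": "published",                  # "draft" or "published"
}

# Client-side checks mirroring the documented constraints.
assert 1 <= len(args["title"]) <= 120
assert re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", args["slug"])
assert args["status"] in {"draft", "published"}
```

The regex is one reasonable reading of "lowercase kebab-case": lowercase alphanumeric segments joined by single hyphens.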
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool performs an update operation but lacks critical details: whether it requires authentication, if changes are reversible, what happens to unspecified fields (e.g., are they preserved or reset?), or error conditions. This is inadequate for a mutation tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action and key updatable fields without unnecessary words. Every element earns its place, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a mutation tool with no annotations and no output schema, the description is incomplete. It fails to address behavioral aspects like permissions, side effects, or response format, leaving significant gaps for an agent to operate safely and effectively. The high schema coverage doesn't compensate for these missing contextual elements.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters (postId, slug, title, status, markdown). The description adds minimal value by listing updatable fields ('title, markdown content, or publication status'), which aligns with the schema but doesn't provide additional semantic context beyond what's already in the structured data.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Update') and resource ('a post') with specific updatable fields ('title, markdown content, or publication status'), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'update_brand_identity' or 'update_schedule', which share the same verb pattern but target different resources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing post ID), exclusions (e.g., what fields cannot be updated), or comparisons to sibling tools like 'delete_post' or 'generate_post', leaving the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_schedule (Grade: B)

Update an existing content generation schedule

Parameters

| Name | Required | Description |
| --- | --- | --- |
| name | Yes | Schedule name (1-120 characters) |
| enabled | Yes | Whether the schedule is active |
| targets | Yes | Repositories the schedule should target |
| outputType | Yes | Type of content to generate |
| scheduleId | Yes | The schedule ID to update |
| sourceType | Yes | Schedule trigger type |
| autoPublish | No | Whether to auto-publish generated content (default false) |
| outputConfig | No | Optional publishing and voice settings |
| sourceConfig | Yes | Cron trigger configuration |
| lookbackWindow | No | Time window for gathering data before generation (default: last_7_days) |
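Since `update_schedule` requires seven of its ten parameters, a full payload sketch is useful. The nested shapes below (the cron expression and repository list) and the enum-like values are illustrative assumptions; the table only names the fields, not their exact formats.

```python
# Hypothetical arguments payload for the update_schedule tool.
# Required per the table: name, enabled, targets, outputType,
# scheduleId, sourceType, sourceConfig.
args = {
    "scheduleId": "sched_789",             # required: the schedule to update
    "name": "Weekly changelog digest",     # 1-120 characters
    "enabled": True,
    "targets": ["usenotra/notra-mcp"],     # repositories to target (assumed shape)
    "outputType": "blog_post",             # assumed value
    "sourceType": "cron",                  # schedule trigger type
    "sourceConfig": {"cron": "0 9 * * 1"}, # cron trigger configuration (assumed shape)
    "autoPublish": False,                  # default false
    "lookbackWindow": "last_7_days",       # documented default
}

# Verify all documented required fields are present before calling.
required = {"scheduleId", "name", "enabled", "targets",
            "outputType", "sourceType", "sourceConfig"}
assert required <= args.keys()
assert 1 <= len(args["name"]) <= 120
```

Unlike the other two update tools, most fields here are required, so a partial update likely still needs the full current configuration supplied.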
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states this is an update operation, implying mutation, but provides no information about permissions required, whether changes are reversible, error handling, or what happens to unspecified fields during partial updates. For a mutation tool with 10 parameters and complex nested objects, this is a significant gap in behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized for a tool with good schema documentation, and every word earns its place by clearly communicating the core function. No structural issues or wasted verbiage are present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex mutation tool with 10 parameters, nested objects, no annotations, and no output schema, the description is insufficient. It doesn't address behavioral aspects like what happens during updates, error conditions, or response format. While the schema provides parameter documentation, the description fails to provide the necessary context about how the tool behaves when invoked, making it incomplete for agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, meaning all parameters are documented in the schema itself. The description adds no specific parameter information beyond the generic 'update' context. It doesn't explain relationships between parameters, dependencies, or provide examples of valid configurations. With complete schema coverage, the baseline score of 3 is appropriate as the description doesn't add value beyond what's already in structured data.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('update') and resource ('existing content generation schedule'), making the purpose unambiguous. It distinguishes from sibling tools like 'create_schedule' and 'delete_schedule' by specifying it's for updating existing schedules rather than creating or deleting them. However, it doesn't specify what aspects of the schedule can be updated beyond the generic term.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'existing content generation schedule,' suggesting it should be used when modifying rather than creating schedules. However, it provides no explicit guidance on when to use this versus alternatives like 'create_schedule' or 'delete_schedule,' nor does it mention prerequisites like needing a valid schedule ID. The context is clear but lacks explicit alternatives or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
