SendIt
Server Details
AI-native social media publishing to LinkedIn, Instagram, Threads, TikTok, and X.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 148 of 148 tools scored. Lowest: 2.4/5.
Many tools have overlapping purposes, such as multiple content generation and scheduling tools (generate_content, publish_ai, schedule_content, schedule_content_advanced, bulk_schedule). The large number of tools (148) makes it difficult for an agent to distinguish between them, even with good individual descriptions.
Most tools follow a consistent verb_noun pattern (e.g., analyze_project, approve_post, delete_scheduled_post). There are minor deviations like 'autopilot_approve_plan' and 'connect_platform', but overall naming is predictable and readable.
148 tools is excessively high for a social media management server. This indicates feature bloat and makes the tool surface overwhelming. A well-scoped server typically has 3-15 tools; this is far beyond that.
The tool set covers nearly every aspect of social media management: publishing, scheduling, analytics, AI generation, workflows, ad campaigns, library management, and more. There are no obvious gaps for the stated purpose.
Available Tools
148 tools

analyze_project · Analyze Project · Grade C
Analyze a project for audience, positioning, differentiators, content pillars, and risks.
| Name | Required | Description | Default |
|---|---|---|---|
| team_id | No | Optional team ID or slug. Ignored when using a team-scoped API key. | |
| projectId | Yes | | |
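To make the call shape concrete, here is a minimal sketch of the tools/call request an MCP client might issue for this tool, assuming standard JSON-RPC framing; the project ID is a placeholder, not a real value. Later examples below show only the params portion of the request.

```typescript
// Hypothetical tools/call request for analyze_project.
// "proj_123" is a placeholder ID; team_id is omitted, so the API key's scope applies.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "analyze_project",
    arguments: { projectId: "proj_123" },
  },
};
```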
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite annotations (readOnlyHint=false, openWorldHint=true), the description omits behavioral traits like whether it creates a record, requires permissions, or has side effects. The description adds no extra context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence, front-loaded with the action and scope. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and the tool's analytical nature, the description fails to mention return format, whether results are saved, or how to retrieve them. Incomplete for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50% (only team_id has a description). The description does not elaborate on parameter meaning or usage, e.g., what projectId refers to or how team_id behaves.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool analyzes a project for specific aspects like audience, positioning, etc. However, it does not explicitly distinguish from sibling tools like get_project_analysis or generate_project_strategy.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. The description lacks context on prerequisites or scenarios for invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
approve_post · Approve Post · Grade A
Approve a scheduled post that is pending approval. Once approved, it will be published at the scheduled time.
| Name | Required | Description | Default |
|---|---|---|---|
| postId | Yes | The scheduled post ID to approve | |
| comment | No | Optional approval comment | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. |
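A hedged example of an approval in team scope; all IDs are placeholders. Omit team_id to operate in personal scope, and use list_teams to discover valid team IDs.

```typescript
// Hypothetical approve_post arguments; IDs are placeholders.
const approve = {
  name: "approve_post",
  arguments: {
    postId: "post_456",
    comment: "Looks good, ship it.",
    team_id: "team_marketing", // omit for personal scope; see list_teams
  },
};
```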
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate the tool is not read-only and not idempotent. The description adds that approval leads to publication at the scheduled time, which is key behavioral context. No annotation contradictions. It does not discuss reversibility or side effects, but the combination is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the core purpose. No redundant information. Every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with three parameters and no output schema, the description explains the effect adequately. It does not cover error conditions or edge cases, but the annotations and schema fill in most gaps. Could be slightly more comprehensive regarding prerequisites (e.g., post must be scheduled and pending).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with descriptions for all three parameters. The tool description adds no additional parameter details beyond the schema. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (approve), the resource (scheduled post pending approval), and the effect (published at scheduled time). It effectively distinguishes from sibling tools like reject_post and delete_scheduled_post.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool should be used when a scheduled post is pending approval, but it does not explicitly mention alternatives or exclusions, such as when to use reject_post instead. The context is clear but lacks direct comparative guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
autopilot_approve_plan · Approve Autopilot Plan · Grade A
Approve all pending items in an autopilot plan, or approve/reject individual items.
After approval, use autopilot_execute_plan to schedule the approved posts.
| Name | Required | Description | Default |
|---|---|---|---|
| action | No | Action for individual item (default: approve) | |
| itemId | No | Individual item ID to approve or reject | |
| planId | No | Plan ID to approve (approves all pending items) | |
| feedback | No | Feedback when rejecting an item |
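The two modes the description distinguishes can be sketched as follows; both payloads use placeholder IDs.

```typescript
// Batch mode: approve every pending item in a plan.
const approveAll = {
  name: "autopilot_approve_plan",
  arguments: { planId: "plan_abc" },
};

// Individual mode: reject a single item with feedback.
const rejectOne = {
  name: "autopilot_approve_plan",
  arguments: {
    itemId: "item_7",
    action: "reject",
    feedback: "Tone is off-brand; soften the CTA.",
  },
};
```

Per the description, autopilot_execute_plan then schedules whatever was approved.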
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate mutation (readOnlyHint=false, destructiveHint=false). The description adds the batch vs. individual distinction but discloses nothing about side effects beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no waste. First sentence states main function, second gives follow-up action. Perfectly front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers the two modes well. Could clarify that planId and itemId should not be used together, but schema handles individual parameters. No output schema, but not required.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds little new meaning. It paraphrases the schema descriptions without providing additional value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that it approves all pending items or approves/rejects individual items, using a specific verb and resource. It distinguishes itself from siblings like autopilot_execute_plan and approve_post.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly directs the agent to use autopilot_execute_plan after approval, providing clear context. There is no when-not-to-use guidance, but the follow-up tool is named and its purpose differentiated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
autopilot_execute_plan · Execute Autopilot Plan · Grade A
Schedule all approved posts in an autopilot plan.
Posts are spaced out over time and scheduled via the standard publishing pipeline.
| Name | Required | Description | Default |
|---|---|---|---|
| planId | Yes | Plan ID to execute |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations (readOnlyHint=false, destructiveHint=false) are present, and description adds useful context about posts being spaced out and using standard publishing pipeline. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with action verb. No extraneous information; every sentence is informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple tool with no output schema. Covers purpose and behavioral nuances. Could mention that it only works with approved plans, but approval is implied in 'approved posts'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Single parameter planId fully described in schema (100% coverage). Description does not add new meaning beyond schema, meeting baseline but not exceeding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb 'schedule' and resource 'all approved posts in an autopilot plan', and distinguishes from siblings like autopilot_approve_plan and autopilot_generate_plan by focusing on execution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies use after plan approval, but lacks explicit guidance on when to use over alternatives like bulk_schedule or schedule_content. No exclusionary language or context provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
autopilot_generate_plan · Generate Autopilot Plan · Grade A
Generate a weekly content plan for an autopilot goal.
Chains 5 AI agents: Strategy Planner -> Content Ideation -> Calendar Optimizer -> Multi-Format Composer -> Variant Repurposer. Returns a plan with individual post items for review.
| Name | Required | Description | Default |
|---|---|---|---|
| goalId | Yes | The autopilot goal ID to generate a plan for | |
| weekNumber | No | Specific week number to generate (optional, auto-increments) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds value beyond annotations by revealing the chain of 5 AI agents and the output type, which complements the openWorldHint=true annotation. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three clear, front-loaded sentences covering purpose, internal process, and output. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks details about return format, side effects, and idempotency. For a complex tool with no output schema, more information about the plan structure or handling would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters adequately described. The tool description does not add extra semantics beyond what is already in the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Generate a weekly content plan for an autopilot goal' with specific verb+resource and distinguishes from siblings like autopilot_execute_plan and autopilot_approve_plan by specifying the output as a plan for review.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the plan is for review before approval/execution but does not explicitly state when to use this tool vs alternatives like generate_content or generate_post_bundle, nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
autopilot_get_progress · Get Autopilot Progress · Grade A · Read-only
Get progress metrics for an autopilot goal — weeks planned, posts approved, scheduled, published.
| Name | Required | Description | Default |
|---|---|---|---|
| goalId | Yes | Goal ID to check progress for |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false, so the description adds value by specifying the returned metrics (weeks planned, etc.). There is no contradiction, and the additional context helps an agent understand what data to expect.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence (15 words) that is front-loaded and contains no fluff. Every word serves a purpose, making it easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description lists sample metrics, giving an agent a reasonable expectation of the return content. However, it does not specify the exact structure (e.g., object with fields), leaving minor ambiguity for a simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description for the only parameter (goalId). The description adds no extra meaning beyond the schema, meeting the baseline for high coverage without further improvement.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's action ('Get progress metrics') and the specific resource ('an autopilot goal'), listing concrete metrics (weeks planned, posts approved, scheduled, published). This distinguishes it from sibling tools like autopilot_list_goals and other read-only tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing progress for a specific goal, but provides no explicit guidance on when to use this tool over alternatives (e.g., autopilot_list_goals for listing goals, or get_analytics for broader analytics). No 'when not to use' or mentions of prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
autopilot_list_goals · List Autopilot Goals · Grade A · Read-only
List all autopilot goals for the current user.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false. The description adds the scope 'for the current user,' which is useful, but does not disclose any other behavioral details like pagination or order.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, concise sentence that immediately communicates the tool's purpose with no unnecessary words. It is well-structured and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with no parameters and no output schema, the description is adequate. It could clarify what 'autopilot goals' are relative to other goal tools like create_goal, but overall it provides enough context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so the description does not need to add parameter information. The schema coverage is 100%, and the description is sufficient for a parameterless tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (list), the resource (autopilot goals), and the scope (for the current user). It differentiates from sibling tools like autopilot_set_goal and other list tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives. Siblings like autopilot_get_progress or autopilot_generate_plan exist but are not mentioned, and no context is provided for when listing goals is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
autopilot_set_goal · Set Autopilot Goal · Grade B
Set a social media goal for the AI autopilot to work toward.
The autopilot will use AI agents to plan content, generate posts, and schedule them. Example goals: 'Grow LinkedIn followers by 20%', 'Post 5x/week on Instagram about AI'.
| Name | Required | Description | Default |
|---|---|---|---|
| goalText | Yes | The social media goal to achieve | |
| platforms | Yes | Target platforms (e.g. ['twitter', 'linkedin', 'instagram']) | |
| durationWeeks | No | How many weeks to run the autopilot (1-52, default 4) | |
| targetMetrics | No | Target metrics (e.g. { followers: 1000, engagement_rate: 0.05 }) |
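A sketch of a goal payload built from the schema's own examples; the metric values and platform choice are placeholders.

```typescript
// Hypothetical autopilot_set_goal arguments, echoing the schema examples.
const setGoal = {
  name: "autopilot_set_goal",
  arguments: {
    goalText: "Grow LinkedIn followers by 20%",
    platforms: ["linkedin"],
    durationWeeks: 8, // 1-52; defaults to 4
    targetMetrics: { followers: 1000, engagement_rate: 0.05 },
  },
};
```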
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate mutation and nondeterminism. Description does not elaborate on behavior beyond stating autopilot will plan and schedule. No mention of return value or asynchronous behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences and an example; no extraneous content. Efficient and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks output schema and does not describe what the tool returns (e.g., goal ID). With 4 parameters and no return info, it is adequate but not fully complete given the ecosystem of autopilot tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and description adds example values for goalText and platforms but does not detail durationWeeks or targetMetrics beyond schema. Baseline 3 appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Describes setting a social media goal for the autopilot, with examples. It implicitly positions itself as the initial step but does not explicitly distinguish itself from sibling tools like autopilot_generate_plan.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use versus alternatives, no prerequisites or context on when not to use. Simply states what it does without usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bulk_schedule · Bulk Schedule Posts · Grade A
Schedule multiple posts at once from CSV content.
USE THIS WHEN: • User has a spreadsheet or list of posts to schedule • Planning a content calendar for a month • Migrating content from another tool
CSV FORMAT (required columns): • platform: linkedin, instagram, x, tiktok, threads • scheduled_time: ISO 8601 format (e.g., 2024-02-15T10:00:00Z) • text: Post content/caption
OPTIONAL COLUMNS: • media_url: Image or video URL • first_comment: First comment to add (Instagram/LinkedIn) • hashtags: Additional hashtags to append
PROCESS:
1. First call with validate_only: true to check for errors
2. Review validation report with user
3. Call again with validate_only: false to execute import
| Name | Required | Description | Default |
|---|---|---|---|
| filename | No | Optional filename for tracking (default: upload.csv) | |
| csv_content | Yes | CSV content as a string (include header row) | |
| skip_errors | No | If true, skip rows with errors and schedule valid rows only | |
| validate_only | No | If true, only validate without scheduling. Default: true for safety. |
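The validate-then-execute flow from the description can be sketched like this; the CSV rows and URLs are placeholders.

```typescript
// Hypothetical CSV using the required columns plus one optional column.
const csv = [
  "platform,scheduled_time,text,media_url",
  "linkedin,2024-02-15T10:00:00Z,Launch day!,https://example.com/banner.png",
  "x,2024-02-15T12:00:00Z,We just shipped v2.0,",
].join("\n");

// Step 1: validate only (the default) and review the report with the user.
const validate = {
  name: "bulk_schedule",
  arguments: { csv_content: csv, validate_only: true },
};

// Step 3: execute the import once the report looks clean.
const execute = {
  name: "bulk_schedule",
  arguments: { csv_content: csv, validate_only: false, skip_errors: true },
};
```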
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations show destructiveHint=false and readOnlyHint=false, which the description complements by explaining the two-step validation process. It adds context about CSV format and required/optional columns, but could be more explicit about the output of validation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections, front-loaded purpose, and no unnecessary sentences. Every line adds value, using bullet points and process steps for readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description covers the process, CSV format, and validation step. It is complete for a complex tool, though the output of the validation report could be briefly described.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already covers 100% of parameters with descriptions. The description adds significant value by detailing CSV format, required columns, optional columns, and the process flow, going beyond schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it schedules multiple posts from CSV content, using specific verbs and resource. It distinguishes itself from siblings like schedule_content and schedule_content_advanced by focusing on bulk CSV import.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit USE THIS WHEN scenarios are provided, along with a clear two-step process (validate then execute). It implicitly tells when to use vs other tools by emphasizing bulk CSV scheduling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bulk_update_calendar_events · Bulk Update Calendar Events · Grade A
Apply one bulk action to many calendar events at once, including shifts, explicit reschedules, queue moves, cancellations, or assignments.
| Name | Required | Description | Default |
|---|---|---|---|
| ids | Yes | Calendar event IDs or scheduled post IDs to update. | |
| shift | No | | |
| action | Yes | Bulk action to apply. | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| timezone | No | Optional IANA timezone for queue-based moves. | |
| applyScope | No | Whether to affect only the selected IDs or their recurrence groups. | |
| assigneeId | No | Required for the 'assign' action. | |
| scheduledAt | No | Replacement ISO 8601 time for the 'set_time' action. | |
| deltaMinutes | No | Compatibility shift amount in minutes. | |
| scheduledTime | No | Compatibility alias for scheduledAt. |
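Two hedged payloads illustrating the conditional parameters the schema notes: assigneeId applies only to the 'assign' action, and scheduledAt only to 'set_time'. All IDs are placeholders.

```typescript
// 'assign' requires assigneeId, per the schema.
const assign = {
  name: "bulk_update_calendar_events",
  arguments: {
    ids: ["evt_1", "evt_2"],
    action: "assign",
    assigneeId: "user_42",
  },
};

// 'set_time' takes a replacement ISO 8601 timestamp via scheduledAt.
const setTime = {
  name: "bulk_update_calendar_events",
  arguments: {
    ids: ["evt_3"],
    action: "set_time",
    scheduledAt: "2024-03-01T09:00:00Z",
  },
};
```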
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false, destructiveHint=false, idempotentHint=false. The description lists actions including 'cancel', which sits uneasily with destructiveHint=false, though the tension falls short of an explicit contradiction. Beyond the action list, the description provides no additional behavioral context such as side effects, permission requirements, or reversibility, so the agent gains limited insight beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that covers the essential purpose and action types. It is front-loaded and contains no unnecessary words. Minor improvement could be structuring the action list for clarity, but current form is concise and adequate.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (10 parameters, conditional requirements, no output schema), the description lacks details on return values, error handling, partial failures, or conditional parameter usage (e.g., assigneeId required for 'assign'). It adequately lists actions but does not fully equip the agent to handle edge cases, making it minimally complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 90%, so the schema already documents most parameters well. The description adds no additional meaning beyond listing actions, which are already in the schema enum. The baseline of 3 is appropriate as the description provides marginal value over the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool applies a bulk action to multiple calendar events, enumerating specific actions (shifts, reschedules, queue moves, cancellations, assignments). It uses specific verbs and resources, and the name 'bulk_update_calendar_events' distinguishes it from sibling tools like 'update_calendar_event' (single event) and 'bulk_schedule' (scheduling, not updating).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for bulk operations but does not explicitly state when to use or when to avoid, nor does it mention alternatives like 'update_calendar_event' for single updates. No exclusions or prerequisites are provided, leaving the agent to infer context from the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cancel_recurring_series · Cancel Recurring Series · Grade A · Destructive
Cancel a recurring series and stop future pending occurrences from publishing.
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | Compatibility alias for seriesId. | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| seriesId | No | Recurring series parent scheduled post ID. |
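Since `id` is documented only as a compatibility alias, a short sketch that prefers `seriesId`; the ID is a placeholder.

```typescript
// Hypothetical cancellation targeting the series parent scheduled post.
const cancel = {
  name: "cancel_recurring_series",
  arguments: { seriesId: "sched_parent_123" },
};
```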
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructiveHint=true and readOnlyHint=false. The description adds that it stops future pending occurrences, which provides additional context, but does not detail reversibility or effects on already published posts.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single concise sentence that efficiently communicates the tool's action without unnecessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the core action but omits information about the return value or result confirmation. Given the simple nature of the operation, this is adequate but not fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% parameter description coverage. The description adds no extra meaning beyond the schema's clear definitions for seriesId, id, and team_id.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it cancels a recurring series and stops future pending occurrences. It uses a specific verb ('cancel') and resource ('recurring series'), differentiating it from sibling tools like 'update_recurring_series' and 'list_recurring_series'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for canceling series but lacks explicit guidance on when to use it versus alternatives. No exclusions or prerequisite conditions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_ai_media_status · Check AI Media Status · Grade A · Read-only · Idempotent
Check the status of an AI media generation job.
Returns current status (pending, processing, completed, failed) and result URL when complete.
| Name | Required | Description | Default |
|---|---|---|---|
| job_id | Yes | The AI media job ID to check |
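A polling sketch built on a hypothetical callTool helper (any MCP client wrapper would do); the result field names follow the statuses the description lists but are assumptions, not a documented response schema.

```typescript
// Polls check_ai_media_status until the job completes or fails.
// `callTool` and the `status`/`url` field shape are assumptions, not a documented API.
async function waitForMedia(
  callTool: (name: string, args: object) => Promise<{ status: string; url?: string }>,
  jobId: string,
) {
  for (;;) {
    const res = await callTool("check_ai_media_status", { job_id: jobId });
    if (res.status === "completed") return res; // result URL expected when complete
    if (res.status === "failed") throw new Error("AI media generation failed");
    await new Promise((r) => setTimeout(r, 5000)); // wait 5s between polls
  }
}
```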
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and idempotentHint. The description adds value by specifying the possible statuses (pending, processing, completed, failed) and the result URL, going beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no fluff. The first sentence states the purpose, the second adds return details, making it front-loaded and highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple polling tool with one parameter and no output schema, the description sufficiently covers behavior and return values. It does not mention error handling but is otherwise complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameter 'job_id' is already well-defined. The description does not add additional semantic information beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Check' and the resource 'status of an AI media generation job,' distinguishing it from the sibling tool 'generate_ai_media' which creates the job.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use after starting a job, and mentions the return values (status and result URL), providing clear context. However, it lacks explicit when-not-to-use guidance or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
complete_goal · Complete Product Hunt Goal · Grade A · Destructive
Mark a Product Hunt goal as complete or incomplete.
REQUIREMENTS: • Must have Product Hunt account connected • Write access requires app whitelisting by Product Hunt
| Name | Required | Description | Default |
|---|---|---|---|
| goalId | Yes | The ID of the goal to update | |
| completed | Yes | Set to true to mark complete, false to mark incomplete |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructive and non-read-only behavior. The description adds useful prerequisites (account connection, write access whitelisting) beyond what annotations provide, enhancing transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: one line for the action and two bullet points for requirements. No fluff, every sentence is essential.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple boolean toggle tool, the description covers the action and prerequisites adequately. There is no output schema, but none is needed given the simplicity. The description is complete for its purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description does not need to add parameter details. It provides no additional semantics beyond what the schema already documents, hence baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action: 'Mark a Product Hunt goal as complete or incomplete.' This includes a specific verb ('mark') and resource ('goal'), and distinguishes from sibling tools like 'create_goal' and 'autopilot_set_goal'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists prerequisites (connected account, whitelisting) but does not provide guidance on when to use this tool versus alternatives like 'autopilot_set_goal.' Usage is implied but not explicitly compared.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_bluesky · Connect Bluesky · Grade A · Read-only · Idempotent
Get the credential setup schema to connect Bluesky using API key or token credentials.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description adds context about the type of schema returned, which provides additional behavioral insight beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the action and resource, no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no output schema, but the description explains it returns a credential setup schema. For a 0-param tool, this is sufficient; however, it does not detail what elements are in the schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0 parameters, so no parameter explanation is needed. The description mentions 'API key or token credentials' giving context, earning the baseline of 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves the credential setup schema for Bluesky, using a specific verb and resource, distinguishing it from sibling connect_* tools by naming the platform.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is used before connecting Bluesky, but does not provide explicit guidance on when to use it versus other connect_* tools or any prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_connector · Connect a Connector · Grade A
Initiate authentication flow for a connector.
For OAuth2 connectors, returns an authorization URL. For API key connectors, stores the provided key. For webhook connectors, registers the webhook URL.
| Name | Required | Description | Default |
|---|---|---|---|
| apiKey | No | API key (for api_key auth strategy connectors) | |
| botToken | No | Bot token (for bot_token auth strategy connectors like Telegram, Discord) | |
| webhookUrl | No | Webhook URL (for webhook auth strategy connectors) | |
| connectorId | Yes | Connector ID to connect |
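One argument shape per auth strategy, as the description distinguishes them; the connector IDs, key, and URL are all placeholders.

```typescript
// OAuth2: expect an authorization URL in the response.
const viaOAuth = {
  name: "connect_connector",
  arguments: { connectorId: "linkedin" },
};

// API key: the provided key is stored.
const viaKey = {
  name: "connect_connector",
  arguments: { connectorId: "devto", apiKey: "dk_placeholder_key" },
};

// Webhook: the URL is registered.
const viaWebhook = {
  name: "connect_connector",
  arguments: { connectorId: "discord", webhookUrl: "https://example.com/hooks/inbound" },
};
```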
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description outlines distinct behaviors per auth type (returning URL, storing key, registering webhook), adding transparency beyond annotations. However, it does not disclose potential side effects (e.g., overwriting existing keys) or async behavior, though annotations (openWorldHint=true) note open-world effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (three short sentences) with front-loaded main action and bulleted auth types. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the primary behavior for each auth type. Missing details include response structure (only OAuth2 mentions a return), error handling, and post-authentication steps. Still adequate given schema and annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. The description adds value by explaining which parameters are relevant for each auth type (e.g., apiKey for API key, webhookUrl for webhook), aiding correct parameter selection.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it initiates authentication flow for a connector, specifying three auth types. However, it does not explicitly distinguish this generic tool from the many platform-specific connect_* siblings (e.g., connect_bluesky), which may cause confusion about when to use each.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus the specific connect_* tools or disconnect_connector. Prerequisites (e.g., connector must exist) are not mentioned, nor is the expected workflow after obtaining the authorization URL.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_devto · Connect DEV.to · Grade A · Read-only · Idempotent
Get the credential setup schema to connect DEV.to using API key or token credentials.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds context that the tool returns a schema for API key or token credentials, aligning with safety cues. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the action and resource. Every word is necessary and useful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema), the description adequately explains the purpose. It could hint at next steps but remains sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, so schema coverage is 100%. The description does not need to add parameter detail. Baseline of 4 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns the credential setup schema for connecting DEV.to, specifying the action ('get') and resource ('credential setup schema'). It distinguishes from siblings by naming the specific platform DEV.to.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when connecting DEV.to but does not provide explicit guidance on when to use or not use this tool versus the many other connect_ siblings. No alternatives or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_discord · Connect Discord · Grade B · Read-only · Idempotent
Get the webhook setup schema to connect Discord.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly, idempotent, non-destructive. The description adds little beyond 'get', which is consistent. No additional behavioral insights (e.g., whether it requires authentication or generates dynamic data).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no wasted words. Perfectly concise and front-loaded with the main purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the tool has no parameters and annotations cover safety, the description does not explain what the returned schema contains or how it should be used. Given no output schema, a bit more context would help.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters, the description has no obligation to explain them. The baseline score of 4 applies. No extra value is added, but none is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a webhook setup schema for Discord. It uses a specific verb ('Get') and resource ('webhook setup schema'), and the platform name distinguishes it from many sibling 'connect_*' tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus other connect tools. The description does not mention prerequisites, alternatives, or context for using the schema.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_dribbble · Connect Dribbble · Grade A · Read-only · Idempotent
Get the OAuth URL to connect your Dribbble account.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare the tool as read-only, idempotent, and non-destructive. The description adds value by specifying that the output is an OAuth URL, which is a behavioral hint, but it does not elaborate on authentication flows or additional side effects. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that immediately communicates the tool's purpose. It is appropriately sized with no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters, no output schema, and clear annotations, the description fully covers what an agent needs to know: it retrieves a Dribbble connection URL. No additional context is required.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, and schema description coverage is 100%. With zero parameters, the baseline is 4, and the description does not need to provide parameter information beyond what is already clear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the action ('Get the OAuth URL') and the target resource ('to connect your Dribbble account'), making the purpose immediately clear and distinguishing it from sibling connect tools for other platforms.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing to initiate a Dribbble connection, but it does not provide explicit guidance on when to use versus alternatives (e.g., re-authentication scenarios) or mention any preconditions. The context is implied but not detailed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_facebook (Connect Facebook) · A · Read-only · Idempotent
Get the OAuth URL to connect your Facebook account.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint, openWorldHint, idempotentHint, and non-destructive behavior. The description adds value by stating the output is an OAuth URL, which is not captured in annotations. However, it does not elaborate on side effects or auth requirements, though the annotations mostly cover safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that efficiently conveys the tool's purpose without excess. Every word contributes meaning, and it is front-loaded with the verb 'Get'.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description sufficiently explains the tool's function and return value (OAuth URL). It is complete for a simple tool; no additional information is needed for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters with 100% coverage, so the baseline of 4 applies. The description does not need to add parameter details; the lack of parameters is already clear, and the description introduces no ambiguity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves an OAuth URL for Facebook connection, specifying the action ('Get') and the resource ('OAuth URL to connect your Facebook account'). It effectively distinguishes from sibling tools like connect_bluesky by naming the specific platform.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide explicit guidance on when to use this tool versus alternatives (e.g., connect_bluesky). Users must infer from the platform name. No context is given about prerequisites or alternative workflows, which is a minor gap given the many sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_gmb (Connect Google My Business) · A · Read-only · Idempotent
Get the OAuth URL to connect your Google My Business account.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, indicating a safe read operation. The description adds that it returns an OAuth URL, but does not elaborate on authentication flow or side effects beyond what annotations convey.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that front-loads the action and object. Every word adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema, clear annotations), the description provides enough context for an agent to understand its purpose. However, it does not specify the return format, which is a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is effectively 100%. The description does not need to add parameter information, and the baseline is 4. The description is sufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb 'Get' and resource 'OAuth URL' for connecting Google My Business. It distinguishes from sibling connect_* tools by naming the specific service.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for connecting GMB but provides no explicit guidance on when to use this tool versus alternatives or any context about prerequisites. It is minimally adequate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_hashnode (Connect Hashnode) · A · Read-only · Idempotent
Get the credential setup schema to connect Hashnode using API key or token credentials.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
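The credential-schema variant has the same call shape; only the tool name changes. A sketch under the assumption that the response follows the standard MCP result envelope (a result.content list whose text items carry the payload) and arrives as plain JSON; the call_tool helper and URL are illustrative, not part of the server's documented contract:

```python
import requests

def call_tool(url: str, name: str, arguments: dict | None = None) -> str:
    """Illustrative helper: issue a JSON-RPC tools/call request and
    return the first text content item from the MCP result envelope.
    Assumes a plain JSON (non-SSE) reply."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments or {}},
    }
    result = requests.post(url, json=payload, timeout=30).json()
    return result["result"]["content"][0]["text"]

# Presumably the returned text is the credential setup schema the agent
# should render to the user before collecting an API key or token.
print(call_tool("https://example-gateway.invalid/mcp", "connect_hashnode"))
```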
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnly, idempotent, and nondestructive. Description adds that it returns a schema but no further behavioral detail beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One clear sentence with no unnecessary words. Front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters, full annotation coverage, and no output schema, the description fully covers the tool's purpose and behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in schema, so description provides sufficient context. Schema coverage is 100% automatically.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it retrieves a credential setup schema for Hashnode using API key or token credentials. Specific verb and resource, distinct from sibling connect_* tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when or when not to use this tool versus other connect tools. Lacks context or prerequisite information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_instagram (Connect Instagram) · A · Read-only · Idempotent
Get the OAuth URL to connect your Instagram account.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, but the description adds behavioral context by specifying that the return value is an OAuth URL, which is not captured in annotations. This clarifies the operation's output nature beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that immediately states the core function. Every word is necessary and contributes to understanding, with no redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and no output schema, the description provides sufficient detail by stating the return type (OAuth URL). It could marginally benefit from mentioning that the URL is used for authorization, but it is complete enough for basic usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters with 100% coverage, so the baseline is 4. The description does not need to add parameter information, and it correctly omits any unnecessary detail.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Get' and clearly identifies the resource as 'the OAuth URL to connect your Instagram account'. Among many sibling 'connect_*' tools, it is uniquely differentiated by the platform name 'Instagram'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for connecting an Instagram account, but does not explicitly state when to use this tool versus alternatives, nor does it mention any prerequisites or exclusions. The context of being one of many similar connect tools would benefit from such guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_lemmy (Connect Lemmy) · A · Read-only · Idempotent
Get the credential setup schema to connect Lemmy using API key or token credentials.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description adds that the tool returns a credential setup schema, which is useful but does not disclose additional behavioral traits beyond what annotations convey.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, direct sentence with no wasted words. It conveys the essential information efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no input schema, no output schema), the description is complete enough. It identifies the output as a credential setup schema, which is sufficient for an agent to understand the tool's function.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has no parameters, so the baseline is 4. The description does not need to elaborate on parameters, and it maintains clarity without adding unnecessary detail.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get the credential setup schema to connect Lemmy using API key or token credentials.' It specifies the resource (credential setup schema) and the action (get), effectively distinguishing it from sibling connect tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when connecting Lemmy, providing clear context. However, it does not explicitly state when not to use it or mention alternatives, though the sibling list makes the purpose obvious.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_linkedin (Connect LinkedIn) · A · Read-only · Idempotent
Get the OAuth URL to connect your LinkedIn account.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds that it returns an OAuth URL, but does not contradict or significantly extend the behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, clear, and to the point with no unnecessary words. It is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no parameters, no output schema, and annotations provide rich behavioral context. The description fully covers what the tool does and what it returns, making it complete for its purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters; schema description coverage is 100%. Per guidelines, baseline is 4 for 0 parameters. Description does not need to add parameter info.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get', the resource 'OAuth URL', and the purpose 'connect your LinkedIn account'. It is specific and distinguishes from sibling connect_* tools for other platforms.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for connecting LinkedIn but does not explicitly say when to use this tool versus alternatives. However, given the tool's simplicity and the uniquely named platform, the intended usage is clear even without explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_linkedin-page (Connect LinkedIn Page) · A · Read-only · Idempotent
Get the OAuth URL to connect your LinkedIn Page account.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare non-destructive and idempotent behavior; the description adds that it returns an OAuth URL, providing useful context beyond the annotations without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, complete sentence with no filler words, efficiently conveying the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description explains the return value (OAuth URL) and the operation is simple, making it fully complete for this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With no parameters and 100% schema coverage, the description adds no parameter details but none are needed. Baseline 4 for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the specific resource 'OAuth URL to connect your LinkedIn Page account', distinguishing it from sibling tools like connect_linkedin (for personal accounts) by explicitly mentioning 'LinkedIn Page'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing to connect a LinkedIn Page, but provides no explicit guidance on when to use versus alternatives like connect_linkedin for personal accounts or other connect tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_mastodon (Connect Mastodon) · A · Read-only · Idempotent
Get the OAuth URL to connect your Mastodon account.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide rich behavioral hints (readOnly=true, idempotent, not destructive). Description adds no extra behavioral context. Adequate but does not build on annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no wasted words. Front-loaded with purpose. Highly concise and structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple OAuth URL retrieval with no parameters and rich annotations, the description is complete. No gaps noted given complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in schema (100% coverage). Description does not need to add parameter info. Baseline 4 for zero-parameter tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool returns an OAuth URL for connecting a Mastodon account, using specific verb 'Get' and resource 'OAuth URL'. Differentiates from sibling 'connect_*' tools by platform.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool. However, context from siblings for different platforms implies usage is for initiating Mastodon authentication. Lacks explicit when/when-not or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_nostr (Connect Nostr) · A · Read-only · Idempotent
Get the custom connection setup instructions for Nostr. Note: This is an unofficial connector and may change without notice.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly and idempotent. Description adds valuable context: it is an unofficial connector that may change without notice, which goes beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no wasted words, front-loaded so the purpose is stated immediately and the caveat follows.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, and the description does not clarify what the instructions look like (e.g., free text, an endpoint, a step list). Even for a parameterless tool, the agent still needs to know the return structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters; schema coverage is 100%. Description adds no param info but is not needed. Baseline 4 as per guidelines for 0 params.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Get the custom connection setup instructions for Nostr.' Differentiates from sibling connect_ tools by specifying Nostr and the action 'get instructions'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Includes a warning about being unofficial and changeable, but no explicit when-to-use or alternatives. Context is implied given the sibling list of similar connect tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_pinterest (Connect Pinterest) · A · Read-only · Idempotent
Get the OAuth URL to connect your Pinterest account.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, openWorldHint, idempotentHint, and destructiveHint flags. The description ('Get the OAuth URL') is consistent with these but does not add any additional behavioral context beyond what annotations convey. The tool is simple, so no further disclosure is necessary, but credit is limited.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, focused sentence that front-loads the action ('Get the OAuth URL'). Every word is necessary. No fluff or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with no parameters and no output schema, the description states the essential function. It could additionally mention that the output is a URL used for user authorization, which would add completeness, but it is adequate as is.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, and schema coverage is 100%. The description does not need to explain parameters. The baseline for a parameterless tool is 4, and the description adds no extra meaning, which is acceptable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides an OAuth URL to connect a Pinterest account. The verb 'Get' and resource 'OAuth URL' are specific, and the tool is well-differentiated from sibling connect_* tools for other platforms.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool or when to avoid it. The context implies it is used for obtaining an OAuth URL for Pinterest, but no guidance on prerequisites or alternatives (e.g., if already connected) is provided. Adequate but minimal.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_platform (Connect Platform) · B · Read-only · Idempotent
Connect a platform dynamically by platform ID. Returns OAuth URL, webhook setup details, or credential form contract depending on the platform auth method.
| Name | Required | Description | Default |
|---|---|---|---|
| platform | Yes | Platform ID to connect | |
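Since the description gives no decision criteria, here is a hedged sketch of how an agent might use the generic connector, reusing the illustrative call_tool helper from the connect_hashnode sketch above. The platform ID value "dribbble" is an assumption, as the enum's contents are not shown here:

```python
# Hypothetical call; "dribbble" is an assumed platform ID.
result_text = call_tool(
    "https://example-gateway.invalid/mcp",
    "connect_platform",
    {"platform": "dribbble"},
)

# Per the description, the result may be an OAuth URL, webhook setup
# details, or a credential form contract, so the caller must branch.
if result_text.startswith("http"):
    print("Open this OAuth URL in a browser:", result_text)
else:
    print("Setup schema or instructions:", result_text)
```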
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and non-destructive. Description adds that return varies by auth method. No contradiction, but description doesn't elaborate on behavioral side effects beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, front-loaded. Efficient, though the possible return types could be presented more clearly, for example as a list.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a one-parameter tool with no output schema, the description covers the main action and the returned result. It lacks information on prerequisites or required permissions, but that is acceptable given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with an enum for the platform parameter. The description's 'by platform ID' adds minimal semantics beyond the property name, and there is no guidance on where to obtain a platform ID.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it connects a platform by ID and returns auth-specific details. However, it doesn't differentiate from numerous sibling connect_<platform> tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this generic tool versus the platform-specific connect_* tools. The agent has no decision criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_producthunt (Connect Product Hunt) · A · Read-only · Idempotent
Get the OAuth URL to connect your Product Hunt account.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the description adding 'Get the OAuth URL' aligns with read-only behavior. It doesn't add significant behavioral context beyond what annotations provide, such as details about the OAuth flow or response format, but the safety profile is clear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is front-loaded with the action 'Get the OAuth URL'. It contains no unnecessary words and efficiently conveys the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema, and basic annotations), the description is largely complete. It could optionally mention that the URL needs to be opened in a browser, but this is not essential for an AI agent to understand the tool's function.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters and 100% schema coverage, the baseline is 4. The description adds meaning by specifying the output (OAuth URL) and the platform (Product Hunt), which is sufficient for a parameterless tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to obtain an OAuth URL for connecting a Product Hunt account. It uses a specific verb ('get') and resource ('OAuth URL for Product Hunt'), distinguishing it from other connect_ tools for different platforms.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool should be used when needing to authenticate a Product Hunt account. While it doesn't explicitly state when not to use it or mention alternatives, the context (many sibling connect_ tools) makes the purpose clear enough for an agent to select correctly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_slack (Connect Slack) · B · Read-only · Idempotent
Get the webhook setup schema to connect Slack.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, idempotentHint, and openWorldHint. The description adds that it returns a schema for setup, which is consistent but does not disclose additional behavioral traits like rate limits or auth requirements. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no extraneous words. It is appropriately sized for a simple, no-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should clarify the return value. 'Webhook setup schema' is vague and doesn't explain its structure or usage. For a simple tool, it is adequate but not complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has no parameters and schema coverage is 100%. Per guidelines, 0 parameters gives a baseline of 4. The description does not need to add parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the verb 'get' and the resource 'webhook setup schema' for Slack. It clearly distinguishes from sibling connect_* tools by naming the specific platform. However, 'webhook setup schema' could be more precise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus other connect_* tools or connect_connector. It does not provide context on prerequisites or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_telegram (Connect Telegram) · A · Read-only · Idempotent
Get the credential setup schema to connect Telegram using API key or token credentials.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, openWorldHint, idempotentHint, and no destructiveness. The description confirms a read operation ('Get') and specifies the credential type (API key/token), adding modest context beyond the annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no unnecessary words. It is front-loaded and efficient, earning its place without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters, rich annotations, and no output schema, the description is fairly complete. However, it would benefit from a brief note on what the credential setup schema contains or how it is used, as the tool is part of a connection flow and the return value is not documented elsewhere.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters, so schema coverage is 100% trivially. Baseline for 0 parameters is 4, and the description does not attempt to add parameter information, which is appropriate. The description aligns with the schema's emptiness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's action ('Get the credential setup schema') and the specific resource ('connect Telegram using API key or token credentials'), effectively distinguishing it from many sibling connect_* tools targeting other platforms.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is used before establishing a Telegram connection, but it does not explicitly state when to use it or how it relates to other connection steps. No alternative tools or exclusions are mentioned, leaving the agent to infer context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_threads (Connect Threads) · A · Read-only · Idempotent
Get the OAuth URL to connect your Threads account.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true. The description adds that the tool returns an OAuth URL, which is consistent with these annotations and provides useful context about the operation's result.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no redundant information. Every word serves a purpose, making it highly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple with no parameters and good annotations. The description adequately explains the return value (OAuth URL) and purpose. With no output schema, this is sufficient for an agent to understand what the tool provides.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, and schema description coverage is 100%. Baseline for no parameters is 4, and the description does not need to add parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get the OAuth URL') and the specific platform ('connect your Threads account'). It distinguishes from sibling connect_* tools by naming the platform explicitly, ensuring the agent understands which account connection is intended.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by naming the platform, but it does not explicitly state when not to use this tool or suggest alternatives. However, the sibling tools are named for different platforms, so the intended usage is sufficiently implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_tiktok (Connect TikTok) · A · Read-only · Idempotent
Get the OAuth URL to connect your TikTok account.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description ('Get the OAuth URL') aligns with these but adds no behavioral context beyond what annotations provide. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, succinct sentence that conveys the tool's purpose without any extraneous words. It is appropriately front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is adequate for a simple OAuth initiation tool with no parameters or output schema. However, it could be improved by mentioning that the URL should be opened in a browser to complete the connection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so schema description coverage is 100%. The description does not need to add parameter information; it is sufficient as is.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets an OAuth URL for connecting a TikTok account. It uses a specific verb ('Get') and resource ('OAuth URL to connect your TikTok account'), distinguishing it from sibling connect_* tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives or when not to use it. The context is implied by the sibling tool names, but no further direction is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_whop (Connect Whop) · A · Read-only · Idempotent
Get the credential setup schema to connect Whop using API key or token credentials. Note: This is an unofficial connector and may change without notice.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, etc. The description adds value by stating the connector is unofficial and subject to change, which is not captured in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the key action, and contains no redundant information. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple credential schema retrieval with no parameters or output schema, the description is mostly complete. It could briefly mention what the schema contains or how to use it, but the caveat about instability adds necessary context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters (schema coverage 100%), so the description does not need to document parameter semantics. It mentions using API key or token credentials, aligning with the purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get'), the resource ('credential setup schema'), and the target ('Whop'). It is specific and distinguishes from other connect_* tools by naming the platform.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes the tool is unofficial and may change without notice, providing a reliability caveat. However, it lacks explicit guidance on when to use this tool versus alternatives (e.g., other connector tools) or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_x (Connect X) · A · Read-only · Idempotent
Get the OAuth URL to connect your X account.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnly, idempotent, and non-destructive hints; the description adds minimal context (that the result is an OAuth URL) but does not disclose details like URL expiration or one-time use.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, front-loaded sentence with zero waste; every word contributes to the purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the tool is simple with no parameters and rich annotations, the description does not explain the return value format or the OAuth flow steps, leaving some ambiguity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist in the schema, and schema coverage is 100%, so the description does not need to add parameter semantics; baseline 4 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('OAuth URL') with explicit context ('connect your X account'), clearly distinguishing it from siblings handling other platforms.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for connecting X accounts, and the context includes many sibling connect_ tools, but no explicit when-to-use or when-not-to-use guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_youtube (Connect YouTube) · A · Read-only · Idempotent
Get the OAuth URL to connect your YouTube account.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds that the tool returns an OAuth URL, which is consistent but does not disclose additional behavioral traits like authentication prerequisites or state effects beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, front-loaded with the verb 'Get', and contains zero wasted words. Every part serves a purpose, making it highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (no parameters, clear annotations), the description provides the essential purpose but omits details like how to use the returned URL or potential side effects. It is minimally viable but could be more complete for an agent unfamiliar with OAuth flows.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so schema description coverage is 100%. The description adds meaning by explaining the output (OAuth URL), which is helpful. With zero parameters, the description adequately compensates for the lack of parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Get), the resource (OAuth URL), and the target (connect YouTube account). It is specific and directly tells the agent what the tool does, distinguishing it from sibling connect tools by platform name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies use when needing to connect YouTube via OAuth, it does not explicitly provide when-not-to-use scenarios or alternative tools. However, the sibling tools for other platforms make the context clear, so guidance is adequate but not comprehensive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_ad_campaign (Create Ad Campaign) · B · Destructive
Create a new advertising campaign on a connected ad platform.
Supports campaign creation across Meta Ads, Google Ads, LinkedIn Ads, TikTok Ads, Pinterest Ads, and all other connected ad platforms.
The AI Campaign Builder agent can help design optimal campaign structures.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Campaign name | |
| endDate | No | Campaign end date (optional) | |
| objective | Yes | Campaign objective | |
| startDate | No | Campaign start date (ISO 8601) | |
| targeting | No | Targeting configuration (platform-specific) | |
| budgetType | Yes | Budget type | |
| adAccountId | Yes | Ad account ID | |
| budgetAmount | Yes | Budget amount in account currency | |
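To make the parameter surface concrete, here is a hypothetical argument payload for this tool, again via the illustrative call_tool helper sketched earlier. Every value below is invented; in particular, the 'objective' and 'budgetType' strings are assumptions, since the valid enum values are not documented in the description:

```python
# All values are placeholders for illustration only.
campaign_args = {
    "name": "Spring Launch",
    "objective": "traffic",               # assumed enum value
    "budgetType": "daily",                # assumed enum value
    "budgetAmount": 50.0,                 # in the ad account's currency
    "adAccountId": "act_123",             # placeholder ad account ID
    "startDate": "2025-06-01T00:00:00Z",  # ISO 8601, per the schema
}

print(call_tool("https://example-gateway.invalid/mcp",
                "create_ad_campaign", campaign_args))
```

Note that destructiveHint=true suggests a successful call may commit real ad spend, which is exactly the consequence the assessment below flags as undisclosed.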
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true, while the description only says 'create', an additive action. It does not disclose any destructive behavior (e.g., irreversible ad spend) or authorization needs, so it adds little beyond the annotations and leaves behavioral nuances unclear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is short, with three sentences covering purpose, supported platforms, and the AI Campaign Builder. No fluff, though the third sentence is only marginally useful. Concise and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 8 parameters including a nested 'targeting' object and no output schema, the description is incomplete. It does not explain return values, platform-specific requirements, or required vs optional fields beyond what's in the schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds limited value. It mentions supported platforms, which is helpful context, but does not clarify parameter specifics like targeting format or budget constraints beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a new advertising campaign and lists supported platforms. However, it does not explicitly differentiate from siblings like 'update_ad_campaign' or 'list_ad_campaigns', but the verb 'create' makes the purpose distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for creating campaigns across various platforms and mentions an AI Campaign Builder for optimal structures. It lacks explicit when-to-use or when-not-to-use guidance and does not compare with alternatives like 'update_ad_campaign'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_ad_creative - Create Ad Creative (Grade B, Destructive)
Create an ad creative (image, video, carousel, or text ad).
The Creative Asset agent can help generate ad visuals and copy.
| Name | Required | Description | Default |
|---|---|---|---|
| body | Yes | Ad body text | |
| name | No | Creative name | |
| adSetId | No | Ad set ID (optional) | |
| headline | Yes | Ad headline | |
| mediaUrls | No | Media URLs for the creative | |
| campaignId | Yes | Campaign ID | |
| landingUrl | No | Landing page URL | |
| callToAction | No | Call to action button text | |
| creativeType | Yes | Creative type |
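As a concrete illustration of the prerequisite the review flags (an existing campaign), here is a hypothetical arguments payload; the campaignId format and the creativeType values are assumptions:

```python
import json

# Hypothetical create_ad_creative arguments. The campaignId would come from a prior
# create_ad_campaign call; its format and the creativeType enum are assumptions.
creative = {
    "campaignId": "cmp_abc123",        # required: must reference an existing campaign
    "creativeType": "image",           # required: "image", "video", "carousel", or text (assumed)
    "headline": "Ship faster with SendIt",                # required
    "body": "AI-native publishing for every platform.",   # required
    "mediaUrls": ["https://example.com/hero.png"],        # optional media
    "landingUrl": "https://example.com/launch",           # optional
    "callToAction": "Learn More",                         # optional button text
}
print(json.dumps(creative, indent=2))
```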
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate write and destructive nature; description adds no further behavioral context beyond 'Create'. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the main purpose. The second sentence is helpful but not essential; overall concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 9 parameters and no output schema, the description is too brief, missing return value, prerequisites (e.g., campaign existence), and side effects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all parameters. The description does not add additional meaning beyond listing creative types, which repeats the enum.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates ad creatives and lists supported types (image, video, carousel, text ad), distinguishing it from sibling 'create_ad_campaign'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or alternatives are given; the mention of the Creative Asset agent vaguely hints at delegation but lacks clear guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_brand_voice - Create Brand Voice (Grade B)
Create a brand voice profile to guide AI content generation.
Configurable fields:
• tone: Overall tone (professional, casual, energetic, witty, etc.)
• personality: Brand personality description
• writingStyle: Writing style guidelines
• doRules: Array of things the brand voice SHOULD do
• dontRules: Array of things the brand voice should NEVER do
• examplePosts: Example posts in this brand voice
• approvedHashtags: Preferred hashtags to prioritize
• bannedWords: Words to never use in content
• keyPhrases: Key phrases to incorporate naturally
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Name for this brand voice profile | |
| tone | No | Overall tone (e.g., professional, casual, energetic) | |
| doRules | No | Things the brand voice SHOULD do | |
| dontRules | No | Things the brand voice should NEVER do | |
| isDefault | No | Set as the default brand voice | |
| keyPhrases | No | Key phrases to incorporate | |
| bannedWords | No | Words to never use | |
| personality | No | Brand personality description | |
| examplePosts | No | Example posts in this voice | |
| writingStyle | No | Writing style guidelines | |
| approvedHashtags | No | Preferred hashtags |
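A minimal sketch of a payload exercising the array-valued fields the description lists; only name is required, and all values shown are illustrative:

```python
import json

# Hypothetical create_brand_voice arguments. Behavior on duplicate names and on
# competing defaults is undocumented, so this only shows the documented fields.
voice = {
    "name": "Acme Default",                       # required
    "tone": "professional",
    "personality": "Helpful expert who never oversells",
    "doRules": ["Lead with the user's problem", "Keep sentences short"],
    "dontRules": ["Never use clickbait", "Never promise unreleased features"],
    "bannedWords": ["synergy", "disrupt"],
    "approvedHashtags": ["#buildinpublic"],
    "isDefault": True,
}
print(json.dumps(voice, indent=2))
```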
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate a write operation (readOnlyHint=false) with potential side effects (openWorldHint=true). The description adds only 'Create' but does not specify behavior on duplicate names, whether it overwrites or fails, or any other side effects. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise overall but uses a bullet list that repeats schema information. The first sentence is front-loaded with the purpose, but the list could be more compact. No extraneous content, but also not highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 11 parameters (1 required) and no output schema, the description lacks critical details like return value (presumably the created profile), error conditions, or behavior for duplicate names. It covers fields but not the overall workflow or results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes all parameters. The description restates them with slightly more context (e.g., 'Array of things the brand voice SHOULD do') but adds minimal value beyond what the schema provides, resulting in a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Create a brand voice profile to guide AI content generation.' The verb 'Create' and resource 'brand voice profile' are specific and distinct from siblings like list_brand_voices and set_default_brand_voice.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. It does not mention prerequisites, when to avoid using it, or how it compares to other tools like update_brand_voice (if it exists) or set_default_brand_voice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_goal - Create Product Hunt Goal (Grade A, Destructive)
Create a new maker goal on Product Hunt.
REQUIREMENTS:
• Must have Product Hunt account connected
• Write access requires app whitelisting by Product Hunt
• Goal title limited to 280 characters
If write access is not available, you'll receive an error with instructions to request whitelisting.
| Name | Required | Description | Default |
|---|---|---|---|
| dueAt | No | Optional due date in ISO 8601 format | |
| title | Yes | Goal title (max 280 characters) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate mutation (destructiveHint=true) and non-idempotence. The description adds valuable context: account connection, whitelisting requirement, title limit, and error behavior. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: one sentence for purpose, then bullet points for requirements. Every sentence adds value, no fluff. Properly front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers prerequisites and error handling, but lacks information about the return value (e.g., goal ID). Given no output schema, this is a minor gap. Overall adequate for a create tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers both parameters (title, dueAt) with descriptions. The description only repeats the title length limit, adding no new meaning beyond the schema. With 100% coverage, baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Create' and the resource 'new maker goal on Product Hunt', making the purpose unmistakable. It distinguishes from sibling tools like 'complete_goal' and 'autopilot_set_goal' by explicitly limiting to creation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit prerequisites (connected account, whitelisting) and error handling instructions. However, it does not differentiate when to use this tool over alternatives like 'autopilot_set_goal', which could confuse the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_library_item - Create Library Item (Grade B)
Save content to your library as a draft, template, or evergreen content for reuse. Alias for save_to_library.
| Name | Required | Description | Default |
|---|---|---|---|
| tags | No | Tags for organization | |
| text | Yes | The content text/caption | |
| type | No | Content type (default: draft) | |
| title | Yes | Title for the saved content | |
| category | No | Category for organization | |
| mediaUrl | No | Media URL (image or video) | |
| targetPlatforms | No | Which platforms this content is designed for | |
| evergreenEnabled | No | Enable evergreen auto-republishing | |
| evergreenIntervalDays | No | Days between evergreen republishes |
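To illustrate the evergreen pairing, here is a hypothetical payload. The reading that evergreenIntervalDays only applies when evergreenEnabled is true is an assumption, and the platform identifiers are guesses:

```python
import json

# Hypothetical create_library_item arguments for a reusable evergreen item.
item = {
    "title": "Monday motivation post",      # required
    "text": "Shipping beats perfection. What are you launching this week?",  # required
    "type": "evergreen",                    # schema default is "draft"
    "tags": ["motivation"],
    "targetPlatforms": ["linkedin", "x"],   # platform identifiers assumed
    "evergreenEnabled": True,
    "evergreenIntervalDays": 30,            # republish cadence; interaction with the flag assumed
}
print(json.dumps(item, indent=2))
```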
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate a write operation (readOnlyHint=false). The description adds that it saves content as a specific type, which is useful, but does not disclose other behaviors like non-idempotency or potential side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences. The first sentence immediately states the purpose, and the second clarifies the alias relationship. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high schema coverage and clear purpose, the description adequately covers the tool's role. However, it lacks detail on the relationship with save_to_library (e.g., deprecation) and does not explain return values, but output schema is absent so that is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameter meanings are fully documented in the schema. The description adds no extra context about parameters, but the baseline score is 3 when schema is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: saving content to the library with specific types (draft, template, evergreen). It explicitly mentions it is an alias for save_to_library, which differentiates it from other siblings but not from its direct alias.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as save_to_library. It only notes it is an alias, leaving the agent without context on which to invoke.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_project - Create Project (Grade B)
Create a Projects workspace for a product, app, or codebase.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | ||
| tags | No | ||
| sources | No | ||
| team_id | No | Optional team ID or slug. Ignored when using a team-scoped API key. | |
| category | No | ||
| description | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate a mutating operation (readOnlyHint=false) and non-destructive (destructiveHint=false), but the description adds no behavioral details beyond 'create'. No mention of side effects, permissions, or rate limits. With annotations providing basic safety, description fails to add meaningful context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, concise and front-loaded. However, it could carry slightly more structured detail without harming conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters and no output schema, the description is too brief. It does not explain what a 'workspace' entails, what the return value is, or how parameters like tags or sources are used. Incomplete for effective agent usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is only 17% (one parameter described). The tool description does not elaborate on any parameters, leaving agents to infer meaning from names alone. For 6 parameters including complex types like sources, this is insufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Create' and the resource 'Projects workspace', specifying purpose for a product, app, or codebase. It distinguishes from sibling tools like create_ad_campaign or create_goal.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (creating a project workspace) but provides no guidance on when not to use or alternatives like import_project_source. No explicit exclusions or context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_scheduled_report - Create Scheduled Report (Grade A)
Create a recurring analytics report that's generated and delivered automatically.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Report name | |
| schedule | Yes | Report frequency | |
| platforms | No | Platforms to include | |
| reportType | Yes | Report type | |
| deliveryMethod | No | How to deliver the report | |
| deliveryTarget | No | Email address, Slack channel, or webhook URL |
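The deliveryMethod/deliveryTarget pairing is the one non-obvious interaction here: the target's meaning (email address, Slack channel, or webhook URL) depends on the method. A hedged sketch, with the reportType and schedule values assumed:

```python
import json

# Hypothetical create_scheduled_report arguments; enum values are assumptions.
report = {
    "name": "Weekly engagement summary",      # required
    "reportType": "engagement",               # required: valid types undocumented
    "schedule": "weekly",                     # required: frequency values undocumented
    "platforms": ["linkedin", "instagram"],
    "deliveryMethod": "email",                # determines how deliveryTarget is read
    "deliveryTarget": "growth@example.com",   # would be a channel or URL for other methods
}
print(json.dumps(report, indent=2))
```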
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are present (readOnlyHint=false, destructiveHint=false); the description adds 'generated and delivered automatically' but does not detail side effects such as whether the first run happens immediately or whether delivery is confirmed. Acceptable, but minimal extra context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single short sentence, front-loaded with the core purpose. No wasted words; efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema and the description omits return value (e.g., report ID). For a creation tool with 6 parameters, it lacks information on what the tool returns and how to manage the created report.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all 6 parameters. The description adds general context of recurrence and auto-delivery but no per-parameter detail, meeting baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (create), the resource (recurring analytics report), and key features (generated and delivered automatically), distinguishing it from siblings like run_analytics_report or schedule_content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for recurring automatic reports but does not explicitly state when to use it vs alternatives (e.g., run_analytics_report for one-time reports) or provide any exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_upload_session - Create Upload Session (Grade A)
Create a browser upload link for media files. ALWAYS use this when the user shares an image or video in chat — their file is local and cannot be passed directly to publish_content.
WORKFLOW:
1. Call this tool to get an uploadUrl
2. Give the user the link to open in their browser and upload their file
3. After upload, call get_upload_session to get the public media URL(s)
4. Use the returned URL with publish_content or schedule_content
Supports up to 20 files per session. Expires in 15 minutes.
| Name | Required | Description | Default |
|---|---|---|---|
| mediaType | No | Optional hint for expected media type |
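Since this is the rare description that spells out a full workflow, here is a sketch of those four steps as an async helper. `call_tool` stands in for any MCP client invocation of the form (name, arguments) -> dict; the result field names (uploadUrl, sessionId, mediaUrls) and the publish_content argument names are all assumptions, since the tool publishes no output schema:

```python
# Sketch of the documented four-step upload workflow; field names are assumptions.
async def upload_then_publish(call_tool, caption: str, platforms: list[str]) -> dict:
    # Step 1: create the session and get a browser upload link.
    session = await call_tool("create_upload_session", {"mediaType": "image"})
    # Step 2: hand the link to the user (the session expires in 15 minutes).
    print("Open this link and upload your file:", session["uploadUrl"])
    # Step 3: once the user confirms, fetch the public media URL(s).
    result = await call_tool("get_upload_session", {"sessionId": session["sessionId"]})
    # Step 4: pass the returned URL(s) to publish_content (or schedule_content).
    return await call_tool("publish_content", {
        "text": caption,
        "platforms": platforms,
        "mediaUrls": result["mediaUrls"],
    })
```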
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Goes beyond the annotations: it discloses the 20-files-per-session limit and the 15-minute expiry, and outlines the full workflow including post-upload steps, which is crucial for correct usage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise and well-structured: bold usage directive, numbered workflow steps, then constraints. Every sentence is necessary and adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description covers purpose, workflow, constraints, and prerequisite knowledge completely. Agent can use correctly without extra context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already covers the single optional parameter (mediaType) with enum and description. Description does not add extra meaning, so baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specifically states it creates a browser upload link for media files, distinguishing it from siblings like upload_media by clarifying it is for local files shared in chat.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs 'ALWAYS use this when the user shares an image or video in chat' and provides a clear 4-step workflow, including that files cannot be passed directly to publish_content.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_workflow - Create Workflow (Grade A)
Create a new automation workflow.
Workflows consist of a trigger and a sequence of steps.
Trigger types:
• manual - Run on demand
• schedule - Cron-based schedule
• event - Triggered by SendIt events (post published, mention detected, etc.)
• webhook - Triggered by external webhook
• connector_event - Triggered by connector-specific events
Step types:
• connector_action - Execute a connector operation
• connector_operation - Alias of connector_action
• agent_invoke - Run an AI agent
• condition - Conditional control step (skip subsequent steps)
• delay - Wait for a duration
• http_request - Outbound HTTP request
• transform - Transform data between steps
• notify - Send notification (email, Slack, etc.)
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Workflow name | |
| steps | Yes | Workflow steps in execution order | |
| active | No | Activate workflow immediately (default: false) | |
| description | No | Workflow description | |
| triggerType | Yes | Trigger type | |
| triggerConfig | Yes | Trigger configuration (cron expression, event type, etc.) |
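To show how the trigger and step taxonomies combine, here is a hypothetical payload for a cron-triggered workflow. The trigger and step type names come from the description; the triggerConfig key and the step config shapes are guesses:

```python
import json

# Hypothetical create_workflow arguments: a scheduled agent run followed by a notification.
workflow = {
    "name": "Nightly digest",                  # required
    "triggerType": "schedule",                 # one of the five documented trigger types
    "triggerConfig": {"cron": "0 7 * * *"},    # required: the "cron" key is a guess
    "steps": [
        {"type": "agent_invoke", "config": {"agent": "analytics-summarizer"}},   # shape assumed
        {"type": "notify", "config": {"channel": "slack", "target": "#growth"}}, # shape assumed
    ],
    "active": False,   # schema default; activate after verifying the workflow
}
print(json.dumps(workflow, indent=2))
```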
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=false, which agree with the 'create' action. The description adds value by detailing trigger and step types, which annotations do not cover. However, no additional behavioral traits like permissions or side effects are disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the purpose and uses bullet-point lists for triggers and steps. It is fairly concise for the amount of information, though the lists could be shortened if schema were more descriptive.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given full schema coverage and no output schema, the description adequately covers the key aspects: workflow creation, triggers, and step types. It is complete enough for an agent to understand how to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, but the description enriches this by listing trigger types with brief explanations and step types with descriptions. This adds meaning beyond the schema's simple enum and generic descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Create a new automation workflow' and explains its composition. This verb+resource pattern distinguishes it from sibling tools like delete_workflow, update_workflow, or trigger_workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool versus alternatives. It provides no guidance on when not to use it or what other tools exist for different purposes. Usage is implied by the name and context, but not articulated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
critique_post - Critique Post (Grade A)
Score and critique post content with AI-powered suggestions. Returns a quality score (0-100), breakdown, and improvement tips.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Post text to critique (required) | |
| mediaUrl | No | Optional media URL for media score | |
| platforms | Yes | Target platforms for scoring context (required) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=false, but the description does not clarify any potential side effects (e.g., logging, updating state). It adds context about AI-powered suggestions and the return format, but no behavioral traits beyond what annotations imply.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences that front-load the key action and output. No redundant or unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter tool with no output schema, the description is mostly complete: it explains what the tool returns (score, breakdown, tips). However, it lacks details on the breakdown structure or how improvement tips are presented, which would be helpful for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes all parameters. The description adds no additional meaning beyond 'AI-powered suggestions'. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool scores and critiques post content with AI suggestions, specifying the return values of a quality score (0-100), breakdown, and improvement tips. It effectively distinguishes from siblings like 'score_content' and 'validate_content' by focusing on both scoring and critique.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as 'score_content', 'validate_content', or 'analyze_project'. It lacks prerequisites, when-not-to-use conditions, or links to other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_library_item - Delete Library Item (Grade B, Destructive)
Delete a content library item.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The content library item ID to delete | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide destructiveHint=true and readOnlyHint=false. The description adds no additional behavioral context beyond stating the action. It does not contradict annotations but does not elaborate on side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence with no wasted words. However, it could be slightly expanded to include useful context without harming conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is minimal and does not address scope (team vs personal) or permanence. For a destructive tool with many siblings, more context is needed to ensure correct usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters (id, team_id) already described clearly. The description adds no new meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Delete a content library item' uses a specific verb ('Delete') and resource ('content library item'), clearly distinguishing it from siblings like create_library_item, update_library_item, get_library_item, and list_library.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives. No mention of prerequisites (e.g., needing the item ID), consequences (permanent deletion), or when not to use (e.g., if item is in use).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_post - Delete Published Post (Grade A, Destructive)
Delete a published post from a platform. Supports: X, Facebook, LinkedIn, Threads, YouTube, Pinterest, Bluesky, Mastodon, Telegram.
Provide the post ID (database ID returned when published through SendIt), the platform post ID, or the post URL.
| Name | Required | Description | Default |
|---|---|---|---|
| postId | Yes | Post identifier: database ID, platform post ID, or post URL | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| platform | Yes | Platform to delete from |
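The description accepts three identifier forms for postId. A short sketch of what each might look like; all values are hypothetical:

```python
# Three equivalent ways to identify the post to delete, per the description.
by_database_id = {"platform": "linkedin", "postId": "post_64f2a1"}   # SendIt database ID
by_platform_id = {"platform": "x", "postId": "1790000000000000001"}  # platform's own post ID
by_url = {"platform": "mastodon", "postId": "https://mastodon.social/@acme/112233"}  # post URL
```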
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructiveHint=true and readOnlyHint=false, so the description's 'Delete' aligns. The description adds supported platforms and identifier types but does not disclose irreversibility, permission requirements, or side effects beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first states action and platforms, second details identifier options. No redundant or superfluous information; front-loaded with critical info.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no output schema, and the description lacks information on success/error responses, confirmation of deletion, or behavior when the post does not exist. For a destructive tool with openWorldHint=true, additional context about effects on analytics or irreversibility would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description essentially restates the schema's parameter info (postId types, platform list) without adding new semantics or usage details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The name 'delete_post' and title 'Delete Published Post' clearly state the action and resource. The description explicitly says 'Delete a published post' and lists supported platforms, distinguishing it from sibling tools like delete_scheduled_post (which deletes scheduled posts) and edit_post (which modifies).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description details acceptable identifier formats (database ID, platform post ID, URL), providing usage context. However, it does not explicitly guide when to use this tool versus alternatives like delete_scheduled_post or when not to use it (e.g., for posts that are not published).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_scheduled_post - Delete Scheduled Post (Grade A, Destructive, Idempotent)
Cancel a scheduled post before it's published.
| Name | Required | Description | Default |
|---|---|---|---|
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| scheduleId | Yes | Schedule ID to delete |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide destructiveHint=true and idempotentHint=true. The description adds valuable behavioral context by noting the time constraint (before publication), which implies failure if post is already published. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that conveys the essential action and constraint without any fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple deletion tool, the description covers purpose and timing constraint. While it could mention error cases (e.g., if post already published or scheduleId invalid), the schema's required field and annotations provide sufficient context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with detailed descriptions for both parameters (scheduleId, team_id). The description does not add any additional parameter meaning, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'cancel' with the clear resource 'scheduled post', and includes a temporal constraint 'before it's published'. This distinctly differentiates from sibling tools like 'edit_scheduled_post' or 'trigger_scheduled_post'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description clearly states when to use (to cancel a scheduled post before publication) but does not explicitly mention when not to use or suggest alternatives. Given the simplicity of the action, the implied context is adequate but lacks exclusion guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_workflow - Delete Workflow (Grade A, Destructive)
Delete a workflow permanently.
| Name | Required | Description | Default |
|---|---|---|---|
| workflowId | Yes | Workflow ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructiveHint=true. Description adds 'permanently' confirming irreversibility, but no additional behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded verb, no waste. Efficiently communicates the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool with one parameter and no output schema. Description is sufficient but could mention that deletion is irreversible and may affect associated data. Still, adequate for the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter described as 'Workflow ID'. The description does not add extra meaning; baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (delete) and resource (workflow) with 'permanently' emphasizing irreversibility. It distinguishes itself from sibling tools like create_workflow, update_workflow, trigger_workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. No mention of prerequisites or caveats (e.g., ensuring user intends permanent deletion).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
disconnect_connector - Disconnect a Connector (Grade A, Destructive)
Disconnect and revoke credentials for a connector.
| Name | Required | Description | Default |
|---|---|---|---|
| connectorId | Yes | Connector ID to disconnect | |
| credentialId | No | Specific credential ID to revoke (optional) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare destructiveHint=true, indicating mutation. The description reinforces this by saying 'revoke credentials' but adds no new behavioral details beyond what annotations provide. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is extremely concise and front-loaded with the core action. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple destructive tool with two parameters and no output schema, the description is complete enough. Could mention irreversibility or permission requirements, but not essential.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter descriptions. The description adds no extra meaning beyond the schema, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Disconnect and revoke credentials for a connector' clearly states the action (disconnect and revoke) and the resource (connector). It distinguishes from sibling connect tools and other connector operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. While there are no direct sibling tools for disconnecting, the description lacks context on prerequisites (e.g., the connector must be connected) or post-conditions. A minimally viable score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
draft_reply - Draft Reply (Grade A)
Generate an AI-powered reply draft for a social mention. Returns a draft with tone and safety flags.
| Name | Required | Description | Default |
|---|---|---|---|
| tone | No | Reply tone: friendly, professional, casual | |
| max_length | No | Maximum reply length in characters (default: 280) | |
| mention_id | Yes | The ID of the mention to reply to (required) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=false and openWorldHint=true, indicating the tool may have side effects, but the description does not clarify if the draft is saved persistently or simply computed and returned. It adds context about safety flags but lacks detail on idempotency or mutation behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that captures the core purpose and output. It is concise and free of superfluous information, earning its place with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (3 params, no nested objects, no output schema), the description covers the basic purpose and output. However, it omits details such as what 'safety flags' entail, whether the draft is saved, and how it integrates with the mention ecosystem, leaving gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already describes all three parameters with clear descriptions and default values. The tool description adds no additional meaning beyond mentioning the return value, which is not a parameter concern. With 100% schema coverage, baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates an AI-powered reply draft for a social mention and specifies the output includes tone and safety flags. This verb-resource combination effectively distinguishes it from sibling tools like reply_to_comment or reply_to_conversation, which likely handle direct posting.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as reply_to_comment or critique_post. It does not mention prerequisites, limitations, or when a draft is preferable to a direct reply.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
duplicate_scheduled_post - Duplicate Scheduled Post (Grade A)
Create a new scheduled post by duplicating an existing scheduled post. If no time is supplied, SendIt shifts it forward automatically.
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | Compatibility alias for postId. | |
| postId | No | Scheduled post ID. | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| scheduleId | No | Compatibility alias for postId. | |
| scheduledAt | No | Optional ISO 8601 time for the duplicated post. |
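Because the schema exposes postId plus two compatibility aliases (id, scheduleId), a caller should presumably supply exactly one of the three; that reading, and the values below, are assumptions:

```python
# Hypothetical duplicate_scheduled_post arguments. Omitting scheduledAt lets SendIt
# shift the duplicate forward automatically, per the description.
duplicate_args = {
    "postId": "sched_42",                      # or the "id"/"scheduleId" aliases
    # "scheduledAt": "2025-06-02T09:00:00Z",   # optional explicit ISO 8601 time
}
```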
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate a write operation (readOnlyHint=false) and non-destructive (destructiveHint=false). Description adds automatic time shifting behavior, but does not disclose other side effects (e.g., whether the original post remains, if it triggers approvals). Adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, directly to the point with no wasted words. Essential information is front-loaded: action, resource, and key behavior regarding time.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
There is no output schema, and the description does not mention the return value or what happens upon creation. It also lacks context about team_id usage. For a creation tool with 5 parameters, this is incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so schema already documents parameters. Description adds minimal context by mentioning time shifting, but does not clarify the anyOf requirement (aliases for postId). Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Create a new scheduled post by duplicating an existing scheduled post', providing specific verb and resource. It distinguishes from siblings like 'edit_scheduled_post' or 'create_scheduled_post' by emphasizing duplication and automatic time shifting.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes the automatic time shifting when no time is supplied, but does not explicitly compare to alternatives like 'edit_scheduled_post' or 'schedule_content'. It lacks exclusions or when-to-use versus when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
edit_post - Edit Published Post (Grade A, Idempotent)
Edit a published post on a supported platform. Updates the text/caption of an already published post.
SUPPORTED: YouTube (title/description/tags), LinkedIn, Facebook, Mastodon, Telegram, Bluesky
NOT SUPPORTED: X, Threads, Instagram, Pinterest, TikTok
Requires the published post ID returned when the post was originally published through SendIt.
| Name | Required | Description | Default |
|---|---|---|---|
| tags | No | New tags (YouTube only) | |
| text | No | New text/caption/commentary for the post | |
| title | No | New title (YouTube only) | |
| postId | Yes | Published post ID (database ID returned when the post was published) | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| description | No | New description (YouTube only) |
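Two hedged payload sketches that follow the platform split the schema describes: most supported platforms take only new text, while title/description/tags are YouTube-only. All IDs and values are illustrative:

```python
# Hypothetical edit_post payloads (postId values are illustrative database IDs).
edit_linkedin = {
    "postId": "post_64f2a1",
    "text": "Updated caption with the corrected launch date.",
}
edit_youtube = {
    "postId": "post_9b1c3d",
    "title": "SendIt 2.0 walkthrough",                            # YouTube only
    "description": "Full tour of the new scheduling features.",   # YouTube only
    "tags": ["sendit", "social media"],                           # YouTube only
}
```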
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations (readOnlyHint=false, idempotentHint=true) indicate modification but non-destructive and idempotent behavior. The description adds that YouTube updates include title, description, tags, but doesn't discuss potential side effects like platform-specific limits or response format. With annotations present, this is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (5 lines), front-loaded with main purpose, and uses clear sections (SUPPORTED/NOT SUPPORTED, requirement). Every sentence adds value without unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 6-param tool with full schema coverage and no output schema, the description covers the core use case, platform support, and prerequisites. It doesn't explain return values, but that's acceptable given the operation. A slight gap is the lack of information on what happens if the post is not found or editing fails.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the description adds minimal new information. It reiterates that text/caption is updated and that YouTube params are YouTube-only, which is already in schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it edits a published post, updating text/caption. It lists supported and unsupported platforms, distinguishing it from siblings like delete_post or edit_scheduled_post.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies when to use (for published posts) and provides a list of supported vs unsupported platforms. It also notes the requirement for the original publish ID. However, it doesn't explicitly mention alternatives like edit_scheduled_post for scheduled posts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
edit_scheduled_postEdit Scheduled PostAIdempotentInspect
Edit a pending scheduled post. Supports text, media, platform, timing, approval-scope, and publish-now updates.
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | Compatibility alias for postId. | |
| text | No | Updated post text. | |
| postId | No | Scheduled post ID. | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| mediaUrl | No | Replacement public HTTPS media URL, or an empty value to remove media. | |
| timezone | No | Optional IANA timezone for the updated schedule. | |
| mediaUrls | No | Replacement public HTTPS media URLs. | |
| platforms | No | Optional replacement platform list. | |
| applyScope | No | Whether to apply the edit to only this post or this and future occurrences. | |
| publishNow | No | If true, publish immediately after applying the edit. | |
| scheduleId | No | Compatibility alias for postId. | |
| scheduledAt | No | Compatibility alias for scheduledTime. | |
| firstComment | No | Optional replacement Instagram first comment. | |
| scheduledTime | No | Updated ISO 8601 scheduled publish time. | |
| pinterestBoardId | No | Optional replacement Pinterest board ID. |
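A hedged sketch of a typical edit, using the same hypothetical `call_tool` stand-in; the `applyScope` value is an assumption since the schema enum is not shown:

```python
# Hypothetical edit_scheduled_post payload; IDs and times are illustrative.
args = {
    "postId": "sch_4471",                     # `id` and `scheduleId` are aliases
    "text": "Revised caption",
    "scheduledTime": "2024-06-01T09:00:00Z",  # ISO 8601; `scheduledAt` is an alias
    "timezone": "Europe/Berlin",              # IANA timezone
    "applyScope": "this_post",                # assumed value; enum not documented
    "publishNow": False,                      # True would publish right after the edit
}
# result = call_tool("edit_scheduled_post", args)  # hypothetical helper
```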
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds context about the types of updates supported, which goes beyond the annotations (readOnlyHint=false, idempotentHint=true). However, it does not disclose potential side effects (e.g., triggering approvals, constraints on post state) or the implications of openWorldHint=true. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short, front-loaded sentences with no wasted words. It efficiently communicates the tool's purpose and scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having 15 parameters and no output schema, the description provides only a high-level list of supported updates. It lacks information on return values, error conditions, the effect of 'applyScope', and behavioral nuances for complex changes. The description is insufficient for a tool with this many options.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description maps its listed update types (text, media, etc.) to parameters, providing some organizational context. However, it does not explain parameter details beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'edit' and the resource 'pending scheduled post', listing the specific aspects it supports (text, media, platform, timing, approval-scope, publish-now). This effectively distinguishes it from sibling tools like delete_scheduled_post or edit_post.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for pending scheduled posts but does not explicitly state when to use this tool versus alternatives like edit_post (which may be for published posts) or when not to use it. No prerequisites or context are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
escalate_to_supportEscalate to SupportAInspect
Escalate a social conversation to a connected support platform (Zendesk, Intercom, etc.).
Creates a support ticket from the conversation context.
| Name | Required | Description | Default |
|---|---|---|---|
| note | No | Internal note for the support team | |
| priority | No | | |
| conversationId | Yes | Conversation to escalate | |
| targetConnector | Yes | Support platform to escalate to |
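A minimal escalation sketch (hypothetical `call_tool` helper); the `priority` value is an assumption because the parameter is undocumented:

```python
# Hypothetical escalate_to_support payload; IDs are illustrative.
args = {
    "conversationId": "conv_2093",
    "targetConnector": "zendesk",     # must already be connected
    "priority": "high",               # assumed value; schema gives none
    "note": "Customer reports repeated billing failures",
}
# ticket = call_tool("escalate_to_support", args)  # hypothetical helper
```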
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate the tool is mutating (readOnlyHint=false) and not destructive (destructiveHint=false). The description adds that it creates a ticket, but does not disclose whether the conversation is modified, closed, or other side effects. Given the annotations, the description provides marginal extra value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with two sentences that front-load the primary action. The phrasing is slightly awkward (the trailing 'etc.).'), and it could be more structured, but overall it is efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has 4 parameters and no output schema, yet the description does not explain what the agent receives after the call (e.g., ticket ID or status). It also omits context about requiring a connected platform or conversation existence. Completeness is adequate but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers 75% of parameters with descriptions. The tool description does not add any additional meaning beyond what is already in the schema. For high coverage, baseline is 3, and no extra value is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('escalate') and resource ('social conversation to a connected support platform'), and distinguishes it from sibling tools by specifying a unique purpose. It explicitly mentions creating a support ticket, leaving no ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a conversation needs to be escalated to a support platform, but lacks explicit guidance on when to use this tool versus alternatives (e.g., reply_to_conversation, update_conversation) and does not mention prerequisites or conditions for use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
execute_connector_operationExecute Connector OperationAInspect
Execute a specific operation on a connected connector.
Use get_connector_capabilities to discover available operations. Operations include read/write actions specific to each connector.
| Name | Required | Description | Default |
|---|---|---|---|
| data | No | Operation-specific input data | |
| operation | Yes | Operation name (e.g., 'publish', 'read_posts', 'campaigns.list') | |
| connectorId | Yes | Connector ID | |
| idempotencyKey | No | Idempotency key to prevent duplicate operations (optional) |
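A sketch of the discover-then-execute flow the description recommends, again with the hypothetical `call_tool` helper; connector IDs and operation names are illustrative:

```python
# Step 1 (hypothetical): discover what the connector supports.
# caps = call_tool("get_connector_capabilities", {"connectorId": "conn_77"})

# Step 2: execute one of the reported operations.
args = {
    "connectorId": "conn_77",
    "operation": "campaigns.list",           # illustrative operation name
    "data": {"limit": 10},                   # operation-specific input
    "idempotencyKey": "req-2024-06-01-001",  # guards against duplicate operations
}
# result = call_tool("execute_connector_operation", args)
```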
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=false and destructiveHint=false, so the description adds that operations include read/write actions, but does not elaborate on side effects, idempotency behavior (though idempotencyKey parameter exists), or error handling. It provides moderate additional context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences, front-loaded with the main action, and every sentence adds value. No redundant or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has 4 parameters and no output schema, yet the description does not explain the return value structure or how the 'data' parameter is used. It notes that operations are connector-specific, but for a generic execution tool, more detail on expected outcomes would improve completeness. Still, it provides minimal viable context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for all 4 parameters, so the schema already documents them. The description only adds a general note about discovering operations (related to the 'operation' parameter) and does not provide additional parameter-specific meaning. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Execute' and resource 'a specific operation on a connected connector', and distinguishes from siblings by referencing get_connector_capabilities for discovering operations. It also notes that operations include read/write actions, reinforcing its specific purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly guides to 'Use get_connector_capabilities to discover available operations' before executing. It implies the connector must already be connected, but does not explicitly state when not to use this tool or alternatives. Still, it provides clear context for appropriate use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_ai_mediaGenerate AI MediaAInspect
Generate AI images or videos using approved media providers.
Supported providers:
- heygen-mcp: HeyGen Direct API or MCP video/avatar generation
- codex-oauth-image: Codex OAuth image generation for gpt-image-2
Returns a job ID that can be polled for status.
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | Creative prompt for generation | |
| provider | Yes | AI media provider | |
| media_type | No | Type of media to generate (default: video) | |
| parameters | No | Provider-specific parameters (e.g. duration, resolution, style, aspectRatio) |
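A hedged generate-and-poll sketch; the polling loop and its response field names (`job_id`, `status`) are assumptions, and `check_ai_media_status` is the sibling tool the review below mentions:

```python
# Hypothetical generate_ai_media payload; parameter values are illustrative.
args = {
    "prompt": "30-second product teaser with an energetic voiceover",
    "provider": "heygen-mcp",
    "media_type": "video",                                  # default when omitted
    "parameters": {"duration": 30, "aspectRatio": "9:16"},  # provider-specific
}
# job = call_tool("generate_ai_media", args)  # hypothetical helper
# Poll until done (field names are assumptions; no output schema is published):
# while call_tool("check_ai_media_status", {"job_id": job["job_id"]})["status"] == "pending":
#     time.sleep(5)
```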
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the async nature by mentioning the job ID for polling. Annotations are consistent (readOnlyHint false, destructiveHint false). No contradictions. It could mention potential failures or costs but is still good.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded. It explains the core function, lists providers clearly, and ends with the output behavior. No unnecessary sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (4 params, nested object, no output schema), the description covers the main purpose, async polling, and providers. It lacks guidance on provider-specific parameters but is otherwise complete for usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description lists providers and notes that a job ID is returned, but adds no detail beyond the schema's parameter descriptions. The 'parameters' object is noted as provider-specific, yet no examples are given.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates AI images or videos using approved providers and returns a job ID for polling. It distinguishes itself from siblings like check_ai_media_status (polling) and upload_media (manual upload).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists supported providers and notes the returned job ID, but does not explicitly state when to use this tool versus alternatives like generate_content or upload_media. The guidelines are implied rather than explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_contentGenerate AI ContentARead-onlyIdempotentInspect
Generate AI-powered platform-optimized content without publishing.
Uses AI to create platform-specific text, hashtags, and titles from a prompt or media URL. Respects brand voice profiles if configured.
Returns generated content variants for each target platform. Use publish_content to publish the generated content, or publish_ai to generate and publish in one step.
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | No | Creative prompt or topic for content generation | |
| hashtags | No | Hashtag mode | |
| mediaUrl | No | Media URL to analyze for content generation | |
| strictAi | No | If true, fail instead of using fallback templates when AI is unavailable | |
| platforms | Yes | Target platforms to generate content for | |
| generation | No | Generation parameters for tone, style, and CTA | |
| contentOverrides | No | Manual overrides for generated content |
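A minimal generation sketch (hypothetical `call_tool` helper); platform strings, the hashtag mode, and the nested `generation` fields are illustrative, not confirmed enum values:

```python
# Hypothetical generate_content payload.
args = {
    "prompt": "Announce our v2 launch, focusing on speed improvements",
    "platforms": ["linkedin", "x"],      # illustrative platform names
    "hashtags": "auto",                  # assumed hashtag-mode value
    "generation": {"tone": "confident", "cta": "Try it free"},  # assumed keys
    "strictAi": True,                    # fail rather than fall back to templates
}
# variants = call_tool("generate_content", args)
# A chosen variant can then be published with publish_content.
```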
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, non-destructive behavior. The description adds that it does not publish, respects brand voice profiles, and returns variants. No contradiction, and adds useful context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five concise sentences, front-loaded with primary function. Each sentence earns its place: purpose, what it generates, brand voice, return type, and usage guidelines. No redundancy or unnecessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers the main purpose, what it generates, the return type, sibling tools, and brand voice. There is no output schema, but the description states the output is content variants per platform. It omits explicit mention of error handling or limitations, but is adequate for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with descriptions for all 7 parameters. The description reinforces the tool's purpose but does not add significant new meaning beyond what's already in the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool generates AI content without publishing, specifies it creates platform-specific text/hashtags/titles from prompt or media URL, and differentiates from sibling tools like publish_content and publish_ai.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use alternatives: 'Use publish_content to publish the generated content, or publish_ai to generate and publish in one step.' Provides clear context for when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_post_bundleGenerate Post BundleAInspect
Generate multi-variant AI content with quality scoring for multiple platforms. Returns the best variant plus alternatives.
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | Content topic or instructions (required) | |
| platforms | Yes | Target platforms (required) | |
| generation | No | Generation parameters (tone, style, CTA) | |
| variant_count | No | Number of variants to generate (1-5, default: 3) |
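A brief payload sketch with illustrative values and the hypothetical `call_tool` helper:

```python
# Hypothetical generate_post_bundle payload.
args = {
    "prompt": "Three angles on our new analytics dashboard",
    "platforms": ["linkedin", "threads"],  # illustrative platform names
    "generation": {"tone": "friendly"},    # assumed key
    "variant_count": 5,                    # 1-5; defaults to 3 when omitted
}
# bundle = call_tool("generate_post_bundle", args)  # best variant plus alternatives
```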
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds 'AI content' and 'quality scoring' behavior beyond annotations. Annotations indicate readOnlyHint=false and openWorldHint=true, but no contradictions. The description does not detail side effects or prerequisites, but annotations already set expectations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two brief, informative sentences. It is concise and front-loads the key action and result, though it could be slightly more structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description explains the return (best variant plus alternatives). However, for a tool with multiple platforms and nested parameters, more detail on return structure or usage examples would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already explains all parameters. The description adds no new information about parameter meaning or usage beyond what is in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates multi-variant AI content with quality scoring for multiple platforms, and returns the best variant plus alternatives. This distinguishes it from siblings like generate_content or critique_post.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for multi-variant content with quality scoring but does not explicitly state when to use this tool versus alternatives (e.g., generate_content for single variants). No when-not-to-use guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_project_strategyGenerate Project StrategyBInspect
Generate project-local brand voice, positioning, audience, and social content strategy.
| Name | Required | Description | Default |
|---|---|---|---|
| team_id | No | Optional team ID or slug. Ignored when using a team-scoped API key. | |
| projectId | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (non-read-only, open-world, non-destructive), the description adds minimal behavioral context: no mention of side effects, auth requirements, or idempotency. It therefore carries little extra value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words, though it could benefit from additional context without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description does not mention what the tool returns (e.g., a strategy plan or a success message) and lacks any guidance on expected output or behavior, leaving the agent underinformed for a strategy generation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With schema coverage at 50%, the description does not explain the purpose of projectId or the optional team_id beyond what the schema provides, failing to compensate for the missing parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates project-local strategy for brand voice, positioning, audience, and social content, which distinguishes it from siblings like generate_content or analyze_project.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for project-specific strategy generation but does not explicitly state when to use it versus alternatives like create_brand_voice or generate_content, nor does it mention when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ad_performanceGet Ad PerformanceARead-onlyInspect
Get performance metrics for campaigns, ad sets, or creatives.
Returns impressions, clicks, spend, conversions, CTR, CPC, CPM, ROAS, and custom metrics. Supports date range filtering and breakdown by day.
| Name | Required | Description | Default |
|---|---|---|---|
| endDate | No | End date (YYYY-MM-DD) | |
| metrics | No | Specific metrics to retrieve (optional, returns all by default) | |
| entityId | Yes | Entity ID | |
| breakdown | No | Breakdown dimension | |
| startDate | No | Start date (YYYY-MM-DD) | |
| entityType | Yes | Entity type to get metrics for |
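A hedged query sketch; the entity type and metric names are assumptions since the schema enums are not shown:

```python
# Hypothetical get_ad_performance payload; IDs and dates are illustrative.
args = {
    "entityType": "campaign",                   # assumed enum value
    "entityId": "camp_5520",
    "startDate": "2024-05-01",                  # YYYY-MM-DD
    "endDate": "2024-05-31",
    "breakdown": "day",                         # per the description's day breakdown
    "metrics": ["impressions", "ctr", "roas"],  # omit to receive all metrics
}
# report = call_tool("get_ad_performance", args)  # hypothetical helper
```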
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive behavior. The description adds useful behavioral details such as the list of returned metrics and supported breakdown dimensions, enhancing transparency without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, using two short paragraphs to convey essential information. It is efficient and front-loaded with the main purpose, but could be slightly more structured by grouping related features.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only analytics tool with no output schema, the description adequately covers the main behavior, entity types, metrics, and filtering options. It omits pagination or limits but is sufficient for typical usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds marginal value by mentioning date range filtering and breakdown by day, which aligns with startDate, endDate, and breakdown parameters, but does not provide significant additional meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves performance metrics for campaigns, ad sets, or creatives, listing specific metrics and filtering options. It is specific verb+resource but does not explicitly differentiate from sibling analytics tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by stating supported features (date range, breakdown) but lacks explicit when-to-use or when-not-to-use guidance relative to sibling tools like get_analytics or get_unified_ad_report.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_agent_policiesGet Agent PoliciesARead-onlyInspect
Get the active policies/guardrails for an agent.
| Name | Required | Description | Default |
|---|---|---|---|
| agentId | Yes | Agent to get policies for |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the description doesn't need to repeat safety. It adds no behavioral context beyond stating it retrieves 'active policies'. Adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One concise sentence with no unnecessary words. Front-loads the key information effectively.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, read-only, no output schema), the description is complete. It tells the agent exactly what it does and what is needed for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage and already describes agentId fully with enum. Description adds no extra meaning or format details beyond what is in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Get the active policies/guardrails for an agent', specifying the verb 'get' and the resource 'policies/guardrails'. This distinguishes it from sibling 'update_agent_policy' which modifies policies.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs alternatives. The agent must infer from the name and description that it's for reading policies, but there is no 'when not to use' or mention of alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_agent_runGet Agent Run DetailsARead-onlyInspect
Get the details and output of a specific agent run.
| Name | Required | Description | Default |
|---|---|---|---|
| runId | Yes | Agent run ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already confirm this is a read-only, non-destructive operation. The description adds modest value by stating it returns 'details and output,' but lacks details on authorization needs, error handling (e.g., invalid runId), or rate limits. For a simple read tool, this is adequate but not exceptional.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no unnecessary words. It directly states the tool's purpose without repetition or fluff, achieving an optimal size for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one required parameter and read-only annotations (no output schema), the description is largely complete. It indicates the tool returns 'details and output,' though more specifics (e.g., run status, steps) could improve completeness. The context signals show low complexity, so minor gaps are acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema provides 100% coverage for the single parameter (`runId` with description 'Agent run ID'). The description does not add any additional meaning or context beyond what the schema already conveys. With high schema coverage, a score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and the resource ('details and output of a specific agent run'). It effectively distinguishes this tool from siblings like 'list_agent_runs' (which lists runs) and 'get_workflow_run' (a different resource).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies this tool is for retrieving a specific run, it provides no explicit guidance on when to use it versus alternatives (e.g., 'list_agent_runs' to browse or 'get_workflow_run' for different resources). No exclusion criteria or context for when not to use are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_analyticsGet AnalyticsARead-onlyIdempotentInspect
Get engagement analytics for a platform. Facebook and TikTok can include account-wide posts not published through SendIt; unresolved TikTok inbox-draft deliveries may appear as placeholders until SendIt can link them.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Optional maximum posts to return. Facebook feed scanning is capped at 1,000 posts. | |
| endDate | No | Optional end date (YYYY-MM-DD). | |
| team_id | No | Team ID to get team analytics. If omitted, gets personal analytics. | |
| platform | Yes | Platform to get analytics for | |
| startDate | No | Optional start date (YYYY-MM-DD). For Facebook, omit this to scan up to 2 years of Page feed history. |
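A sketch of the Facebook full-history case the schema hints at, using the hypothetical `call_tool` helper:

```python
# Hypothetical get_analytics payload; values are illustrative.
args = {
    "platform": "facebook",
    "endDate": "2024-06-30",   # YYYY-MM-DD
    # startDate omitted on purpose: per the schema, Facebook then scans up to
    # 2 years of Page feed history (feed scanning is capped at 1,000 posts).
    "limit": 200,
}
# analytics = call_tool("get_analytics", args)
```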
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the description does not need to restate safety. It adds value by disclosing TikTok-specific behavior (placeholders for unresolved inbox drafts) and scope (account-wide posts, including ones not published through SendIt).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first is front-loaded with main purpose, second adds a platform-specific caveat. No redundant or unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
There is no output schema, so the description should clarify what 'engagement analytics' includes (e.g., metrics like likes, comments, and shares) and what default time window is covered. Currently it is vague, leaving the agent to infer the output structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description provides minimal extra parameter insight beyond the schema, only hinting at TikTok behavior affecting parameter usage (platform=tiktok). No syntax or format details added.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool retrieves engagement analytics for a platform and adds a TikTok-specific nuance. However, among sibling analytics tools (e.g., get_post_analytics, get_unified_analytics), it does not explicitly differentiate its scope or usage.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage for engagement analytics is implied, but there is no explicit when-to-use or when-not-to-use guidance. No alternatives are mentioned despite the many sibling analytics tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_anomaly_alertsGet Anomaly AlertsARead-onlyInspect
Get recent anomaly alerts detected across metrics.
Includes sudden drops/spikes in engagement, unusual spend patterns, and content performance outliers.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default 20) | |
| severity | No | Filter by severity (optional) | |
| dismissed | No | Include dismissed alerts |
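A minimal filter sketch; the severity value is an assumption since the enum is not shown:

```python
# Hypothetical get_anomaly_alerts payload.
args = {
    "severity": "high",    # assumed value
    "dismissed": False,    # exclude already-dismissed alerts
    "limit": 50,           # default is 20
}
# alerts = call_tool("get_anomaly_alerts", args)  # hypothetical helper
```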
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, and the description adds concrete behavioral context (types of anomalies detected). No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no wasted words. Front-loaded with the core action and examples.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only list tool with no output schema, the description provides sufficient context about the kind of data returned. Could mention pagination or fields, but acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All parameters have descriptions in the schema (100% coverage). The description does not add further parameter details, meeting the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool retrieves 'recent anomaly alerts' and gives concrete examples like 'sudden drops/spikes in engagement', which distinguishes it from other analytics tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit instructions on when to use this tool versus alternatives (e.g., other analytics tools). The context is implied but not differentiated from siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_approval_commentsGet Approval CommentsARead-onlyIdempotentInspect
List approval comments for one scheduled post in team approval scope.
| Name | Required | Description | Default |
|---|---|---|---|
| postId | No | Scheduled post ID to load comments for. | |
| post_id | No | Compatibility alias for postId. | |
| team_id | Yes | Team ID is required because approval comments are only available in team scope. |
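A minimal sketch showing the required team scope (hypothetical `call_tool` helper, illustrative IDs):

```python
# Hypothetical get_approval_comments payload; team_id is required because
# approval comments exist only in team scope.
args = {
    "team_id": "team_acme",
    "postId": "sch_4471",   # `post_id` is a compatibility alias
}
# comments = call_tool("get_approval_comments", args)
```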
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, non-destructive behavior. The description adds scope context but no additional behavioral details like pagination.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One sentence, no fluff, fully front-loaded. It efficiently conveys the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list operation with well-documented parameters and clear annotations, the description is sufficiently complete though it omits return format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for each parameter. The description adds no new info beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists approval comments for one scheduled post in team scope, which distinguishes it from siblings like list_pending_approvals or approve_post.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving comments on a specific post but does not explicitly state when to use this tool vs. alternatives like get_scheduled_post.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_attribution_reportGet Attribution ReportARead-onlyInspect
Generate an attribution report showing how channels contribute to conversions.
Models:
- first_touch - All credit to first interaction
- last_touch - All credit to last interaction
- linear - Equal credit across all touchpoints
- time_decay - More credit to recent touchpoints
- position_based - 40% first, 40% last, 20% middle
- data_driven - ML-based attribution
| Name | Required | Description | Default |
|---|---|---|---|
| model | Yes | Attribution model | |
| endDate | Yes | | |
| startDate | Yes | | |
| conversionType | No | Filter by conversion type (optional) |
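A hedged report request; the date format is assumed to be YYYY-MM-DD because startDate and endDate are undocumented in the schema:

```python
# Hypothetical get_attribution_report payload.
args = {
    "model": "position_based",   # 40% first touch, 40% last, 20% middle
    "startDate": "2024-01-01",   # assumed format
    "endDate": "2024-03-31",     # assumed format
    "conversionType": "signup",  # optional filter; value is illustrative
}
# report = call_tool("get_attribution_report", args)  # hypothetical helper
```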
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the safety profile is clear. The description adds value by listing model options but does not disclose other behavioral aspects like output format or date range handling. With annotations covering the basic behavior, the description is adequate but not exceptional.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with a clear first sentence stating purpose, followed by a bullet list of models. It avoids redundancy but could be slightly more structured by briefly mentioning other parameters or output.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the schema coverage of 50%, no output schema, and simple annotations, the description could be more complete. It covers the core attribution-model concept well but lacks details on the expected output, the date format, and the optional conversion-type filter. This leaves moderate gaps in understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 50% coverage (only model has a description). The description compensates by explaining each model option in detail, which adds significant meaning beyond the enum values. However, parameters like startDate, endDate, and conversionType are not explained, leaving gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates an attribution report showing how channels contribute to conversions, which is specific and distinct from sibling report tools like get_analytics. The verb 'generate' and resource 'attribution report' are precise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as get_analytics or get_unified_analytics. The description focuses on what it does but does not indicate use cases or when to avoid it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_audit_logGet Audit LogARead-onlyIdempotentInspect
Retrieve audit log entries for your account.
Shows a timestamped trail of all actions performed: publishes, schedules, account connections, content library changes, etc.
Filter by action type, resource type, or date range.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max entries to return (default 50, max 200) | |
| action | No | Filter by action (e.g. 'post.published', 'account.connected') | |
| end_date | No | End date (YYYY-MM-DD) | |
| start_date | No | Start date (YYYY-MM-DD) | |
| resource_type | No | Filter by resource type (e.g. 'post', 'account', 'keyword') |
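A filter sketch reusing the example values the schema itself suggests, with the hypothetical `call_tool` helper:

```python
# Hypothetical get_audit_log payload.
args = {
    "action": "post.published",  # schema example
    "resource_type": "post",     # schema example
    "start_date": "2024-06-01",
    "end_date": "2024-06-30",
    "limit": 100,                # default 50, max 200
}
# entries = call_tool("get_audit_log", args)
```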
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true. The description adds context on the data returned (actions trail) and filtering, complementing annotations without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with three short sentences that front-load the main action and efficiently cover the key details without waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While no output schema exists, the description sufficiently explains the tool's output (timestamped action trail) and filtering options. It provides adequate context for an agent to understand usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and parameters are well-described in the schema. The description reiterates filtering by action type, resource type, or date range, but adds minimal new semantic value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Retrieve audit log entries for your account' and elaborates with specific examples of what the entries contain (publishes, schedules, etc.). It clearly identifies the tool's purpose and distinguishes it from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the tool provides a timestamped trail of actions and lists filtering options, implying usage for reviewing account activity. However, it does not explicitly exclude scenarios or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_benchmark_comparisonGet Benchmark ComparisonARead-onlyInspect
Compare your performance against industry benchmarks.
Returns how your engagement rates, growth, and content performance compare to similar accounts in your industry.
| Name | Required | Description | Default |
|---|---|---|---|
| metric | Yes | Metric to compare | |
| industry | No | Industry vertical for comparison | |
| platform | Yes | Platform to benchmark |
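A minimal sketch; the metric and industry values are assumptions since the enums are not shown:

```python
# Hypothetical get_benchmark_comparison payload.
args = {
    "platform": "instagram",      # illustrative platform
    "metric": "engagement_rate",  # assumed enum value
    "industry": "saas",           # optional vertical; assumed value
}
# comparison = call_tool("get_benchmark_comparison", args)  # hypothetical helper
```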
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already include readOnlyHint=true and destructiveHint=false, so the safety profile is clear. The description adds that the tool returns comparison data, which is helpful but does not disclose further behavioral traits such as permissions required or if it applies to a specific account.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two concise sentences. The first sentence states the primary action, and the second provides relevant detail. No extraneous words or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of annotations and schema coverage, the description is mostly complete. It explains what the tool does and returns, but lacks an explicit statement that the comparison is for the current account or that platform and metric are required (though the schema covers the required fields). There is no output schema, but the description suffices.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with parameter descriptions, so the baseline is 3. The description adds context by listing example metrics (matching the metric enum), but does not explicitly cover platform or industry parameters. Additional meaning beyond the schema is marginal.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Compare your performance against industry benchmarks.' It uses a specific verb ('compare') and resource ('benchmarks'), and the second sentence enumerates concrete metrics (engagement rates, growth, content performance), which differentiates it from sibling tools like get_analytics that provide general data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for benchmarking but does not provide explicit guidance on when to use this tool versus alternatives, nor does it mention exclusions or prerequisites. There is no 'when not to use' advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_best_timesGet Best Times to PostARead-onlyIdempotentInspect
Get the optimal posting times for a platform based on your historical engagement data.
Returns top time slots ranked by engagement score. Falls back to industry defaults when insufficient personal data exists.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of time slots to return (default 5, max 20) | |
| platform | Yes | Platform to get best times for |
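A one-line sketch with illustrative values (hypothetical `call_tool` helper):

```python
# Hypothetical get_best_times payload.
args = {"platform": "linkedin", "limit": 10}  # limit: default 5, max 20
# slots = call_tool("get_best_times", args)
```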
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint, idempotentHint, non-destructive. The description adds the fallback behavior, which is consistent with the read-only nature. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no filler. Front-loaded with the core purpose, then additional detail about fallback. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers purpose and fallback but does not detail the output structure (e.g., each time slot has a time and score). Given no output schema, this is a minor gap for an agent using the result.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with clear descriptions for limit and platform. The description does not add further parameter-level information beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets optimal posting times for a platform using historical engagement data, with fallback to defaults. This is distinct from siblings like get_analytics or suggest_next_schedule_time.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the tool uses historical data and falls back to defaults when insufficient. It implies when to use it but does not explicitly state when not to use it or name alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_calendar_gapsGet Calendar GapsARead-onlyIdempotentInspect
Detect open days in the publishing calendar and return gap metadata for planning.
| Name | Required | Description | Default |
|---|---|---|---|
| end | Yes | Range end date in YYYY-MM-DD format. | |
| start | Yes | Range start date in YYYY-MM-DD format. | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| platforms | No | Optional platform subset to inspect. |
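A hedged argument sketch for a month-long gap scan; the platform identifiers are assumptions, and the session setup is as in the get_best_times sketch above.

```python
# Dates must be YYYY-MM-DD per the schema; platform IDs are assumptions.
arguments = {
    "start": "2025-06-01",                 # required range start
    "end": "2025-06-30",                   # required range end
    "platforms": ["linkedin", "threads"],  # optional subset to inspect
    # "team_id": "team_123",               # optional; omit for personal scope
}
# e.g. await session.call_tool("get_calendar_gaps", arguments)
```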
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds 'open days' and 'gap metadata' context but no additional behavioral traits beyond these annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no wasted words. It is appropriately sized for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description could be more specific about what 'gap metadata' includes. However, it is adequate given the tool's straightforward nature and strong annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with all parameters described. The description does not add meaning beyond the schema, so baseline score 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('detect') and resource ('open days in the publishing calendar') with a clear purpose ('for planning'). This distinguishes it from sibling tools like get_calendar_recommendations or get_best_times.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for planning but does not provide explicit guidance on when to use this tool versus alternatives, nor does it mention when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_calendar_recommendations - Get Calendar Recommendations (A, Read-only, Idempotent)
Return top recommended recurring posting slots from historical performance data.
| Name | Required | Description | Default |
|---|---|---|---|
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| timezone | No | Optional IANA timezone for the recommendation labels. | |
| platforms | No | Optional platform subset to optimize for. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly, idempotent, non-destructive behavior. Description adds that it uses historical performance data but no further behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single concise sentence that front-loads the purpose; no wasted words but could be expanded slightly for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, and description does not explain format or limits of recommendations; adequate given low complexity and good annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema provides 100% coverage with descriptions for all 3 parameters; description adds no additional meaning beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool returns top recommended recurring posting slots from historical performance data, distinguishing it from siblings like get_best_times.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies use for data-driven recommendations but does not explicitly state when to use this tool versus alternatives like get_best_times or suggest_next_schedule_time.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_connector_capabilities - Get Connector Capabilities (A, Read-only)
Get detailed capabilities and operations for a specific connector.
Returns supported operations, auth requirements, rate limits, and constraints.
| Name | Required | Description | Default |
|---|---|---|---|
| connectorId | Yes | Connector ID (e.g., 'linkedin', 'meta_ads', 'slack') |
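A sketch combining this tool with its sibling get_connector_health, reusing the session pattern above; the connector IDs shown are the examples the schema itself gives.

```python
# Inspect one connector's capabilities and health in a single pass.
async def inspect_connector(session, connector_id: str) -> None:
    caps = await session.call_tool(
        "get_connector_capabilities", {"connectorId": connector_id}
    )
    health = await session.call_tool(
        "get_connector_health", {"connectorId": connector_id}
    )
    print(caps.content, health.content)

# e.g. await inspect_connector(session, "linkedin")  # or "meta_ads", "slack"
```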
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds value by detailing what is returned (operations, auth, rate limits, constraints), providing behavioral context beyond the annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences that front-load the purpose and immediately state the return value. No extraneous information; each sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the minimal schema, no output schema, and presence of annotations, the description provides sufficient context about the return content. It could mention error handling or that connectorId must be from a known list, but overall it is adequately complete for a read-only tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter (connectorId) described with examples. The description does not add further meaning beyond the schema. Baseline 3 applies as schema covers the parameter adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves detailed capabilities/operations for a specific connector, listing supported operations, auth, rate limits, and constraints. It distinguishes from siblings like 'get_connector_health' (health status) and 'list_connectors' (listing connectors).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you need connector capabilities, but no explicit guidance on when to use this vs. alternatives like 'get_connector_health' or 'get_platform_requirements'. No when-not-to-use or exclusion criteria are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_connector_health - Get Connector Health (A, Read-only)
Get health status and SLO metrics for a connector.
| Name | Required | Description | Default |
|---|---|---|---|
| connectorId | Yes | Connector ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark it as a read-only, non-destructive operation. The description adds value by specifying the exact output: health status and SLO metrics, which is not defined in annotations or schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that efficiently conveys the tool's purpose without any unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple tool with one parameter and no output schema, the description adequately explains what the tool returns. It could optionally mention the structure of the response, but it's sufficient for a health check.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already describes the single parameter ('Connector ID') with 100% coverage. The description does not add any additional semantics or formatting details beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves health status and SLO metrics for a connector, which is a specific verb-resource pair. It distinguishes itself from siblings like 'get_connector_capabilities' and 'list_connectors'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool vs alternatives (e.g., when to use 'get_connector_capabilities' instead). No context or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_conversation - Get Conversation (A, Read-only)
Get full conversation thread with all messages.
| Name | Required | Description | Default |
|---|---|---|---|
| conversationId | Yes | Conversation ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, informing the agent of safety. The description does not add further behavioral details (e.g., pagination, performance, or authorization needs). No contradiction found.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is front-loaded with clear purpose. No filler words; every part adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with one parameter, the description is adequate. It omits details about what constitutes a 'full conversation thread' (e.g., includes replies or attachments), but is sufficient for typical use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter fully documented. The description adds no extra meaning beyond what the schema provides, so baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a full conversation thread including all messages, using specific verb 'Get' and resource 'conversation thread'. It distinguishes from siblings like list_conversations which only lists metadata, and reply_to_conversation which is for replying.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives (e.g., list_conversations, get_inbox_summary). The description does not mention prerequisites, limitations, or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_inbox_summary - Get Inbox Summary (A, Read-only)
Get a summary of the unified inbox: total conversations, unread count, sentiment breakdown, and response time metrics.
| Name | Required | Description | Default |
|---|---|---|---|
| timeRange | No | Time range for summary metrics |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false. Description adds value by naming the exact metrics included in the summary, providing behavioral context beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence, front-loaded with the action. Every word is necessary and informative, with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with one optional parameter, the description adequately lists returned metrics. It could clarify scope (e.g., 'for the current user') but is largely complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, fully describing the single parameter timeRange with enum values. Description does not add extra meaning beyond what schema already provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it retrieves a summary of the unified inbox, listing specific metrics (total conversations, unread count, sentiment, response time). This distinguishes it from list_inbox (list conversations) and get_conversation (single conversation).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (getting aggregate overview) but does not explicitly state when not to use or mention alternatives like list_inbox. However, the purpose is clear enough for an agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_library_item - Get Library Item (B, Read-only, Idempotent)
Get a specific content library item by ID.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The content library item ID | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, idempotentHint=true, destructiveHint=false, which cover safety and behavior. The description adds no further behavioral details (e.g., what is returned, any side effects). No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single sentence. It is front-loaded and contains no extraneous information. However, it could be slightly more detailed without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple get-by-ID operation with good annotations and full schema, the description is adequate but minimal. It omits details like what fields are returned or any permissions required, but these are not critical given the annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage, describing both 'id' and 'team_id' with their types and purposes. The description does not add any extra meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool retrieves a specific content library item by ID. The verb 'get' and resource 'library item' are specific. It is implicitly distinct from the sibling 'list_library', which retrieves multiple items, though the description never spells out that differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No usage guidelines provided. The description does not explain when to use this tool versus siblings like 'list_library' or 'update_library_item', nor does it suggest prerequisites or context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_platform_requirements - Get Platform Requirements (A, Read-only, Idempotent)
Get detailed content requirements for a platform.
Returns: Character limits, media specifications, rate limits, and special notes.
Call this when you need specifics like exact character counts, file size limits, or supported formats. The publish_content description has a quick reference, but this tool provides complete details.
| Name | Required | Description | Default |
|---|---|---|---|
| platform | Yes | Platform to get requirements for |
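A short pre-publish check, as the description itself advises; the platform value is an assumption and the flow around publish_content is a sketch, not a confirmed sequence.

```python
# Fetch exact limits before composing content for a platform.
arguments = {"platform": "tiktok"}  # assumed platform identifier
# e.g. requirements = await session.call_tool("get_platform_requirements", arguments)
# Read character limits and media specs from requirements.content, then
# shape the draft before calling publish_content.
```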
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, non-destructive. Description adds context on what is returned, such as rate limits and special notes, going beyond annotations to detail output content.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four concise sentences covering purpose, return types, and usage guidance. No unnecessary words; each sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description lists expected returns (character limits, media specs, rate limits, notes). References sibling tool for context. Sufficient for this simple query tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with a single 'platform' parameter, including an enum and description. Description does not add additional meaning beyond what the schema provides, so baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Get detailed content requirements for a platform' and lists specific return types (character limits, media specifications, etc.). Differentiates from sibling 'publish_content' by noting that this tool provides complete details rather than a quick reference.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Call this when you need specifics like exact character counts, file size limits, or supported formats.' Also contrasts with publish_content, providing clear when-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_platform_settings_schema - Get Platform Settings Schema (A, Read-only, Idempotent)
Get the JSON schema contract for content.platformSettings., including platform-specific publish options, validation-only gates, and supported per-platform content overrides.
| Name | Required | Description | Default |
|---|---|---|---|
| platform | Yes | Platform ID to inspect |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds context about what the schema includes (publish options, validation gates, overrides), which helps set expectations for the return value without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently packs the key information: action (get), resource (JSON schema contract), path, and included details. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description effectively explains the return value (JSON schema contract) and its contents. It could be slightly more explicit about the structure (e.g., JSON object), but it is sufficient for a simple one-parameter tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema provides full coverage (100%) with a description for the 'platform' parameter. The description does not add meaning beyond what the schema already states, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves the JSON schema contract for a specific platform, specifying the resource path and contents (publish options, validation gates, content overrides). This is distinct from sibling tools which focus on connecting, posting, or managing content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is used to inspect platform settings schema but does not explicitly state when to use it or when to prefer alternatives. No exclusion criteria or alternative tool names are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_post_analytics - Get Post Analytics (A, Read-only)
Fetch drilldown analytics for one published post, including per-snapshot metrics and campaign context.
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | Compatibility alias for postId. | |
| postId | No | Published post ID to inspect. | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| publishedPostId | No | Compatibility alias for postId. |
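An argument sketch; since three interchangeable identifiers exist and the description never says which to prefer, sending only the canonical postId is an assumption of reasonable practice, and the ID value is hypothetical.

```python
arguments = {
    "postId": "post_abc123",  # hypothetical ID; id/publishedPostId are aliases
    # "team_id": "team_123",  # optional; omit for personal scope
}
# e.g. await session.call_tool("get_post_analytics", arguments)
```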
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only and non-destructive. Description adds that it targets one published post and includes specific metrics, providing useful context beyond annotations without contradicting them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence of 14 words, front-loaded with verb 'Fetch'. No redundant information or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool, description covers what data is returned (per-snapshot metrics, campaign context). No output schema, but description suffices. Minor missing detail on what 'per-snapshot' means, but still complete enough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema describes all 4 parameters thoroughly (100% coverage). Description does not add meaning beyond what schema already provides, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb 'Fetch', resource 'drilldown analytics for one published post', and specifies what it includes ('per-snapshot metrics and campaign context'). Distinct from sibling tools like 'get_analytics'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this vs alternatives. The name and description imply use for per-post analytics, but no exclusions or alternative recommendations provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_producthunt_analytics - Get Product Hunt Analytics (A, Read-only, Idempotent)
Get analytics for your products on Product Hunt.
Returns:
- Total products, votes, comments, reviews
- Number of featured products
- Per-product metrics including vote counts and ratings
Works with read-only access (no whitelisting required).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum products to fetch (default 20) | |
| username | No | Product Hunt username (optional, defaults to connected account) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and non-destructive behavior. The description adds value by stating no whitelisting needed and listing return fields, which aligns with and extends annotation context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences plus a bullet list. Purpose is front-loaded, no redundant information. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with two optional parameters and no output schema, the description covers purpose, return metrics, and access condition. Could optionally mention pagination behavior, but schema covers limit.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description does not add parameter-specific meaning beyond what the schema already provides; the bullet list names returned metrics but does not tie them to parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get analytics for your products on Product Hunt' and lists specific metrics returned. It differentiates from generic analytics siblings by specifying Product Hunt context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'read-only access (no whitelisting required)' but does not explicitly state when to use this tool over alternatives like get_analytics or get_post_analytics. Usage is implied by the name and Product Hunt specificity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_project_analysis - Get Project Analysis (A, Read-only)
Fetch stored project analysis, strategy, recent generations, and activity.
| Name | Required | Description | Default |
|---|---|---|---|
| team_id | No | Optional team ID or slug. Ignored when using a team-scoped API key. | |
| projectId | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds context beyond annotations by listing the returned data types (analysis, strategy, generations, activity). Annotations already declare readOnlyHint=true and destructiveHint=false, so the description's disclosure of stored nature and data scope is helpful but not extensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It front-loads the main action and resource clearly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple fetch tool with two parameters and no output schema, the description covers the key return data types. It lacks details on error handling, pagination, or limits, but given low complexity, it is fairly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not explain or add meaning to parameters. Schema description coverage is 50% (only team_id has a description). The missing projectId description is not compensated by the tool description, leaving ambiguity about its format or usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it fetches stored project analysis, strategy, recent generations, and activity. The verb 'Fetch' is specific and aligns with read-only nature. It distinguishes from siblings like 'analyze_project' (which creates analysis) and 'generate_project_strategy'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance is provided. The purpose implies usage for retrieving existing analysis, but no exclusions or alternatives are mentioned. Siblings like 'analyze_project' are not referenced.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_scheduled_post - Get Scheduled Post (A, Read-only, Idempotent)
Fetch one scheduled post with approval, recurrence, retry, and published-link metadata.
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | Compatibility alias for postId. | |
| postId | No | Scheduled post ID. | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| scheduleId | No | Compatibility alias for postId. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds value by revealing that the response includes approval, recurrence, retry, and published-link metadata, providing behavioral context beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, highly concise. It front-loads the action and resource. However, it could include a brief note about the accepted ID parameters without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with no output schema, the description adequately explains the metadata returned. It does not mention the anyOf parameter options, but the schema already handles that. Sufficient for its complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description does not add any information about the parameters (e.g., difference between postId, scheduleId, id) beyond what the schema provides. No added value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Fetch' and the resource 'one scheduled post', and specifies the metadata included (approval, recurrence, retry, published-link). This distinguishes it from sibling tools like 'get_scheduled_posts' (plural) and 'edit_scheduled_post'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for fetching a single post with full details, but does not explicitly state when to use this tool versus alternatives. No guidance on when not to use it or which ID parameter to prefer.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_scheduled_posts - Get Scheduled Posts (B, Read-only, Idempotent)
List scheduled posts and their current status, including pending, publishing, TikTok draft-delivered, published, failed, or cancelled posts.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Optional forward-compatible maximum number of scheduled posts to return. | |
| cursor | No | Optional forward-compatible pagination cursor from a previous scheduled-post listing response. | |
| status | No | Optional forward-compatible status filter. Legacy handlers may continue returning pending posts only. | |
| endDate | No | Optional forward-compatible upper date bound in YYYY-MM-DD format. | |
| team_id | No | Team ID to list team's scheduled posts. If omitted, lists personal posts. | |
| platform | No | Filter by platform (optional) | |
| upcoming | No | Optional forward-compatible flag to prefer upcoming scheduled posts. | |
| startDate | No | Optional forward-compatible lower date bound in YYYY-MM-DD format. |
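A hedged pagination sketch built on the cursor parameter; there is no output schema, so the structuredContent payload and the posts/nextCursor field names are assumptions, not documented behavior.

```python
# Walk all pages of scheduled posts; response field names are assumptions.
async def fetch_all_scheduled(session, platform: str | None = None) -> list:
    posts, cursor = [], None
    while True:
        args: dict = {"limit": 50}
        if platform:
            args["platform"] = platform
        if cursor:
            args["cursor"] = cursor
        page = await session.call_tool("get_scheduled_posts", args)
        data = page.structuredContent or {}  # assumed structured payload
        posts.extend(data.get("posts", []))  # assumed field name
        cursor = data.get("nextCursor")      # assumed field name
        if not cursor:
            return posts
```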
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true, fully covering the safety profile. The description adds no new behavioral traits beyond repeating that it lists posts with status, which is already evident from the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that front-loads the action 'List scheduled posts' and specifies included statuses. While it could be trimmed slightly, it is efficient and avoids unnecessary fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple listing tool with no required parameters and full schema coverage, the description is adequate but does not mention pagination or the response format (e.g., that it returns a list with cursor). Without an output schema, adding such details would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema coverage is 100% with clear descriptions for all 8 optional parameters. The tool description does not enhance understanding of parameters, such as how 'upcoming' or 'cursor' interact with the listing. Baseline 3 is appropriate as schema already suffices.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists scheduled posts with their statuses, including specific statuses. However, it does not differentiate itself from sibling tools like 'get_scheduled_post' (singular) or 'list_calendar_events', missing an opportunity to clarify when this tool is appropriate.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites or context. For example, it does not indicate that 'get_scheduled_post' is for a single post while this lists multiple.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_tiktok_creator_info - Get TikTok Creator Info (A, Read-only, Idempotent)
Fetch fresh TikTok Content Posting API creator info for the connected account. Use this before rendering TikTok publish settings.
| Name | Required | Description | Default |
|---|---|---|---|
| team_id | No | Team ID to fetch the team TikTok account. If omitted, uses the personal TikTok account. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false, so the description's mention of 'fresh' adds limited value. No contradictions, and the description does not disclose additional behavioral traits like data source or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two succinct sentences with no wasted words. The purpose is front-loaded, making it easy for an agent to quickly understand the tool's core function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description does not detail what the returned creator info contains (e.g., fields, structure). While the tool is simple, the agent might need to know what data it receives. The description covers purpose and usage context but lacks output details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one optional parameter (team_id) described adequately. The description does not add any param-specific information beyond the schema, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb 'fetch' and resource 'TikTok Content Posting API creator info' for the connected account, with a specific usage context: 'before rendering TikTok publish settings.' This distinguishes it from sibling tools like get_platform_requirements.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use this before rendering TikTok publish settings,' providing a clear context of use. It does not mention when not to use or alternatives, but this is minor given its straightforward purpose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_unified_ad_report - Get Unified Ad Report (A, Read-only)
Get a cross-platform advertising performance report.
Aggregates metrics across all connected ad platforms into a single view. Useful for comparing performance across Meta, Google, LinkedIn, TikTok, etc.
| Name | Required | Description | Default |
|---|---|---|---|
| endDate | Yes | End date | |
| groupBy | No | Group results by dimension | |
| startDate | Yes | Start date |
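An argument sketch; the schema gives no date format here, so YYYY-MM-DD is assumed by analogy with get_unified_analytics, and the groupBy value is hypothetical.

```python
arguments = {
    "startDate": "2025-06-01",  # assumed YYYY-MM-DD
    "endDate": "2025-06-30",    # assumed YYYY-MM-DD
    "groupBy": "platform",      # hypothetical dimension value
}
# e.g. await session.call_tool("get_unified_ad_report", arguments)
```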
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and non-destructive nature. Description adds that the report aggregates metrics across all connected ad platforms, providing behavioral context. No contradictions. Could mention if data is real-time or cached, but sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences with no waste: the first states the action, the rest add context. Front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple nature of the tool (read-only report with three parameters), the description, annotations, and schema together provide complete information. No output schema needed for a standard report.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are well-documented in the schema. Description does not add additional meaning beyond what the schema provides (dates, groupBy). Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Title and description clearly state it retrieves a cross-platform advertising performance report, aggregating metrics across platforms. It distinguishes itself from other analytics tools like get_ad_performance and get_unified_analytics by specifying it covers ad platforms.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description indicates usefulness for comparing performance across Meta, Google, LinkedIn, TikTok, etc., providing clear usage context. However, it does not explicitly state when not to use this tool versus alternatives like get_ad_performance or get_unified_analytics.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_unified_analytics - Get Unified Analytics (A, Read-only)
Get cross-platform analytics combining organic and paid performance.
Aggregates metrics across all connected platforms into a single view. Supports time-series breakdowns and platform comparison.
| Name | Required | Description | Default |
|---|---|---|---|
| endDate | Yes | End date (YYYY-MM-DD) | |
| breakdown | No | Breakdown dimension | |
| platforms | No | Filter to specific platforms (optional, all if omitted) | |
| startDate | Yes | Start date (YYYY-MM-DD) | |
| includePaid | No | Include paid metrics (default true) | |
| includeOrganic | No | Include organic metrics (default true) |
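An organic-only, per-platform comparison sketch; the platform identifiers and the breakdown value are assumptions.

```python
arguments = {
    "startDate": "2025-06-01",             # YYYY-MM-DD per the schema
    "endDate": "2025-06-30",
    "platforms": ["instagram", "tiktok"],  # assumed platform identifiers
    "includePaid": False,                  # defaults to true when omitted
    "breakdown": "platform",               # hypothetical dimension value
}
# e.g. await session.call_tool("get_unified_analytics", arguments)
```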
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds that it aggregates metrics across platforms, which is useful context but does not disclose additional behavioral traits like performance impact or data freshness. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences with front-loaded purpose. No wasted words. Every sentence adds value: purpose, aggregation behavior, supported breakdowns.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a read-only analytics tool with well-documented parameters and annotations. Lacks prerequisites (e.g., connected platforms) or examples. No output schema reduces need for return value documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all 6 parameters. The description mentions 'time-series breakdowns and platform comparison', mapping to the breakdown parameter, but does not add meaningful detail beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Explicitly states 'Get cross-platform analytics combining organic and paid performance', which clearly identifies the tool's function and distinguishes it from siblings like 'get_analytics' (generic) or 'get_unified_ad_report' (ad-only).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides context that it aggregates across platforms and supports breakdowns, but does not explicitly state when to use this tool over alternatives (e.g., 'get_analytics' for single-platform) or when not to use it. No direct comparison or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_upload_session - Get Upload Session (A, Read-only, Idempotent)
Check the status of an upload session and get the media URL(s) once uploaded.
Call this after the user clicks the upload link to see if they've completed the upload.
Returns:
- status: 'pending' - User hasn't uploaded yet
- status: 'uploaded' - Upload complete, includes mediaUrl, mediaUrls, fileCount, mediaItems
- status: 'expired' - Session expired (15 min limit)
| Name | Required | Description | Default |
|---|---|---|---|
| sessionId | Yes | The session ID returned from create_upload_session |
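A polling sketch around the three documented statuses; the structuredContent payload and exact field placement are assumptions, though status, mediaUrls, and the 15-minute expiry come straight from the description.

```python
import asyncio

async def wait_for_upload(session, session_id: str, poll_seconds: int = 10):
    # Poll until the user finishes uploading or the session expires.
    while True:
        result = await session.call_tool(
            "get_upload_session", {"sessionId": session_id}
        )
        data = result.structuredContent or {}  # assumed structured payload
        status = data.get("status")
        if status == "uploaded":
            return data.get("mediaUrls")       # field named in the description
        if status == "expired":
            raise TimeoutError("upload session expired (15 min limit)")
        await asyncio.sleep(poll_seconds)      # still 'pending'; poll again
```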
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the description adds value by detailing the possible statuses (pending, uploaded, expired) and the session timeout of 15 minutes. This fully communicates expected behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences of main text followed by a clear bullet list of possible statuses and their accompanying fields. No unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the main return values and statuses but omits error handling or what happens with invalid session IDs. For a simple status check tool, it is mostly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single parameter 'sessionId' well-described. The description does not add additional semantic detail beyond referencing the session ID, so baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool checks the status of an upload session and retrieves media URLs upon completion. It distinguishes itself from sibling tools like 'create_upload_session' by explicitly indicating it is called after upload.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance to call this tool after the user clicks the upload link, and it references the session ID from create_upload_session. While it doesn't list alternative tools, the context makes usage clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_workflow_run - Get Workflow Run Details (A, Read-only)
Get detailed step-by-step execution log for a workflow run.
| Name | Required | Description | Default |
|---|---|---|---|
| runId | Yes | Workflow run ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so no safety concerns. The description adds that it returns 'step-by-step execution log', implying a chronological record, but does not disclose potential size limits, pagination, or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that conveys the core purpose without extraneous words. It is well front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple parameter set (1 required) and no output schema, the description gives a reasonable overview. However, the return format (e.g., steps with timestamps, status) is not detailed, which could leave agents uncertain about the response structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already has a clear description for 'runId' (100% coverage). The description adds no additional meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and clearly states the resource ('workflow run') and refined output ('detailed step-by-step execution log'). It naturally distinguishes itself from siblings like 'list_workflow_runs' and 'get_agent_run'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for retrieving logs, but does not explicitly state when to use it over alternatives (e.g., 'list_workflow_runs' for summaries) or mention any prerequisites or restrictions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
import_project_source · Import Project Source · Grade C
Import GitHub, app-store, website, file, or media source context into a project.
| Name | Required | Description | Default |
|---|---|---|---|
| sources | Yes | | |
| team_id | No | Optional team ID or slug. Ignored when using a team-scoped API key. | |
| projectId | Yes |
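Because the sources parameter carries no schema description, any call shape is a guess. The sketch below assumes an array of typed source objects; every field name and value is hypothetical.

```typescript
// Hypothetical arguments for import_project_source. The schema documents
// only team_id, so the structure of each sources entry is an assumption.
const importProjectSourceArgs = {
  projectId: "proj_123", // required; placeholder value
  sources: [
    { type: "github", url: "https://github.com/acme/widget" }, // assumed shape
    { type: "website", url: "https://acme.example" },           // assumed shape
  ],
  // team_id omitted; it is ignored with a team-scoped API key anyway
};
```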
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and openWorldHint=true, which the description partially reinforces by mentioning external sources. However, it does not disclose side effects, required permissions, or interaction details (e.g., fetching from URLs vs. uploading files).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence without redundancy. However, it could benefit from structured details like bullet points or examples to improve clarity for a multi-source tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (multiple source types, no output schema, open-world hint), the description is insufficient. It does not explain return values, error conditions, or how to specify each source type precisely, which is necessary for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is low (only 33%; team_id has a description). The description adds context for the 'sources' parameter by listing types (GitHub, app-store, etc.), but does not explain the required format or structure for each type, leaving ambiguity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool imports source context (GitHub, app-store, website, file, or media) into a project. It specifies a verb ('Import') and a resource ('source context'), but does not differentiate it from sibling tools like 'save_to_library' or 'upload_media'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. Sibling tools exist for similar actions, but the description offers no conditions, prerequisites, or comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
invoke_agent · Invoke AI Agent · Grade A
Invoke a specific AI agent with given inputs.
The agent will execute within policy constraints and return structured output. All agent runs are logged for audit and traceability.
| Name | Required | Description | Default |
|---|---|---|---|
| input | Yes | Agent-specific input parameters | |
| agentId | Yes | Agent to invoke | |
| context | No | Additional context (brand voice ID, date range, etc.) |
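A sketch of plausible invoke_agent arguments. The agent identifier and the fields inside input are invented for illustration, since agent-specific input structures are not documented.

```typescript
// Hypothetical invoke_agent arguments; all values are placeholders.
const invokeAgentArgs = {
  agentId: "calendar-optimizer",               // assumed agent identifier
  input: { timezone: "UTC", horizonDays: 14 }, // assumed agent-specific fields
  context: { brandVoiceId: "bv_001" },         // optional, per the schema
};
```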
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate non-read-only and non-destructive. The description adds that the agent executes within policy constraints, returns structured output, and logs runs for audit. This provides useful behavioral context beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences, each adding meaningful information. The first sentence states the primary action, the second clarifies constraints and output, and the third mentions logging. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description lacks details about what the agent does with the input, how to structure the input object, or what the output looks like. For a tool with no output schema and open-world hint, more context would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers 100% of parameters with descriptions. The description does not add extra meaning beyond the schema, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Invoke a specific AI agent with given inputs' and mentions returning structured output. It distinguishes from sibling tools like list_agents (which lists) and get_agent_policies (which gets policies) by focusing on execution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives (e.g., get_agent_run, list_agent_runs). It does not mention prerequisites or when not to use it, leaving the agent to infer from context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_ad_accounts · List Ad Accounts · Grade A · Read-only
List all connected advertising accounts across platforms. Returns ad accounts from Meta Ads, Google Ads, LinkedIn Ads, TikTok Ads, etc.
| Name | Required | Description | Default |
|---|---|---|---|
| platform | No | Filter by ad platform (e.g., 'meta_ads', 'google_ads_search') |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, making the safety profile clear. The description adds context about multi-platform aggregation but does not disclose pagination, rate limits, or other behavioral nuances.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, each adding value. The first sentence states the purpose, the second provides examples. No fluff or repetition. Efficiently front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with one optional parameter and no output schema, the description adequately covers purpose and scope. It could mention return format or behavior when no ad accounts exist, but is sufficient given tool complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single optional parameter 'platform' is fully described in the input schema (100% coverage) with examples. The tool description adds no additional meaning beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('all connected advertising accounts across platforms'), specifying multiple platforms (Meta Ads, Google Ads, etc.). It distinguishes from siblings like 'list_connected_accounts' by focusing solely on ad accounts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use versus alternatives (e.g., 'list_connected_accounts' or 'list_ad_campaigns'). The description implies use for ad accounts but does not provide exclusions or context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_ad_campaigns · List Ad Campaigns · Grade A · Read-only
List campaigns for an ad account with status and performance summary.
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | Filter by status | |
| adAccountId | Yes | Ad account ID |
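A sketch of list_ad_campaigns arguments; the account ID and status literal are placeholders, since accepted status values are not documented.

```typescript
const listAdCampaignsArgs = {
  adAccountId: "act_98765", // required; hypothetical ad account ID
  status: "active",         // optional filter; accepted values undocumented
};
```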
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false, so safety is clear. The description adds that the tool returns a 'status and performance summary,' but lacks details on pagination, sorting, or limits. Since annotations cover the key behavioral traits, the description adds moderate value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that conveys the core purpose without unnecessary words. It is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with two parameters, no output schema, and good annotations, the description covers the resource, scope, and return summary. Missing details like pagination or default ordering are minor, but overall it is adequately complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for both parameters (adAccountId and status). The description does not add additional meaning beyond what the schema already provides, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List campaigns'), the resource ('for an ad account'), and includes detail about what is returned ('status and performance summary'). It effectively distinguishes from sibling tools like list_ad_accounts and create_ad_campaign.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context ('List campaigns for an ad account') but does not explicitly state when to use this tool over alternatives or exclude use cases. However, the context is sufficient for a straightforward list operation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_agent_runs · List Agent Runs · Grade B · Read-only
List recent agent runs with status and summary.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum results to return (default 20) | |
| status | No | Filter by status (optional) | |
| agentId | No | Filter by agent (optional) |
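A sketch of a filtered list_agent_runs call; all three parameters are optional and every value below is a placeholder.

```typescript
const listAgentRunsArgs = {
  agentId: "inbox-reply", // hypothetical agent identifier
  status: "failed",       // hypothetical status value
  limit: 10,              // overrides the documented default of 20
};
```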
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true and destructiveHint=false. Description adds no further behavioral context (e.g., pagination, ordering, or implications). It is consistent with annotations and adequate for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that conveys the core purpose efficiently. Front-loaded with verb and resource. No extraneous information, but could be more structured (e.g., bullet points for clarity).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 3 parameters, no output schema, and annotations present, the description covers basic purpose but omits ordering, default limit (20), and pagination details. Adequate but not fully self-contained.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers all 3 parameters with clear descriptions (100% coverage). The tool description does not add extra meaning beyond 'recent' and 'status/summary', which are already implied. The baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states action (list), resource (agent runs), and included data (status, summary). However, it does not explicitly differentiate from sibling tools like 'get_agent_run' or 'list_agents', which weakens clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives (e.g., list_agents or get_agent_run). Does not mention context, prerequisites, or when-not-to-use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_agents · List AI Agents · Grade A · Read-only
List all available AI agents and their capabilities.
SendIt includes 12 specialized agents:
- Strategy Planner: Content strategy from audience/trend analysis
- Content Ideation: Topic ideas from trends and calendar gaps
- Multi-Format Composer: Platform-optimized content from a brief
- Creative Asset: AI image/video generation orchestration
- Variant Repurposer: Repurpose content for different platforms
- Calendar Optimizer: Optimal posting time suggestions
- Listening Analyst: Social mention and sentiment analysis
- Inbox Reply: Contextual reply drafts with brand voice
- Campaign Builder: Ad campaign structure recommendations
- Budget Optimizer: Spend pacing and budget reallocation
- Experimentation: A/B test design and analysis
- Executive Insights: Executive summary reports
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false; the description adds value by listing the agents but does not disclose additional behavioral traits beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One clear opening sentence followed by a bullet list of 12 agents; every sentence earns its place with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless list tool with no output schema, the description fully covers what the tool does and what it returns, making it complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is 100%; baseline for 0 parameters is 4, and the description does not need to add meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'List all available AI agents and their capabilities' with a specific verb and resource, and the bullet list of 12 agents clearly distinguishes from sibling tools like invoke_agent or get_agent_policies.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is obvious for listing agents; no explicit when-not-to-use or alternatives, but the context is clear and no confusion arises among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_analytics_reports · List Analytics Reports · Grade A · Read-only
List saved analytics report definitions for the current personal or team scope.
| Name | Required | Description | Default |
|---|---|---|---|
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false. The description adds the scope distinction but no further behavioral traits. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded, no wasted words. Efficiently conveys purpose and scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple schema (1 optional param), no output schema, and adequate annotations, the description fully covers the tool's behavior. No missing details for an agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a well-described team_id parameter. The description adds no additional parameter meaning beyond the schema, meeting the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists saved analytics report definitions, specifying personal or team scope via the team_id parameter. This distinguishes it from sibling tools like run_analytics_report or get_analytics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the scope context (personal vs team) and mentions the optional team_id parameter to switch scope. It does not explicitly list alternatives or when not to use, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_brand_voices · List Brand Voices · Grade A · Read-only · Idempotent
List all brand voice profiles. Brand voices guide AI content generation with tone, personality, writing rules, and example posts.
| Name | Required | Description | Default |
|---|---|---|---|
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, destructiveHint, and idempotentHint, so the safety profile is clear. The description adds context about brand voices but no additional behavioral traits like pagination or limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with the key action first and supporting context second. No redundant words; structure is efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with one optional parameter and no output schema, the description adequately conveys purpose and context. The explanation of brand voices adds value. Minor omission: no mention of pagination or result format, but not critical for this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (team_id documented). The description does not add parameter-level details beyond the schema, but the schema itself is sufficient. Baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists all brand voice profiles and explains what brand voices are (guide AI content generation). It uses specific verb 'List' and resource 'brand voice profiles', and is distinct from siblings like create_brand_voice and set_default_brand_voice.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs alternatives like create_brand_voice or set_default_brand_voice. The context of 'list all' is implied but not formally differentiated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_calendar_events · List Calendar Events · Grade A · Read-only · Idempotent
List scheduled and published calendar events with optional date-range, status, platform, and pagination filters.
| Name | Required | Description | Default |
|---|---|---|---|
| end | No | Optional range end. ISO 8601 date or date-time. | |
| limit | No | Maximum number of results to return. | |
| start | No | Optional range start. ISO 8601 date or date-time. | |
| cursor | No | Opaque pagination cursor from a previous response. | |
| status | No | Optional event status filter. | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| upcoming | No | If true, return only upcoming events. | |
| platforms | No | Optional platform filter. |
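A sketch of a date-bounded query. The platform identifiers are assumptions, and whether platforms takes an array is itself a guess, since the schema shown here does not state its type.

```typescript
const listCalendarEventsArgs = {
  start: "2024-06-01",          // ISO 8601 date, per the schema
  end: "2024-06-30T23:59:59Z",  // ISO 8601 date-time is also accepted
  platforms: ["linkedin", "x"], // assumed to be an array of platform IDs
  upcoming: false,
  limit: 50,
  // cursor: pass the opaque cursor from a previous response for later pages
};
```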
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds minimal behavioral context, noting only that both scheduled and published events are returned. It does not clarify pagination behavior or potential limits beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence, front-loaded with the verb and resource, and contains no extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a list tool with 8 optional parameters and full schema coverage, the description is adequate. However, absence of output schema means the agent has no explicit info on return format, though this is not required.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are well-documented. The description only summarizes filter types (date-range, status, etc.) without adding new semantic meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (List) and resource (calendar events), and specifies filter types. However, it does not differentiate from sibling tools like get_scheduled_posts or list_recurring_series, which could cause ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for filtered list operations but provides no explicit guidance on when to use this tool versus alternatives. No when-not-to-use or alternative tools are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_connected_accounts · List Connected Accounts · Grade A · Read-only · Idempotent
List all connected social media accounts. Pass team_id to see a team's accounts instead of your personal ones.
| Name | Required | Description | Default |
|---|---|---|---|
| team_id | No | Team ID to list team accounts. Get available teams with list_teams. If omitted, lists personal accounts. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the description does not need to reiterate safety. The description adds the behavior of filtering by team_id, but no further behavioral details are disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two clear, front-loaded sentences with no extraneous information. Every word serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool, the description covers the core functionality and the optional filter. It does not specify the return format (e.g., a list of account objects), but given the lack of an output schema and the tool's simplicity, this is a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already fully describes the single optional parameter team_id (100% coverage). The description reiterates its purpose, which adds minimal value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('list') and the resource ('connected social media accounts'), and distinguishes between personal and team accounts via the team_id parameter. This differentiates it from sibling tools like list_teams or list_connectors.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates when to use the tool (to list connected accounts) and how to scope it to a team. It does not explicitly state when not to use it or provide alternatives, but the context from siblings makes it clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_connected_connectors · List Connected Connectors · Grade A · Read-only
List all currently connected connectors with their status and health. Includes both legacy platform connections and new connector connections.
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | Filter by category (optional) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false, so the tool is safe. The description adds value by noting it includes 'both legacy platform connections and new connector connections', providing context beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no wasted words. The purpose is front-loaded, and every part adds meaning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema is provided, but the description mentions 'status and health' as output content. While a bit more detail on the exact fields would improve completeness, the tool is straightforward and the mention of legacy vs new connections adds sufficient context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The one parameter 'category' is fully described in the input schema with an enum and description. The description does not add any extra meaning or usage context for the parameter, so baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'List all currently connected connectors with their status and health' uses a specific verb ('List') and resource ('connected connectors'), and differentiates from sibling tools like 'list_connectors' (which likely lists all connectors) and 'get_connector_health' (single connector).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance. It only states what it does. The existence of 'list_connectors' as a sibling suggests a need for differentiation, but no exclusion or alternative is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_connectors · List Available Connectors · Grade A · Read-only
List all available connectors in the SendIt platform.
Returns connectors organized by category: organic, paid_media, automation, workspace. Each connector includes its ID, name, status, auth strategy, and capabilities.
Use this to discover what integrations are available before connecting.
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | Filter by availability status (optional) | |
| category | No | Filter by connector category (optional) |
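A sketch using the categories the description names; the status literal is an assumption, since the schema's enum values are not shown here.

```typescript
const listConnectorsArgs = {
  category: "paid_media", // one of: organic, paid_media, automation, workspace
  status: "available",    // hypothetical availability status
};
```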
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false, so the description adds value by detailing the return structure (grouped by category, includes ID, name, status, etc.). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with four sentences each serving a distinct role: purpose, organization, returned fields, and usage guidance. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 optional parameters, no nested objects, no output schema), the description fully covers what the tool does, what it returns, and how to use it. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers both parameters with enums and descriptions. The description does not add additional meaning beyond what the schema provides, meeting the baseline for 100% schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'List all available connectors in the SendIt platform' with a specific verb and resource. It also explains the organization by category, which distinguishes it from sibling tools like specific connect_{platform} tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context with 'Use this to discover what integrations are available before connecting,' indicating when to use the tool. However, it does not explicitly mention when not to use it or provide alternatives, which would be a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_conversations · List Conversations · Grade A · Read-only
List unified inbox conversations across all connected channels.
Aggregates comments, mentions, DMs, and messages from all organic and workspace connectors into a single inbox view.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default 20) | |
| status | No | Filter by conversation status | |
| priority | No | Filter by priority | |
| sentiment | No | Filter by sentiment | |
| assignedTo | No | Filter by assignee | |
| connectorId | No | Filter by connector/platform |
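A sketch of a filtered inbox query; every filter is optional and all values below are placeholders.

```typescript
const listConversationsArgs = {
  status: "open",        // hypothetical conversation status
  sentiment: "negative", // hypothetical sentiment value
  priority: "high",      // hypothetical priority value
  limit: 25,             // overrides the documented default of 20
};
```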
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and non-destructive behavior. The description adds context about aggregation but does not disclose pagination, ordering, or potential performance implications. It provides sufficient context beyond annotations without being overly detailed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences efficiently convey the purpose and scope, with no extraneous information. Front-loaded with the main action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description explains what is returned (conversations from various sources) but could mention pagination or default ordering. Without an output schema, a bit more detail on return structure would be beneficial, but it is mostly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter descriptions. The tool description does not add new semantic meaning beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states it lists unified inbox conversations across all connected channels, aggregating various message types. This clearly distinguishes it from sibling tools like get_conversation (single conversation) and reply_to_conversation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions it aggregates from all organic and workspace connectors, implying a broad scope. It does not explicitly contrast with list_inbox or specify when not to use it, but the context is clear enough given the tool's explicit scope.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_dead_letter · List Dead Letter Posts · Grade A · Read-only · Idempotent
List posts that failed all retry attempts and were moved to the dead letter queue. Review and decide to requeue or discard them.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum items to return (default 50) | |
| status | No | Filter by status (default: dead) | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. |
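A minimal sketch of list_dead_letter arguments, using only values the schema itself documents.

```typescript
const listDeadLetterArgs = {
  limit: 10,      // overrides the documented default of 50
  status: "dead", // explicit here, though "dead" is already the default
  // team_id omitted: personal scope applies, per the schema
};
```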
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false, so the description does not repeat safety. It adds behavioral context by explaining that these posts have 'failed all retry attempts,' which helps the agent understand the state of the data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences only: the first defines the action, the second provides guidance on next steps. No redundant words; highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Description is adequate for a listing tool. It explains the context (posts that failed retries) and the decision process. No output schema exists, so the description could optionally mention return fields, but it is not critical for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with clear descriptions for all 3 parameters. The description does not add additional meaning beyond the schema. Baseline score of 3 is appropriate as the schema carries the full definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'List posts that failed all retry attempts and were moved to the dead letter queue', which is a specific verb (list) and resource (dead letter posts). It distinguishes from sibling tools like requeue_dead_letter, which performs an action on those posts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context: 'Review and decide to requeue or discard them', indicating a two-step process after listing. It implicitly advises using this tool before deciding on requeue/discard actions. However, it does not explicitly mention when not to use it or alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_inbox · List Social Inbox · Grade A · Read-only · Idempotent
List comment threads on your published posts.
Returns threads from your social inbox, showing comments and replies from followers on your posts across all platforms.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max threads to return (default 20) | |
| status | No | Filter by thread status (optional) | |
| platform | No | Filter by platform (optional) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the description's minimal behavioral context is acceptable. It does not add details like pagination, ordering, or rate limits. A 3 is appropriate for adequate but not enhanced transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences front-load the main purpose and add a clarifying detail. No wasted words; every sentence serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description explains what is returned (comment threads with replies), but without an output schema, it lacks details on sorting, nesting, or pagination. Adequate for a simple list tool, but could be improved.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds no extra parameter meaning beyond what the schema already provides. The baseline of 3 applies given that the schema is self-sufficient; the description contributes no further value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists comment threads on published posts, specifying the resource (comment threads) and scope (social inbox across all platforms). This distinguishes it from sibling tools like list_conversations (likely direct messages) and get_inbox_summary (summary).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like list_conversations or get_inbox_summary. While the purpose is clear, situational cues are absent, leaving the agent to infer.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_library · List Content Library · Grade A · Read-only · Idempotent
List saved content from your library. Returns drafts, templates, and evergreen content.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Filter by content type | |
| limit | No | Maximum items to return (default 20, max 100) | |
| search | No | Search in title and text | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| category | No | Filter by category |
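A sketch of a filtered library query. The type and category values are assumptions, since the enums behind them are not shown in this listing.

```typescript
const listLibraryArgs = {
  type: "template",   // hypothetical content type
  category: "launch", // hypothetical category
  search: "pricing",  // matched against title and text, per the schema
  limit: 100,         // the documented maximum
};
```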
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false, so the description's addition of 'saved content' and 'drafts, templates, evergreen' adds some context beyond annotations but does not cover pagination, rate limits, or response structure. With annotations present, the description is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences, front-loading the key action 'List saved content'. Every word is essential, and there is no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
In the absence of an output schema, the description only mentions return types but does not explain pagination, filtering behavior, or the effect of optional parameters. While the schema covers details, the description could better integrate these aspects for a complete picture.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description does not add any parameter-specific details beyond what the schema already provides (e.g., type enum, limit range). No extra value added.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists saved content from the library and specifies it returns drafts, templates, and evergreen content. This distinguishes it from sibling tools like get_library_item (single item) or create_library_item (add), making the purpose specific and actionable.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives (e.g., get_library_item for a single item, list_media_assets for media). It lacks explicit context for selection decisions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_media_assets · List Media Assets · Grade A · Read-only · Idempotent
List media library assets with filtering by collection, type, tags, search, and pagination.
| Name | Required | Description | Default |
|---|---|---|---|
| kind | No | Optional broad media kind filter. | |
| tags | No | Optional tag filter. Assets matching any supplied tag may be returned. | |
| limit | No | Maximum number of results to return. | |
| cursor | No | Opaque pagination cursor from a previous response. | |
| search | No | Optional filename search query. | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| mimeType | No | Optional exact MIME type filter such as image/png or video/mp4. | |
| collection | No | Optional collection name filter. |
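A sketch of cursor-based pagination. The cursor string is a placeholder; that the response exposes a next-page cursor at all is inferred from the schema's cursor field, since no output schema is published.

```typescript
// First page: filter by exact MIME type and a hypothetical collection name.
const firstPageArgs = {
  mimeType: "image/png",   // exact MIME type filter, per the schema
  collection: "brand-kit", // hypothetical collection name
  limit: 50,
};

// Subsequent page: reuse the filters and add the opaque cursor returned
// by the previous response. The string below is a placeholder.
const nextPageArgs = {
  ...firstPageArgs,
  cursor: "opaque-cursor-from-previous-response",
};
```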
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds no behavioral context beyond that, but is consistent. No extra details on authentication, rate limits, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is front-loaded with verb and resource, efficiently summarizing capabilities. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, and the description does not explain return values or pagination structure. For a list tool, the description is minimal but adequate given the dense parameter descriptions and annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so all parameters have descriptions. The description lists filter types but does not add meaning beyond the schema. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action (list) and resource (media library assets), and lists key filtering capabilities (collection, type, tags, search, pagination). This distinguishes it from siblings like list_library or upload_media, though no explicit differentiation is needed.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies when to use (when listing assets with optional filters) but does not provide explicit guidance on alternatives or when not to use. No exclusions or comparisons to sibling tools are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_pending_approvals · List Pending Approvals · Grade A · Read-only · Idempotent
List all scheduled posts that are pending approval. These posts won't be published until approved.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, idempotentHint, destructiveHint. Description adds that posts are not published until approved, but lacks details on rate limits or authentication beyond what annotations imply.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the action, no unnecessary words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter, read-only list tool, the description fully covers inputs, outputs, and behavior. No output schema needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is 100%. Description adds meaning about the purpose but not parameter details, which is acceptable given zero parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'List all scheduled posts that are pending approval' with a specific verb and resource, distinguishing it from sibling tools like approve_post or get_scheduled_posts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies use for viewing pending approvals but does not explicitly state when to use it over alternatives like get_scheduled_posts or before approve_post.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_projects: List Projects (Grade C, Read-only)
List imported product/codebase projects for social strategy and content generation.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| search | No | | |
| status | No | | |
| team_id | No | Optional team ID or slug. Ignored when using a team-scoped API key. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so safety is clear. The description adds minimal behavioral context beyond that, omitting details such as filtering or pagination behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence front-loads the purpose. Efficient use of words with no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema and the description is brief. Missing details on return format, pagination, or filtering behavior, making it incomplete for a list tool with four parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With only 25% schema description coverage (only team_id described), the description does not add meaning to parameters like limit, search, or status. No additional explanation provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists 'imported product/codebase projects for social strategy and content generation,' specifying the resource and context. It distinguishes from sibling list tools by focusing on projects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'list_media_assets' or 'list_calendar_events'. The description lacks explicit usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_recurring_series: List Recurring Series (Grade A, Read-only, Idempotent)
List recurring scheduled-post series and their recurrence metadata.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return. | |
| cursor | No | Opaque pagination cursor from a previous response. | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
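
The cursor/limit pairing the schema documents implies the usual pagination loop. A minimal sketch, assuming a hypothetical `callTool` helper and a response carrying `items` and `nextCursor`; those response field names are not documented here and are assumptions:

```typescript
// Hypothetical stand-in for an MCP client's tool-call method; real transport
// wiring (Streamable HTTP, auth) is assumed and stubbed out here.
const callTool = async (name: string, args: object): Promise<any> =>
  ({ items: [], nextCursor: undefined, name, args });

async function listAllRecurringSeries(teamId?: string): Promise<unknown[]> {
  const all: unknown[] = [];
  let cursor: string | undefined;
  do {
    // limit, cursor, and team_id come from the documented parameter table;
    // the response fields (items, nextCursor) are assumed, not documented.
    const page = await callTool("list_recurring_series", {
      limit: 50,
      ...(cursor ? { cursor } : {}),
      ...(teamId ? { team_id: teamId } : {}),
    });
    all.push(...page.items);
    cursor = page.nextCursor; // opaque cursor from the previous response
  } while (cursor);
  return all;
}

listAllRecurringSeries().then((s) => console.log(`${s.length} series`));
```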
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint, so safety profile is clear. The description adds no further behavioral context (e.g., pagination behavior or effect on data).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single clear sentence with no wasted words. It is appropriately brief for a simple list operation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only list tool with full schema coverage and no output schema, the description adequately conveys the tool's purpose. However, it could mention that results are paginated via cursor/limit, which is already in schema but not in the description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% parameter description coverage. The description does not add meaning beyond what the schema already provides for limit, cursor, and team_id.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists recurring scheduled-post series with recurrence metadata, specifying both the resource and its attributes. It distinguishes well from siblings like cancel_recurring_series or update_recurring_series.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by naming the resource, but does not explicitly state when to use this tool over alternatives (e.g., list_calendar_events). No when-not or prerequisite guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_teams: List Teams (Grade A, Read-only, Idempotent)
List all teams you belong to. Returns team names, IDs, and your role. Use the returned team ID with the team_id parameter in other tools to operate in team context.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
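
The description's own guidance (call this first, then pass the returned ID as team_id elsewhere) translates into a two-step call. A minimal sketch; `callTool` and the response shape (a `teams` array with `id` fields) are assumptions, while the tool names and the team_id parameter come from this page:

```typescript
// Hypothetical stand-in for an MCP client's tool-call method; real wiring is assumed.
const callTool = async (name: string, args: object = {}): Promise<any> =>
  ({ teams: [{ id: "team_123", name: "Growth", role: "admin" }], name, args });

async function main() {
  // Step 1: discover teams; the tool takes no parameters.
  const { teams } = await callTool("list_teams");

  // Step 2: pass the returned ID as team_id to operate in team scope.
  // (list_recurring_series documents a team_id parameter; the ID is illustrative.)
  await callTool("list_recurring_series", { team_id: teams[0].id, limit: 20 });
}
main();
```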
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true, fully covering safety and idempotency. The description adds that it returns team names, IDs, and role, which is useful but not essential beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences, no wasted words. The first sentence states purpose and returns, the second provides usage guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema, clear purpose), the description is complete. It covers what the tool does, what it returns, and how to use the result, which is all necessary context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, and schema description coverage is 100%. The description does not need to explain parameters, and with zero parameters, the baseline score is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List', the resource 'teams', and the scope 'all teams you belong to'. It also specifies the return fields (names, IDs, role), distinguishing it from other list tools that list different entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains that the returned team ID should be used in other tools for team context, providing clear guidance on when to use this tool. It does not explicitly mention when not to use or alternatives, but the context is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_workflow_runs: List Workflow Runs (Grade A, Read-only)
List recent runs for a workflow.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default 20) | |
| status | No | | |
| workflowId | Yes | Workflow ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, so the read-only nature is known. Description adds 'recent runs' implying a time filter but doesn't specify what defines 'recent' or mention pagination/ordering.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, six words, no redundancy. Perfectly front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a low-complexity list tool with strong annotations, the description is mostly sufficient. Could be improved by noting that results are recent or sorted, but overall adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 67% (workflowId and limit described). Description adds no additional parameter meaning beyond what's in schema, e.g., no explanation of status filtering or limit default.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'List recent runs for a workflow.' uses a specific verb ('List') and resource ('recent runs'), clearly distinct from siblings like 'get_workflow_run' (single run) and 'list_workflows' (list all workflows).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs alternatives (e.g., get_workflow_run) or when not to use it. The requirement for a workflowId is implied by schema but not stated in description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_workflows: List Workflows (Grade A, Read-only)
List all automation workflows with their status and trigger info.
| Name | Required | Description | Default |
|---|---|---|---|
| active | No | Filter by active/inactive (optional) | |
| triggerType | No | Filter by trigger type (optional) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false. Description adds minimal extra information (status and trigger info). No mention of pagination, ordering, or other behaviors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One sentence of ten words, front-loaded with verb and resource, no unnecessary words. Highly concise and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema is provided, so the description could detail return fields beyond status and trigger info. It omits common fields like workflow ID, name, or timestamps, leaving gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both parameters. The description adds no additional meaning beyond what the schema provides, achieving the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists all automation workflows with status and trigger info, distinguishing it from sibling tools like create_workflow, update_workflow, or list_workflow_runs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs. alternatives like list_workflow_runs. The description does not mention that this tool retrieves the workflow definitions themselves, not their execution history.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
preview_content: Preview Content (Grade A, Read-only, Idempotent)
Generate a visual preview of how content will appear on each platform.
USE THIS WHEN:
- Before publishing to see how posts will look
- To validate content against platform requirements
- To check character counts, hashtag limits, and media requirements

Returns an HTML preview mockup for each platform with validation results:
- Character count vs limit
- Hashtag count (Instagram has 30 max)
- Media requirement check
- Platform-specific warnings and errors
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | | |
| platforms | Yes | Platforms to generate previews for | |
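
A validate-before-publish loop is the usage the description itself recommends. A minimal sketch, assuming a hypothetical `callTool` helper; the doc promises HTML previews plus validation results but no exact field names, so `previews` and `errors` are assumptions, as is the shape of `content` (its schema entry has no description):

```typescript
// Hypothetical stand-in for an MCP client's tool-call method; real wiring is assumed.
const callTool = async (name: string, args: object): Promise<any> =>
  ({ previews: [], errors: [], name, args }); // response field names are assumed

async function main() {
  // The content parameter has no schema description, so its shape is a guess.
  const content = { text: "Launch day! #buildinpublic" };
  const platforms = ["instagram", "x"];

  // Validate first: character counts, hashtag limits (Instagram max 30), media rules.
  const preview = await callTool("preview_content", { content, platforms });
  if (preview.errors.length === 0) {
    await callTool("publish_content", { content, platforms });
  }
}
main();
```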
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and idempotent behavior. The description adds value by detailing the output (HTML preview with validation results) and platform-specific checks, which annotations don't cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, front-loaded with purpose, and uses bullet points for usage and output. Every sentence earns its place without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no output schema, the description adequately explains return values (HTML preview with validation). It covers purpose, usage, and output but lacks error handling or edge cases. Still sufficient for a preview tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50%, but the description does not add parameter-level details beyond what the schema provides. It mentions output structure but not input nuances, so it fails to compensate for the coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it generates a visual preview of content appearance per platform, with specific verb+resource. It distinguishes from sibling tools like publish_content or edit_post by focusing on preview and validation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'USE THIS WHEN' section explicitly lists scenarios like before publishing, validation, checking limits. It lacks explicit 'when not to use' or alternatives, but the context is clear and helpful.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
publish_ai: AI Generate & Publish (Grade A, Destructive)
Generate AI content and publish it in a single step.
Combines generate_content + publish_content into one call. First generates platform-optimized content using AI, then publishes to the specified platforms.
Same platform requirements as publish_content apply.
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | No | Creative prompt for AI content generation | |
| hashtags | No | | |
| mediaUrl | No | Media URL for the post | |
| strictAi | No | | |
| mediaType | No | Media type hint | |
| mediaUrls | No | Multiple media URLs for carousel | |
| platforms | Yes | Target platforms to generate content for and publish to | |
| generation | No | | |
| contentOverrides | No | | |
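
Given the 56% schema coverage the review flags below, a cautious call sticks to the documented parameters. A minimal sketch of the one-shot generate-and-publish call; `callTool` is a hypothetical helper and the prompt and platform slugs are illustrative:

```typescript
// Hypothetical stand-in for an MCP client's tool-call method; real wiring is assumed.
const callTool = async (name: string, args: object): Promise<any> => ({ name, args });

// prompt, platforms, mediaUrl, and mediaType are described in the table above;
// the undescribed parameters (strictAi, generation, contentOverrides) are omitted.
callTool("publish_ai", {
  prompt: "Announce our v2.0 release in a confident, friendly tone",
  platforms: ["linkedin", "x"],
}).then(console.log);
```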
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructive and non-read-only behavior. The description adds that it first generates then publishes, and applies platform requirements. No contradictions, but no major behavioral context beyond what annotations imply.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences long, front-loaded with the primary purpose, and each sentence adds value without redundancy. No filler or unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the tool's complexity (9 parameters, nested objects, destructive side effects, no output schema), the description is minimal. It lacks details on return values, error handling, prerequisites, and what happens during the generation-publish process, leaving gaps for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is only 56%, and the description does not elaborate on any parameters beyond referencing platforms. It fails to compensate for the gap left by the schema, leaving many parameters (e.g., prompt, hashtags, mediaUrl) without additional explanation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates and publishes content in one step, using specific verbs and identifying the resource (AI content). It distinguishes itself from siblings like generate_content and publish_content by framing it as a combined operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes that it combines generate_content and publish_content, implying use when both actions are desired, and mentions same platform requirements as publish_content. However, it does not explicitly state when to avoid it or compare with alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
publish_content: Publish Content (Grade A)
Publish content to social media platforms.
MEDIA RULES:
- mediaUrl must be a public HTTPS URL — NOT a local file path.
- If the user shares an image/video in chat, call create_upload_session FIRST to get a browser upload link, then use the returned URL here.
- Text-only works on: LinkedIn, Threads, X, Facebook.
- Image required: Instagram, Pinterest.
- TikTok supports one video or 1-35 Photo Mode images.
- Video required: YouTube.
Call validate_content to check before publishing.
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | | |
| team_id | No | Team ID to publish as team. Get available teams with list_teams. If omitted, publishes from personal accounts. | |
| platforms | Yes | Target platforms. Choose based on content type: text-only→LinkedIn/Threads/X, image→Instagram/Threads/Facebook/Pinterest/TikTok Photo Mode, video→Instagram/TikTok/Threads/YouTube | |
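
The MEDIA RULES imply a concrete ordering when the user hands you a local file: upload first, then publish with the returned URL. A minimal sketch; `callTool`, the upload-session response field (`url`), the validate_content arguments, and the placement of mediaUrl are all assumptions, while the tool names and their ordering come straight from the description:

```typescript
// Hypothetical stand-in for an MCP client's tool-call method; real wiring is assumed.
const callTool = async (name: string, args: object): Promise<any> =>
  ({ url: "https://cdn.example.com/u/abc.jpg", name, args }); // response shape assumed

async function main() {
  // 1. A chat-shared file cannot be published directly; get a public HTTPS URL first.
  const session = await callTool("create_upload_session", {}); // arguments undocumented here
  const mediaUrl = session.url; // assumed response field

  // 2. Validate, then publish. Instagram requires an image. Where mediaUrl
  //    belongs (top-level vs. inside content) is not shown in this table,
  //    so treat this placement as a guess.
  const content = { text: "New feature is live!", mediaUrl };
  await callTool("validate_content", { content, platforms: ["instagram"] }); // args assumed
  await callTool("publish_content", { content, platforms: ["instagram", "facebook"] });
}
main();
```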
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate a mutation (readOnlyHint=false) with potential side effects (openWorldHint=true). The description adds behavioral context: mediaUrl must be a public HTTPS URL, not a local file path, and requires prior use of create_upload_session for user-shared media. It also mentions platform-specific content requirements (e.g., TikTok: one video or 1-35 Photo Mode images). No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear MEDIA RULES section and bullet points for platform-specific requirements. It front-loads the main purpose and keeps each sentence informative. While slightly lengthy, no sentences are wasted, and the structure aids readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (many platforms, nested objects, no output schema), the description provides substantial context: media handling, platform prerequisites, and a reference to validate_content. It does not cover error handling or return values, but the extensive schema and sibling tools fill some gaps, making it adequate for most use cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 67% and includes detailed descriptions for many parameters (e.g., mediaUrl, threadTweets). The description adds value beyond the schema by explaining platform-specific usage (e.g., which platforms require image/video) and the relationship between parameters and platforms, aiding the agent in correct parameter selection.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Publish content to social media platforms,' which is a specific verb-resource pair. It distinguishes from siblings like validate_content (validation) and schedule_content (scheduling) by focusing on immediate publishing. The media rules and platform-specific content types further clarify the scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises to 'Call validate_content to check before publishing' and provides platform requirements for different content types (e.g., text-only for LinkedIn, image required for Instagram). While it does not explicitly list when not to use the tool, the context is clear and includes a recommended prerequisite.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
publish_from_library: Publish from Library (Grade B, Destructive)
Publish content directly from your library to one or more platforms.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The content library item ID to publish | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| platforms | Yes | Platforms to publish to | |
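
All three parameters here carry schema descriptions, so a sketch is mostly grounded; only the `callTool` helper and the example IDs are made up:

```typescript
// Hypothetical stand-in for an MCP client's tool-call method; real wiring is assumed.
const callTool = async (name: string, args: object): Promise<any> => ({ name, args });

callTool("publish_from_library", {
  id: "lib_item_42",           // content library item ID (illustrative)
  platforms: ["threads", "x"], // destructive: publishes immediately
}).then(console.log);
```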
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructive and open-world behavior. The description confirms publishing to platforms but adds no additional details beyond what annotations provide, such as side effects (e.g., using connected accounts) or irreversibility.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, front-loaded sentence of 9 words. Immediately conveys action and scope with no filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No explanation of return values, success/failure behavior, prerequisites (e.g., connected accounts), or error handling. For a publishing tool with no output schema, more completeness is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, so baseline is 3. The description offers no additional meaning beyond the schema descriptions for id, team_id, and platforms.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it publishes content from the library to platforms. It specifies verb 'publish', resource 'library', and destination. However, it does not explicitly distinguish from sibling 'publish_content' which might serve a similar purpose but without the 'from library' qualifier.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'publish_content', 'schedule_content', or 'publish_ai'. No prerequisites or when-not-to-use mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
reject_post: Reject Post (Grade A)
Reject a scheduled post that is pending approval. The post will not be published.
| Name | Required | Description | Default |
|---|---|---|---|
| postId | Yes | The scheduled post ID to reject | |
| reason | Yes | Reason for rejection | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
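
A typical approval-queue flow pairs this with list_pending_approvals. A minimal sketch; `callTool` and the pending-list response shape are assumptions, while postId and reason being required comes from the table above:

```typescript
// Hypothetical stand-in for an MCP client's tool-call method; real wiring is assumed.
const callTool = async (name: string, args: object = {}): Promise<any> =>
  ({ posts: [{ id: "post_9" }], name, args }); // response shape assumed

async function main() {
  const { posts } = await callTool("list_pending_approvals");
  // postId and reason are both required by the schema above.
  await callTool("reject_post", {
    postId: posts[0].id,
    reason: "Off-brand tone; please rewrite with the product name spelled out",
  });
}
main();
```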
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a write operation (readOnlyHint=false) and not destructive (destructiveHint=false), but they don't detail side effects. The description adds that the post will not be published, but does not disclose whether the action is reversible, triggers notifications, or changes post status beyond that. For a mutation tool, this is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two short sentences that are front-loaded with the action. No filler words; every sentence adds value. It is efficient and easy to scan.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no output schema, so description could explain what happens post-rejection (e.g., status change, notifications). It only says 'will not be published'. Given the simplicity of the tool, this is minimally complete but lacks detail on side effects or post-action state.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and all three parameters (postId, reason, team_id) are described in the schema. The description does not add any additional meaning or usage context beyond the schema. The baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'reject', the resource 'scheduled post', and the context 'pending approval'. It also tells the outcome: 'The post will not be published'. This distinguishes it from siblings like approve_post or delete_scheduled_post.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a post is pending approval and needs rejection, but it does not provide explicit when-not-to-use guidance or mention alternatives such as approve_post or retry_scheduled_post. The context is clear but lacks exclusionary criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
reply_to_comment: Reply to Comment (Grade A)
Reply to a comment on one of your published posts.
Supported platforms: X (reply tweets), Threads, Instagram, Facebook, YouTube. Not supported: LinkedIn, Pinterest, TikTok.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Your reply text | |
| threadId | Yes | The inbox thread ID to reply to | |
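
Because support is platform-dependent, a guard before calling avoids a guaranteed failure. A sketch with the platform lists copied from the description; the thread object, its slugs, and `callTool` are hypothetical:

```typescript
// Hypothetical stand-in for an MCP client's tool-call method; real wiring is assumed.
const callTool = async (name: string, args: object): Promise<any> => ({ name, args });

// Platform lists copied from the description above; slug casing is assumed.
const SUPPORTED = new Set(["x", "threads", "instagram", "facebook", "youtube"]);

async function replyIfSupported(
  thread: { id: string; platform: string }, // thread shape is an assumption
  text: string,
) {
  if (!SUPPORTED.has(thread.platform)) {
    // LinkedIn, Pinterest, and TikTok are unsupported per the description.
    throw new Error(`reply_to_comment does not support ${thread.platform}`);
  }
  return callTool("reply_to_comment", { threadId: thread.id, text });
}

replyIfSupported({ id: "thr_7", platform: "threads" }, "Thanks for the feedback!");
```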
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description states the action but adds minimal behavioral context beyond what annotations already provide. Annotations indicate non-readOnly, non-destructive, non-idempotent, and open-world. The description does not elaborate on what happens when the reply is sent (e.g., immediate posting, any side effects), nor does it mention authentication requirements or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: three sentences. The first sentence states the core purpose, followed by a line each for supported and unsupported platforms. No unnecessary information, and the key details are front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (a write action with two parameters) and the absence of an output schema, the description is largely complete. It provides platform restrictions but could mention that the reply is published immediately (since idempotentHint=false implies each call creates a new reply). Still, it is sufficient for most agents to understand the tool's behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With schema description coverage at 100%, the input schema already describes both required parameters ('text' and 'threadId') adequately. The description does not add additional meaning beyond the schema, so the default score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Reply to a comment') and the resource ('one of your published posts'). It distinguishes itself from sibling tools like 'reply_to_conversation' and 'draft_reply' by specifying the context is a comment on a post, and lists supported platforms, which differentiates it from platform-specific tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists supported platforms (X, Threads, Instagram, Facebook, YouTube) and unsupported ones (LinkedIn, Pinterest, TikTok), which helps the agent determine when this tool is applicable. However, it lacks explicit guidance on when to use this tool versus alternatives like 'reply_to_conversation' (for direct messages) or 'draft_reply' (for drafting).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
reply_to_conversation: Reply to Conversation (Grade B)
Send a reply in a conversation.
The reply is sent through the original channel (e.g., Instagram comment, X reply, Facebook comment, etc.).
Replies require conversation reply routing metadata (credential + operation). Provide "routing" once and it will be stored on the conversation for future replies.
Use the Inbox Reply agent to get AI-drafted replies with brand voice.
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | Reply text | |
| routing | No | Optional: set conversation reply routing (persisted on conversations.metadata.replyRouting). Use this when the conversation doesn't yet have reply routing configured. | |
| mediaUrls | No | Media to attach (optional) | |
| conversationId | Yes | Conversation ID | |
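
The routing note means the first reply to a conversation may need a `routing` object, after which it is persisted. A minimal sketch; `callTool` is hypothetical, and since the schema names only "credential + operation" without exact keys, the routing key names and values are assumptions:

```typescript
// Hypothetical stand-in for an MCP client's tool-call method; real wiring is assumed.
const callTool = async (name: string, args: object): Promise<any> => ({ name, args });

// First reply: include routing so it is persisted on conversations.metadata.replyRouting.
callTool("reply_to_conversation", {
  conversationId: "conv_314",
  content: "We just shipped a fix; can you update and retry?",
  routing: { credential: "cred_ig_main", operation: "instagram_comment" }, // assumed keys
});

// Later replies on the same conversation can omit routing entirely.
callTool("reply_to_conversation", {
  conversationId: "conv_314",
  content: "Glad it works now!",
});
```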
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=false, destructiveHint=false, and openWorldHint=true. The description adds little beyond the obvious mutability of sending a reply. It does not disclose potential side effects, rate limits, or error states, which burdens the agent to infer from limited info.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (five short sentences) and front-loaded with the purpose. Every sentence provides necessary context without redundancy. It efficiently covers purpose, channel, routing requirement, and a usage tip.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has 4 parameters (2 required) and no output schema. The description fails to mention what the response looks like, error conditions, or how it integrates with sibling tools like 'reply_to_comment' or 'draft_reply'. This leaves significant gaps for an agent to execute correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the parameter descriptions are already adequate. The description adds value for the 'routing' parameter by explaining persistence behavior, but other parameters (content, mediaUrls, conversationId) are not elaborated beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Send a reply in a conversation' and specifies the reply goes through the original channel. It mentions routing metadata and directs to the Inbox Reply agent for drafting. However, it does not explicitly differentiate from sibling tools like 'reply_to_comment' or 'draft_reply', which would strengthen purpose clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains that replies require routing metadata and to provide 'routing' once for persistence. It also suggests using the Inbox Reply agent for AI-drafted replies. However, it lacks explicit guidance on when not to use this tool or alternatives (e.g., when to use reply_to_comment instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
repurpose_url: Repurpose URL Content (Grade A)
Takes a URL (article, YouTube video, or podcast) and repurposes its content into platform-optimized posts.
Supports:
- Web articles — extracts text content
- YouTube videos — extracts transcript/captions
- Podcast RSS feeds — extracts episode descriptions and show notes
Returns platform-specific formatted output (X threads, Instagram carousel slides, Medium articles, etc.) with a quality score (0-100) for each variant.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Source URL to repurpose (article, YouTube video, or podcast RSS feed) | |
| tone | No | Optional tone override (e.g., 'professional', 'casual', 'witty') | |
| brand_voice_id | No | Optional brand voice profile ID to apply | |
| target_platforms | Yes | Target platforms to generate content for (e.g., ['x', 'linkedin', 'instagram']) | |
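
All parameters are documented, including example platform slugs, so a sketch is mostly grounded; only `callTool` and the response handling (a `variants` field) are assumptions:

```typescript
// Hypothetical stand-in for an MCP client's tool-call method; real wiring is assumed.
const callTool = async (name: string, args: object): Promise<any> =>
  ({ variants: [], name, args }); // response field name is assumed

// Turn a YouTube video into platform-optimized drafts.
callTool("repurpose_url", {
  url: "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
  target_platforms: ["x", "linkedin", "instagram"], // slugs mirror the schema's example
  tone: "casual",
}).then((r) => console.log(r.variants)); // each variant reportedly has a 0-100 quality score
```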
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate non-destructive behavior (destructiveHint=false). The description adds behavioral context by detailing supported URL types and the output (platform-specific formatted posts with quality score). It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise and well-structured with bullet points. Every sentence is informative and earns its place. No redundant or verbose language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers supported URL types and return values (platform-specific formatted output, quality score). Given no output schema, it adequately explains what the tool returns. Missing details on error handling or rate limits, but these are not critical for a content generation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptive parameter descriptions. The description adds value by explaining the output format (platform-specific variants and quality score) and clarifying the URL types, which is not in the schema. However, it does not significantly enhance individual parameter meanings beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool takes a URL (article, YouTube video, or podcast) and repurposes its content into platform-optimized posts. It specifies supported content types and output formats like X threads and Instagram carousel, making the purpose distinct from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when you have a source URL to repurpose but does not explicitly state when to use this tool versus alternatives like generate_content or generate_post_bundle. No exclusions or when-not-to-use guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
request_project_content: Request Project Content (Grade C)
Generate UGC, slideshow, remix, caption, or calendar content from project intelligence.
| Name | Required | Description | Default |
|---|---|---|---|
| goal | No | | |
| count | No | | |
| prompt | No | | |
| team_id | No | Optional team ID or slug. Ignored when using a team-scoped API key. | |
| platforms | No | | |
| projectId | Yes | | |
| contentType | No | | |
| saveToLibrary | No | | |
| consentConfirmed | No | Required when using uploaded creator or likeness-like media. | |
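
Only two of nine parameters carry schema descriptions, so most of a call is guesswork and should be flagged as such. A minimal sketch; `callTool` is hypothetical, and only projectId, team_id, and the consentConfirmed semantics are grounded in the table, with the rest inferred from the description's list of content types:

```typescript
// Hypothetical stand-in for an MCP client's tool-call method; real wiring is assumed.
const callTool = async (name: string, args: object): Promise<any> => ({ name, args });

callTool("request_project_content", {
  projectId: "proj_7",    // the only required parameter
  contentType: "ugc",     // assumed value; the schema exposes no enum here
  count: 3,               // assumed semantics: number of variants to generate
  consentConfirmed: true, // documented: required for creator/likeness media
}).then(console.log);
```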
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations (readOnlyHint=false, destructiveHint=false, openWorldHint=true) provide baseline safety info. The description's 'Generate' implies creation, consistent with not read-only. However, it adds no new behavioral traits (e.g., side effects, auth needs, rate limits). Adequate but not enhanced.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One concise sentence listing content types. Front-loaded with key purpose. No fluff. Could be slightly more structured (e.g., bullet points) but efficiently conveys core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description omits crucial context: what 'project intelligence' means, what the output looks like (no output schema), parameter usage, and how this fits among many sibling tools. For a 9-parameter tool with many siblings, this is insufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is low (22%), meaning most parameters lack descriptions in the schema. The tool description does not explain parameters like goal, count, prompt, platforms, saveToLibrary, etc. It only hints at contentType via listed types. This fails to compensate for poor schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Generate' and resource 'UGC, slideshow, remix, caption, or calendar content from project intelligence.' It lists specific content types, making purpose evident. However, it does not explicitly differentiate from sibling tools like generate_content or generate_post_bundle, so it loses the top score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. The description does not mention prerequisites, context, or exclusions. With many sibling content generation tools, the agent lacks direction on selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
requeue_dead_letter: Requeue Dead Letter Post (Grade A)
Requeue a dead letter post for another publishing attempt. Creates a new scheduled post from the dead letter entry.
| Name | Required | Description | Default |
|---|---|---|---|
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| deadLetterId | Yes | The dead letter post ID to requeue | |
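
Both parameters are described, so a call is unambiguous; `callTool` and the ID are illustrative:

```typescript
// Hypothetical stand-in for an MCP client's tool-call method; real wiring is assumed.
const callTool = async (name: string, args: object): Promise<any> => ({ name, args });

// Creates a NEW scheduled post from the dead-letter entry; what happens to the
// original entry is undocumented (a gap the assessment below also flags).
callTool("requeue_dead_letter", { deadLetterId: "dl_88" }).then(console.log);
```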
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate mutation (readOnlyHint=false) and non-destructive (destructiveHint=false). The description adds that it creates a new scheduled post, but doesn't specify what happens to the original dead letter entry (e.g., whether it's deleted or marked). Adequate but lacks full behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no unnecessary words, directly conveys purpose and outcome. Efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple two-parameter tool with no output schema and no nested objects, the description covers the essential action. Could mention side effects on the dead letter entry, but overall complete enough for selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and descriptions for both parameters are provided. The description does not add meaning beyond the schema (e.g., no format or examples for deadLetterId). Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Requeue' and the resource 'dead letter post', and distinguishes from publishing or scheduling tools by specifying it creates a new scheduled post from a dead letter entry.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool over alternatives like retry_scheduled_post or publish_content. No prerequisites or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retry_scheduled_post: Retry Scheduled Post (Grade A)
Retry a failed scheduled post, optionally editing the content, narrowing to failed platforms, rescheduling, or publishing immediately.
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | Compatibility alias for postId. | |
| text | No | Replacement post text for the retry. | |
| postId | No | Scheduled post ID. | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| mediaUrl | No | Replacement public HTTPS media URL for the retry. | |
| mediaUrls | No | Replacement public HTTPS media URLs for the retry. | |
| platforms | No | Optional subset of failed platforms to retry. | |
| publishNow | No | Publish the retry immediately instead of scheduling it. | |
| scheduleId | No | Compatibility alias for postId. | |
| scheduledAt | No | New ISO 8601 retry time. Required unless publishNow is true. | |
| firstComment | No | Optional replacement Instagram first comment. | |
| pinterestBoardId | No | Optional replacement Pinterest board ID. | |
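
The scheduledAt/publishNow interaction is the one real constraint here: per the schema, scheduledAt is required unless publishNow is true. Two sketch calls, one per branch; `callTool` and the IDs are illustrative, while every parameter name comes from the table above:

```typescript
// Hypothetical stand-in for an MCP client's tool-call method; real wiring is assumed.
const callTool = async (name: string, args: object): Promise<any> => ({ name, args });

// Branch 1: retry immediately; scheduledAt may be omitted when publishNow is true.
callTool("retry_scheduled_post", {
  postId: "sched_501",
  publishNow: true,
  platforms: ["instagram"], // optional subset of the platforms that failed
});

// Branch 2: reschedule the retry; an ISO 8601 time is required without publishNow.
callTool("retry_scheduled_post", {
  postId: "sched_501",
  scheduledAt: "2025-07-01T09:00:00Z",
  text: "Corrected caption for the retry", // optional replacement text
});
```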
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description confirms mutation (retry) with optional edits, adding context beyond annotations that mark readOnlyHint=false. No contradiction, and hints at failure state.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence (17 words) efficiently covers main functionality with no fluff. Front-loaded and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description adequately explains tool purpose and key options. Slightly unclear on 'failed' meaning, but sufficient for an agent to select correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with detailed descriptions. The high-level description doesn't add new meaning beyond what schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action: 'Retry a failed scheduled post', with optional modifications. It distinguishes from siblings like 'edit_scheduled_post' by focusing on retrying failed posts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like 'edit_scheduled_post' or 'trigger_scheduled_post'. Only implied usage for failed posts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
run_analytics_report: Run Analytics Report (Grade A)
Execute one saved analytics report immediately and return the generated run plus aggregated analytics data.
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | Compatibility alias for reportId. | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| reportId | No | Saved analytics report ID. | |
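
Both reportId and its compatibility alias id are optional in the schema, though presumably one must be supplied. A minimal sketch using the canonical name; `callTool` and the response shape are assumptions:

```typescript
// Hypothetical stand-in for an MCP client's tool-call method; real wiring is assumed.
const callTool = async (name: string, args: object): Promise<any> =>
  ({ run: {}, analytics: {}, name, args }); // response shape assumed

// Prefer the canonical reportId over the compatibility alias id.
callTool("run_analytics_report", { reportId: "rpt_2024_q4" }).then(console.log);
```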
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate the operation is not read-only (readOnlyHint=false) and not destructive (destructiveHint=false). The description adds that it returns data immediately, but does not mention potential side effects like cost or time. With annotations covering safety, the description provides adequate but minimal extra insight.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that front-loads the verb and outcome, with no redundant information. Every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description succinctly describes what is returned ('generated run plus aggregated analytics data'), which is sufficient for a straightforward execution tool. It is complete enough for the agent to understand the result.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema is fully described (100% coverage), so the description does not need to add parameter details. It mentions 'saved analytics report' which loosely relates to 'reportId', but does not explain the alias 'id' or 'team_id' beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Execute...immediately') and the resource ('saved analytics report'), and specifies the output ('generated run plus aggregated analytics data'). It effectively distinguishes from siblings like 'list_analytics_reports' which list reports rather than execute them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies immediate execution compared to scheduling, but does not explicitly tell the agent when to use this tool over alternatives like 'create_scheduled_report' or 'get_analytics'. No when-not-to-use or exclusion guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
save_project_content_to_library (Save Project Content To Library) · Grade: C
Generate project content and save the outputs into the reusable content library.
| Name | Required | Description | Default |
|---|---|---|---|
| count | No | ||
| prompt | No | ||
| team_id | No | Optional team ID or slug. Ignored when using a team-scoped API key. | |
| platforms | No | ||
| projectId | Yes | ||
| contentType | No |
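Because the schema documents only 17% of these parameters, any example is necessarily speculative. A hedged sketch of plausible arguments, with every value assumed:

```typescript
// Hypothetical save_project_content_to_library arguments; the schema leaves
// most parameters undescribed, so every value here is an assumption.
const saveProjectContentArgs = {
  projectId: "proj_123",                  // required project to generate from
  prompt: "Three posts on our Q3 launch", // assumed free-text generation prompt
  count: 3,                               // assumed number of items to generate
  platforms: ["linkedin", "x"],           // assumed target platform IDs
  contentType: "draft",                   // assumed library content type
};
```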
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate mutation (readOnlyHint=false) and non-destructive (destructiveHint=false). The description adds no further behavioral detail, such as whether existing content is overwritten or how generation and saving are sequenced.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short sentence: not overly verbose, but it sacrifices necessary detail and fails to convey critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given six parameters, no output schema, and numerous sibling tools, the description is incomplete—missing return value, side effects, parameter usage guidance, and differentiation from similar tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With only 17% schema description coverage, the description provides no explanation of parameters like projectId, count, prompt, platforms, or contentType, leaving the agent to rely solely on a schema that itself offers minimal descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (generate and save) and the target (project content to library), but does not distinguish from similar siblings like generate_content or save_to_library, lacking specificity on scope or uniqueness.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives such as generate_content or save_to_library; no prerequisites or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
save_to_library (Save to Library) · Grade: B
Save content to your library as a draft, template, or evergreen content for reuse.
| Name | Required | Description | Default |
|---|---|---|---|
| tags | No | Tags for organization | |
| text | Yes | The content text/caption | |
| type | No | Content type (default: draft) | |
| title | Yes | Title for the saved content | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| category | No | Category for organization | |
| mediaUrl | No | Media URL (image or video) | |
| targetPlatforms | No | Which platforms this content is designed for | |
| evergreenEnabled | No | Enable evergreen auto-republishing | |
| evergreenIntervalDays | No | Days between evergreen republishes |
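A hedged sketch of a call that saves an evergreen item; all values are illustrative:

```typescript
// Hypothetical save_to_library arguments for an evergreen item.
const saveArgs = {
  title: "Evergreen: onboarding tips",
  text: "Five onboarding tips your team can use today...",
  type: "evergreen",             // default is 'draft' per the schema
  tags: ["onboarding", "tips"],
  targetPlatforms: ["linkedin"], // assumed platform identifier
  evergreenEnabled: true,
  evergreenIntervalDays: 30,     // republish roughly monthly
};
```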
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=false and destructiveHint=false, so the agent knows it's a write operation. The description adds no extra behavioral context such as permission requirements or side effects. It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently conveys the core purpose. No unnecessary words or fluff. Every part earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 10 parameters and no output schema, the description is too minimal. It fails to explain the scope (team vs. personal) or parameters such as mediaUrl, tags, targetPlatforms, and the evergreen settings. The tool is complex but the description is generic.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters are documented. The description only adds context about the 'type' parameter (draft, template, evergreen), which is already in the schema. Baseline 3 is appropriate as the description adds marginal value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (save), object (content to your library), and the types (draft, template, evergreen) with a purpose (for reuse). However, it does not differentiate from sibling tools like 'create_library_item' or 'save_project_content_to_library', which have overlapping functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lacks any guidance on when to use this tool versus alternatives. It does not mention exclusions, prerequisites, or context such as 'use for existing content only' or 'prefer create_library_item for new items'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
schedule_content (Schedule Content) · Grade: A
Schedule content for future publishing. Same media rules as publish_content apply.
- mediaUrl must be a public HTTPS URL; call create_upload_session if the user shares a file in chat.
- Content is validated at schedule time, not publish time.
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | ||
| team_id | No | Team ID to schedule as team. Get available teams with list_teams. If omitted, schedules from personal accounts. | |
| timezone | No | Optional IANA timezone for queue scheduling and recurring series metadata. | |
| platforms | Yes | Target platforms - ensure content matches each platform's requirements | |
| scoreGate | No | Optional forward-compatible minimum content score threshold for advanced scheduling flows. | |
| recurrence | No | Optional forward-compatible structured recurrence definition. | |
| bypassToken | No | Optional forward-compatible bypass token returned by advanced scheduling score-gate checks. | |
| scheduleMode | No | Optional forward-compatible scheduling mode. 'queue' asks SendIt to place the post in the next available queue slot. | |
| scheduledTime | Yes | ISO 8601 datetime in UTC (e.g., 2025-01-15T14:30:00Z) | |
| recurrenceRule | No | Optional forward-compatible RRULE string for recurring schedules. | |
| targetAccounts | No | Optional advanced scheduling target account IDs. Legacy handlers may ignore this until advanced scheduling is enabled. | |
| mediaAttachments | No | Optional forward-compatible media asset attachments to associate with the scheduled post. | |
| platformOverrides | No | Optional forward-compatible per-platform overrides keyed by platform ID. | |
| recurrenceEndDate | No | Optional forward-compatible ISO 8601 end date for recurrence. |
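A minimal sketch of a basic scheduling call; the content and platform identifiers are illustrative, and the timestamp follows the ISO 8601 UTC format the schema requires:

```typescript
// Hypothetical schedule_content arguments; platform IDs are assumed.
const scheduleArgs = {
  content: "Launch day! Our new dashboard is live.",
  platforms: ["linkedin", "x"],          // assumed platform identifiers
  scheduledTime: "2025-01-15T14:30:00Z", // ISO 8601 UTC, per the schema
  timezone: "America/New_York",          // optional IANA timezone
};
```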
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses key behavioral traits: same media rules as publish_content, mediaUrl must be public HTTPS, and validation occurs at schedule time. This adds value beyond annotations, which are minimal. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences plus a short bullet list. Every sentence is purposeful, front-loading critical information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (14 parameters, nested objects, no output schema), the description covers the most important behavioral nuances. It could mention expected return value or error cases but is sufficient for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 93%, so the schema already explains most parameters. The description adds significant value by clarifying the mediaUrl requirement and referencing create_upload_session, going beyond the schema's description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool schedules content for future publishing. It distinguishes from sibling tools like publish_content (immediate) and schedule_content_advanced by referencing 'same media rules as publish_content apply' and implying a basic scheduling function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context on when to use this tool, such as referencing create_upload_session for file uploads and noting validation timing. However, it does not explicitly compare with schedule_content_advanced or outline prerequisites like connected accounts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
schedule_content_advanced (Schedule Content Advanced) · Grade: B
Schedule content with explicit target accounts, queue placement, recurrence, platform overrides, media library attachments, and score gating.
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | ||
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| timezone | No | Optional IANA timezone for queue resolution and recurrence. | |
| scoreGate | No | Optional minimum content score threshold. If unmet, the tool can return a bypass token instead of scheduling. | |
| recurrence | No | Optional structured recurrence definition. | |
| bypassToken | No | Optional bypass token returned from a previous score gate warning. | |
| scheduledAt | No | ISO 8601 time for exact scheduling. Required unless scheduleMode is 'queue'. | |
| scheduleMode | No | Scheduling strategy. Use 'queue' to place the post into the next available queue slot. | |
| recurrenceRule | No | Optional RRULE string for advanced recurring schedules. | |
| targetAccounts | Yes | Connected account IDs to schedule to. These determine the target platforms. | |
| mediaAttachments | No | Optional media library assets to link to the scheduled post. | |
| platformOverrides | No | Optional per-platform overrides keyed by platform ID. Use this for platform-specific text, title, description, or publish settings. | |
| recurrenceEndDate | No | Optional ISO 8601 date-time to stop the recurring series. |
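A hedged sketch of a queue-mode call, where scheduledAt may be omitted per the schema; the account IDs and RRULE value are assumptions:

```typescript
// Hypothetical schedule_content_advanced arguments; account IDs illustrative.
const advancedArgs = {
  content: "Weekly roundup: what shipped this week.",
  targetAccounts: ["acct_li_01", "acct_x_02"], // required; determine platforms
  scheduleMode: "queue",                       // queue placement, so no scheduledAt
  timezone: "Europe/Berlin",                   // used for queue resolution
  recurrenceRule: "FREQ=WEEKLY;BYDAY=FR",      // assumed RRULE for weekly posting
};
```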
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate a non-read-only, non-destructive mutation, consistent with scheduling. The description does not detail behavioral aspects like score gate failure or bypass token return, which are only in the schema. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that front-loads the key features. It has no fluff, but could be slightly more structured for readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (13 parameters, nested objects, no output schema), the description could provide more context on return values, error handling, or prerequisites. However, the high schema coverage partially compensates.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is high (92%), so the schema already explains parameters. The description adds minimal semantic value beyond listing feature categories, warranting baseline score 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Schedule content' and lists key advanced features. It distinguishes from simpler siblings like 'schedule_content' by enumerating advanced options, though it never draws the distinction explicitly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for complex scheduling tasks but lacks explicit when-to-use guidance, alternatives, or contraindications. The sibling tool 'schedule_content' provides a contrast, but no direct instruction is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
schedule_multilang_content (Schedule Multilang Content) · Grade: A
Schedule the same post in multiple languages, linking all variants into one translation group.
| Name | Required | Description | Default |
|---|---|---|---|
| title | No | Optional shared title for supported platforms. | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| mediaUrl | No | Optional shared public HTTPS media URL. | |
| timezone | No | Optional IANA timezone stored with the scheduled posts. | |
| versions | Yes | Language-specific post variants. Maximum 11 versions per request. | |
| mediaType | No | Optional shared media type hint. | |
| mediaUrls | No | Optional shared public HTTPS media URLs. | |
| scheduledAt | Yes | Future ISO 8601 publish time for every language version. |
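The shape of the versions entries is not documented here, so the fields below (language, text) are assumptions. A hedged sketch:

```typescript
// Hypothetical schedule_multilang_content arguments; the versions item
// shape is assumed, since the schema shown does not define it.
const multilangArgs = {
  scheduledAt: "2025-02-01T09:00:00Z", // one publish time for every version
  versions: [                          // max 11 versions per request
    { language: "en", text: "We just launched!" },
    { language: "de", text: "Wir sind gestartet!" },
  ],
  mediaUrl: "https://example.com/launch.png", // optional shared HTTPS media
};
```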
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate non-destructive, non-read-only behavior. The description adds that variants are linked into one translation group, providing some behavioral context. However, it doesn't discuss idempotency, side effects on existing groups, or failure modes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that encapsulates the tool's primary function without any wasted words. It's front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (8 parameters, 2 required, no output schema), the description is minimal. It doesn't explain what a translation group is, how to check results, or confirm successful scheduling. The annotations provide no additional context. Adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the description does not need to elaborate on parameters. The description does not add meaning beyond the schema, hence baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Schedule), the resource (the same post in multiple languages), and the distinctive feature (linking variants into one translation group). This differentiates it from siblings like schedule_content which likely schedules a single post.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like schedule_content or schedule_content_advanced. No prerequisites or use-cases are mentioned, leaving the agent to infer appropriateness from context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
score_content (Score Content Quality) · Grade: A · Read-only · Idempotent
Score content quality on a 0-100 scale before publishing.
Evaluates 5 factors (20 points each):
- Text length optimization for target platforms
- Hashtag count optimization
- Posting time alignment with best engagement windows
- Media presence (images/videos)
- Content patterns (CTA, hooks, formatting, emoji)
Returns overall score, per-factor breakdown, and improvement suggestions.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | The post text content | |
| mediaUrl | No | Primary media URL (optional) | |
| mediaUrls | No | Multiple media URLs for carousel (optional) | |
| platforms | Yes | Target platforms to score against | |
| scheduledTime | No | Scheduled publish time in ISO 8601 (optional, improves time score) |
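A minimal sketch of a scoring call; the text, URL, and platform values are illustrative:

```typescript
// Hypothetical score_content arguments; the tool is read-only and idempotent.
const scoreArgs = {
  text: "New feature alert. Try it free today! #SaaS #ProductLaunch",
  platforms: ["linkedin", "instagram"],     // scored against each platform
  scheduledTime: "2025-01-15T14:30:00Z",    // optional; improves the time factor
  mediaUrl: "https://example.com/demo.mp4", // optional; counts toward media presence
};
```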
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, confirming safe, non-destructive operation. The description adds value by explaining the return structure (overall score, per-factor breakdown, improvement suggestions) and detailing the five evaluation factors, which is rich behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, and the list of factors is concise and scannable. While efficient, it could be slightly more compact without losing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (5 factors, 5 parameters) and no output schema, the description covers what the tool does, how parameters are used, and what the output contains (score, breakdown, suggestions). It is complete enough for an agent to understand when and how to invoke it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds meaning by explaining how each parameter contributes to the scoring factors (e.g., text and platforms are used, scheduledTime improves time score, mediaUrl/mediaUrls affect media presence). This goes beyond the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Score' and resource 'content quality' with a specific scale (0-100) and timing ('before publishing'). It effectively distinguishes from sibling tools like critique_post and validate_content by specifying the quantitative scoring approach and pre-publishing context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage before publishing but does not explicitly tell when not to use this tool or mention alternatives among siblings (e.g., critique_post, validate_content). The guidance is implicit but lacks explicit exclusions or comparative context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_default_brand_voice (Set Default Brand Voice) · Grade: A
Set a brand voice profile as the default. The default brand voice is automatically used by all AI content generation tools.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The brand voice profile ID to set as default |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description indicates a write operation (setting), which aligns with annotations (readOnlyHint=false). Annotations already cover safety (destructiveHint=false). Beyond that, the description adds only one behavioral detail, the downstream effect on AI generation tools, which is helpful but not extensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no extra words. The purpose and effect are front-loaded. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple setter tool with one required parameter and no output schema, the description fully covers what the tool does and its impact (used by AI generation tools). Nothing missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters with a clear description for 'id'. The tool description does not add extra meaning beyond the schema's parameter description. Baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states action ('Set as default') and resource ('brand voice profile'), and explains its downstream effect ('automatically used by all AI content generation tools'). Distinguishes from sibling tools like create_brand_voice and list_brand_voices.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context: to make a brand voice the default for AI tools. However, does not explicitly state when not to use it or mention alternatives, though the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
suggest_next_schedule_time (Suggest Next Schedule Time) · Grade: A · Read-only · Idempotent
Suggest the next recommended publish time using best-times insights or configured queue slots without creating a post.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Use 'best_time' for analytics-based suggestions or 'queue' for queue-slot suggestions. | |
| after | No | Optional lower bound. Suggestions should be strictly after this time. | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| platform | No | Single platform to optimize for. | |
| timezone | No | Optional IANA timezone for the returned recommendation. | |
| mediaType | No | Optional content media type hint for best-time recommendations. | |
| platforms | No | Optional platform set to optimize across. |
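A minimal sketch of a best-time request; all values are illustrative:

```typescript
// Hypothetical suggest_next_schedule_time arguments; read-only, no post created.
const suggestArgs = {
  mode: "best_time",             // or 'queue' for queue-slot suggestions
  platform: "linkedin",          // single platform to optimize for
  after: "2025-01-10T00:00:00Z", // suggestions strictly after this time
  timezone: "America/Chicago",   // timezone for the returned recommendation
};
```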
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, non-destructive behavior. The description adds 'without creating a post' but no additional behavioral context like rate limits or error cases.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with essential information, no redundancy, and the key action is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description does not clarify the return format or how agents should use the suggested time. For a simple read operation, it's adequate but incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description mentions 'best-times' and 'queue' modes, but adds no extra meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it suggests a publish time without creating a post, distinguishing it from sibling tools like schedule_content or get_best_times.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for previewing, but does not explicitly state when to use this over alternatives like schedule_content or suggest_queue_slots.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
suggest_queue_slots (Suggest Queue Slots) · Grade: A · Read-only · Idempotent
Suggest reusable queue slots derived from best-time insights for one or more platforms.
| Name | Required | Description | Default |
|---|---|---|---|
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| timezone | No | Optional IANA timezone for the suggested local queue times. | |
| platforms | No | Optional platform list. If omitted, suggestions may be generated for all supported platforms. | |
| slotsPerPlatform | No | How many suggested slots to return per platform. |
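A minimal sketch, with assumed platform identifiers:

```typescript
// Hypothetical suggest_queue_slots arguments; values are illustrative.
const slotArgs = {
  platforms: ["linkedin", "x"], // omit to cover all supported platforms
  slotsPerPlatform: 3,          // how many suggestions per platform
  timezone: "Australia/Sydney", // local times for the suggested slots
};
```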
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations (readOnlyHint, idempotentHint, destructiveHint) already indicate safe behavior. The description adds context about using 'best-time insights', which is useful beyond annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence of about a dozen words: extremely concise, with no superfluous information. Every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description does not explain the format of 'queue slots' or what the return looks like. While annotations cover safety, the description lacks some behavioral detail for a suggestion tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline 3 applies. The description adds 'derived from best-time insights' but does not elaborate on individual parameters beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool suggests reusable queue slots derived from best-time insights for platforms. The verb 'suggest' and resource 'queue slots' are specific, and it distinguishes from siblings like 'get_best_times' and 'suggest_next_schedule_time' by focusing on reusable slots.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for generating slot suggestions but does not explicitly state when to use this tool versus alternatives like 'get_best_times' or 'suggest_next_schedule_time'. No direct comparisons or exclusions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
summarize_mentions (Summarize Mentions) · Grade: B
Cluster and summarize recent social mentions by theme and sentiment using AI.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max mentions to analyze (default: 30, max: 50) | |
| since | No | ISO date string — only include mentions after this date | |
| platform | No | Filter to a specific platform | |
| keyword_id | No | Filter to mentions matching a specific keyword |
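A hedged sketch of a filtered summary request; the keyword ID and platform value are placeholders:

```typescript
// Hypothetical summarize_mentions arguments; filter values are illustrative.
const mentionArgs = {
  limit: 50,                  // max 50 per the schema (default 30)
  since: "2025-01-01",        // ISO date lower bound for mentions
  platform: "x",              // assumed platform identifier
  keyword_id: "kw_brandname", // hypothetical keyword filter ID
};
```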
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide non-read-only and non-destructive hints. The description adds 'using AI', implying non-deterministic behavior (consistent with openWorldHint=true). However, it does not clarify if the tool creates new data or merely returns a summary, nor does it mention any side effects or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that front-loads the core action. No wasted words, but it is very brief. It could be slightly expanded to include context without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and no behavioral details beyond annotations, the description is adequate but minimal. It lacks information about the output format, how to interpret results, or any constraints. For a tool with 4 parameters, it does not fully compensate for the missing output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all four parameters. The description does not reference any parameters or add meaning beyond the schema. Baseline 3 is appropriate since the description adds no parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool clusters and summarizes social mentions by theme and sentiment using AI. It specifies the verb (cluster and summarize), the resource (social mentions), and the method (AI). Among many sibling tools, none directly overlap, making its purpose distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. The description does not mention prerequisites, typical use cases, or situations where this tool should be avoided. Given the large number of sibling tools, explicit guidance is missing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trigger_scheduled_post (Trigger Scheduled Post) · Grade: A
Manually trigger a scheduled post to publish immediately.
| Name | Required | Description | Default |
|---|---|---|---|
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| scheduleId | Yes | The ID of the scheduled post to trigger |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the key behavioral trait: it publishes the post immediately. Annotations already indicate non-read-only and non-destructive, which align. No additional side effects (e.g., irreversibility) are mentioned, but for a simple trigger this is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that front-loads the action and result, with no extraneous words. It earns its place efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple trigger tool with only one required parameter and a clear action, the description provides complete context. No output schema is needed as the result is obvious.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and parameter descriptions in the schema are clear. The description adds no extra semantic information beyond what the schema provides, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('trigger'), resource ('scheduled post'), and immediate effect ('publish immediately'). It distinguishes from scheduling or editing tools by focusing on manual immediate publication.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use it: when you want to override a scheduled post's timing and publish now. However, it does not explicitly mention when not to use it or compare with alternatives like 'retry_scheduled_post'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trigger_workflow (Trigger Workflow) · Grade: B
Manually trigger a workflow run.
| Name | Required | Description | Default |
|---|---|---|---|
| inputData | No | Input data to pass to the workflow run | |
| workflowId | Yes | Workflow ID to trigger |
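The schema leaves the inputData shape open, so the payload below is purely illustrative. A minimal sketch:

```typescript
// Hypothetical trigger_workflow arguments; the inputData shape is assumed.
const triggerArgs = {
  workflowId: "wf_123",
  inputData: { topic: "spring sale", tone: "playful" }, // illustrative payload
};
```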
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a mutating (readOnlyHint=false) and non-destructive (destructiveHint=false) operation, but the description adds no further behavioral details. It does not mention side effects (e.g., starting an execution, error states) or any constraints like rate limits. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence of 5 words with no redundant information. It is appropriately front-loaded and concise for a simple trigger action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool without an output schema, the description is incomplete. It does not indicate what the tool returns (e.g., a run ID) or what happens on success/failure. While annotations and schema cover basics, the lack of result behavior is a gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already fully describes both parameters (workflowId and inputData) with 100% coverage. The description adds no additional meaning beyond the schema, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'trigger' and resource 'workflow run', which effectively distinguishes it from siblings like 'create_workflow' (defines a workflow) and 'delete_workflow' (removes a workflow).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as automated scheduling or other trigger tools like 'trigger_scheduled_post'. There is no mention of prerequisites (e.g., workflow must exist) or context about manual vs. automatic triggers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_ad_campaign (Update Ad Campaign) · Grade: B
Update campaign settings (name, budget, dates, status).
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | New campaign name | |
| status | No | Change campaign status | |
| endDate | No | Updated end date | |
| campaignId | Yes | Campaign ID | |
| budgetAmount | No | Updated budget amount |
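A hedged sketch of a partial update; the status value and budget units are assumptions, since the schema does not enumerate them:

```typescript
// Hypothetical update_ad_campaign arguments; merge behavior is undocumented,
// so treating omitted fields as unchanged is an assumption.
const campaignArgs = {
  campaignId: "camp_123", // required
  budgetAmount: 500,      // assumed currency units; the schema does not say
  status: "paused",       // assumed status value
  endDate: "2025-03-31",  // updated end date
};
```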
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate non-read-only (readOnlyHint=false) and non-destructive (destructiveHint=false), which aligns with the 'update' description. However, the description adds no additional behavioral context such as permission requirements, field merging behavior, or side effects. The annotation coverage is decent, so the description carries little extra burden but still misses opportunities to clarify behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no extraneous words. It is front-loaded with the verb and resource, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a 5-parameter schema with full coverage and no output schema, the description is minimally adequate. It does not mention the required campaignId or potential return values, but the schema provides necessary details. It could be more complete by noting constraints or common use cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description lists parameter categories (name, budget, dates, status) but does not add meaning beyond the schema's own parameter descriptions. It groups them but provides no new details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Update campaign settings (name, budget, dates, status)', which specifies the verb (update), resource (campaign settings), and specific fields. It is easily distinguishable from sibling tools like create_ad_campaign and list_ad_campaigns.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention that it is for modifying existing campaigns, nor does it reference sibling tools like create_ad_campaign for creation or get_ad_performance for analysis.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_agent_policy (Update Agent Policy) · Grade: B
Create or update a policy/guardrail for an agent.
Policy types:
- brand_safety - Content must align with brand guidelines
- platform_compliance - Must follow platform ad policies
- approval_gate - Require human approval before execution
- budget_limit - Cap spending or resource usage
- content_filter - Filter certain topics or language
| Name | Required | Description | Default |
|---|---|---|---|
| rules | Yes | Policy rules | |
| active | No | Whether policy is active | |
| agentId | Yes | Agent to set policy for | |
| policyType | Yes | Policy type |
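The rules object is undocumented beyond its name, so the rule shown is an assumption. A hedged sketch of a budget_limit policy:

```typescript
// Hypothetical update_agent_policy arguments; the rules shape is assumed,
// since the schema only labels it 'Policy rules'.
const policyArgs = {
  agentId: "agent_123",
  policyType: "budget_limit",    // one of the five documented types
  rules: { maxDailySpend: 100 }, // illustrative rule object
  active: true,
};
```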
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate mutability (readOnlyHint=false) and non-destructive mutation (destructiveHint=false). The description adds no further behavioral context, such as whether updating a policy affects active agents or requires special permissions. With annotations carrying some burden, the description still lacks deeper disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: one sentence and a bullet list. It is front-loaded with the main action and then elaborates on types. No extraneous information, though the term 'guardrail' is slightly redundant with 'policy'.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers policy types and basic purpose but misses details like what happens on update vs. create, the meaning of the 'active' field, or the structure of rule objects. With no output schema, the agent does not know what the response will be. Adequate but with notable gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, baseline 3. The description adds value by explaining the meanings of each policy type (e.g., 'brand_safety - Content must align with brand guidelines'), which the schema enums do not include. This helps an AI agent select the correct policyType and understand the purpose of rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates or updates agent policies with a list of policy types. However, it does not explicitly differentiate from sibling tools like get_agent_policies, which is a complementary read operation, missing an opportunity to clarify the write focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives or when to create vs. update. Missing prerequisites or context (e.g., requiring an existing agent). The policy type list hints at use cases but does not provide when-not-to-use instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_calendar_event (Update Calendar Event) · Grade: A · Idempotent
Update one calendar event by rescheduling it, changing status, reassigning it, or storing metadata.
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | Compatibility alias for eventId. | |
| postId | No | Compatibility alias when the calendar event maps directly to a scheduled post. | |
| status | No | Updated event status. | |
| eventId | No | Calendar event ID or backing scheduled post ID. | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| metadata | No | Optional event metadata patch. | |
| assigneeId | No | Optional assignee user ID. | |
| scheduledAt | No | Updated ISO 8601 schedule time for the event. |
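A hedged sketch of a reschedule-and-reassign call; the status value and IDs are assumed:

```typescript
// Hypothetical update_calendar_event arguments; a partial patch is assumed,
// consistent with the idempotent annotation.
const eventArgs = {
  eventId: "evt_123",                  // or a backing scheduled post ID
  scheduledAt: "2025-01-20T10:00:00Z", // reschedule the event
  status: "approved",                  // assumed status value
  assigneeId: "user_456",              // reassign to another user
};
```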
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=false, destructiveHint=false, idempotentHint=true. Description adds no extra behavioral context beyond 'Update'. No contradiction, but no added value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with main action. No wasted words. Appropriate length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers main update actions but omits team scope, alias parameters, and return behavior. Schema covers details, but description could provide more holistic context for a tool with 8 params and no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. Description groups actions to parameters but adds minimal meaning beyond schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'Update' with resource 'calendar event' and lists specific actions: rescheduling, status change, reassignment, metadata. Distinguishes from siblings like bulk_update_calendar_events and cancel_recurring_series.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance. Does not mention alternatives or prerequisites. Context signals show many sibling tools, but description lacks usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_conversation (Update Conversation) · Grade: C
Update conversation status, priority, assignment, or tags.
| Name | Required | Description | Default |
|---|---|---|---|
| tags | No | Set tags | |
| status | No | ||
| priority | No | ||
| sentiment | No | ||
| assignedTo | No | Assign to team member ID | |
| conversationId | Yes | Conversation ID |
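The schema leaves status, priority, and sentiment undescribed, so the enum values below are guesses. A hedged sketch:

```typescript
// Hypothetical update_conversation arguments; enum values are assumed.
const convoArgs = {
  conversationId: "conv_123",
  status: "closed",       // assumed status value
  priority: "high",       // assumed priority value
  assignedTo: "user_456", // team member ID
  tags: ["billing", "vip"],
};
```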
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate non-destructive write. Description adds no behavioral details beyond that, such as idempotency or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence, front-loading the purpose. Could be slightly more informative without adding length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Missing important context like partial update behavior, required conversation existence, and return value. For a 6-param update tool with no output schema, more detail is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description summarizes updateable fields (status, priority, assignment, tags) which adds minimal value over schema (50% coverage). It does not clarify types or constraints beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it updates conversation fields (status, priority, assignment, tags), distinguishing it from related tools like get_conversation (read) and reply_to_conversation (reply). However, it omits 'sentiment' which is in the schema.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives, prerequisites, or context. With many sibling tools, this is a missed opportunity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_library_item · Update Library Item · A
Update an existing content library item.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The content library item ID | |
| tags | No | Tags | |
| text | No | New content text | |
| type | No | Content type | |
| title | No | New title | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| category | No | Category | |
| mediaUrl | No | Media URL | |
| evergreenEnabled | No | Enable evergreen | |
| evergreenIntervalDays | No | Republish interval in days |
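As a sketch of how the evergreen fields might be combined, the item ID and interval below are invented, and whether unspecified fields are left untouched is an assumption. Client here is the same SDK type as in the earlier example:

import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical: re-publish a library item every 30 days.
// Assumes fields omitted from the arguments are left unchanged.
export async function enableEvergreen(client: Client) {
  return client.callTool({
    name: "update_library_item",
    arguments: {
      id: "lib_456",             // placeholder item ID
      evergreenEnabled: true,
      evergreenIntervalDays: 30, // invented interval
    },
  });
}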
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description 'Update' aligns with annotations (readOnlyHint=false, destructiveHint=false). No additional behavioral traits (e.g., partial vs full update, auth requirements) are disclosed beyond what annotations already provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-sentence description is efficient and free of fluff. However, it could be slightly expanded to mention the required 'id' parameter without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 10 parameters and no output schema, the description is too sparse. It fails to explain whether updates are partial or full replacements, or what the response contains, leaving significant gaps for a complex tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description is not expected to elaborate on parameters. It adds no extra meaning over the parameter descriptions already present in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Update an existing content library item' clearly states the verb (update) and resource (existing content library item), distinguishing it from siblings like create_library_item and delete_library_item.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the item must exist for updating, but does not provide explicit when-to-use or when-not-to-use guidance compared to related tools like create or delete.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_recurring_series · Update Recurring Series · A · Idempotent
Update the recurrence rule or end date for future occurrences in a recurring series.
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | Compatibility alias for seriesId. | |
| team_id | No | Team ID to operate in team scope. Get available teams with list_teams. If omitted, uses personal scope. | |
| seriesId | No | Recurring series parent scheduled post ID. | |
| recurrenceRule | No | Replacement RRULE string. | |
| recurrenceEndDate | No | Replacement ISO 8601 end date for the series. |
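Since "Replacement RRULE string" gives no syntax, the sketch below assumes standard iCalendar (RFC 5545) recurrence syntax; the series ID, rule, and end date are placeholders:

import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical: move future occurrences to Mon/Wed/Fri and end the series
// at year end. RFC 5545 RRULE syntax is assumed, not confirmed by the schema.
export async function rescheduleSeries(client: Client) {
  return client.callTool({
    name: "update_recurring_series",
    arguments: {
      seriesId: "post_789",                      // placeholder parent post ID
      recurrenceRule: "FREQ=WEEKLY;BYDAY=MO,WE,FR",
      recurrenceEndDate: "2025-12-31T00:00:00Z", // ISO 8601, per the schema
    },
  });
}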
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate idempotency and non-destructiveness. The description adds value by specifying that the update affects only future occurrences, which is not in annotations. It does not cover auth or rate limits but the context is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that conveys the essential purpose without any superfluous words. It is highly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple update tool with no output schema, the description provides the core functionality. It could note that either seriesId or id must be supplied, but the schema covers that. Overall, it is reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description does not add meaning beyond listing the updateable fields (recurrence rule, end date). The baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool updates recurrence rule or end date for future occurrences in a recurring series, using specific verb and resource. It distinguishes from siblings like cancel_recurring_series and list_recurring_series.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for updating future occurrences but does not explicitly state when to use this vs alternatives like cancel_recurring_series, nor does it provide exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_workflow · Update Workflow · C
Update a workflow's configuration or status.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | ||
| steps | No | ||
| active | No | ||
| workflowId | Yes | Workflow ID |
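Because only workflowId carries a schema description, the sketch below sticks to the documented field plus the self-explanatory 'active' flag; the shape of 'steps' is undocumented, so it is deliberately omitted:

import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical: pause a workflow. The 'steps' parameter is skipped because
// its structure is not documented in the schema.
export async function pauseWorkflow(client: Client) {
  return client.callTool({
    name: "update_workflow",
    arguments: {
      workflowId: "wf_321", // placeholder ID
      active: false,        // assumed to deactivate the workflow
    },
  });
}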
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds minimal behavioral context beyond annotations. Annotations indicate a non-read-only, non-destructive mutation, but the description does not disclose potential side effects, required permissions, rate limits, or the effect of updating 'active' status. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. However, it is slightly under-specified; a bit more detail would improve clarity without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (mutation with 4 parameters, no output schema), the description is too minimal. It does not explain the update process, response format, or constraints, leaving the agent with significant gaps in understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With only 25% schema description coverage (only workflowId described), the description must compensate. It vaguely mentions 'configuration or status', which maps to name/steps/active but does not explain each parameter's purpose or constraints. This is insufficient for an agent to correctly set parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'update' and the resource 'workflow', and mentions what can be updated ('configuration or status'). It distinguishes this tool from siblings like create_workflow, delete_workflow, list_workflows, and trigger_workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The description only implies it is for modifying existing workflows, but does not specify prerequisites, common use cases, or contrast with related tools like update_agent_policy or update_calendar_event.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
upload_media · Upload Media · A
FOR CLAUDE DESKTOP ONLY (with filesystem access). For Claude.ai/web: Use create_upload_session instead - it provides a browser upload link.
Upload local media to cloud storage, returning a public HTTPS URL.
WHEN TO USE: • Instagram, LinkedIn, Threads, X: REQUIRED for local files before calling publish_content • TikTok: NOT NEEDED - pass local path directly to publish_content
SUPPORTED FORMATS: • Images: jpg, png, gif, webp (max 10MB) • Videos: mp4, mov, webm (max 100MB)
Returns { url: 'https://...' } for use in publish_content mediaUrl parameter.
| Name | Required | Description | Default |
|---|---|---|---|
| folder | No | Optional folder name for organizing uploads | |
| filePath | No | Local file path to upload (e.g., ~/Photos/image.jpg) | |
| mediaUrl | No | Existing public URL to validate (alternative to filePath) |
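The two-step flow the description implies (upload first, then pass the returned URL to publish_content) might look like the sketch below. The file path comes from the schema's own example; the folder name is invented, and how the documented { url } payload is embedded in the MCP result content is an assumption:

import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical: upload a local image, then reuse the returned URL.
export async function uploadMedia(client: Client) {
  const upload = await client.callTool({
    name: "upload_media",
    arguments: {
      filePath: "~/Photos/image.jpg", // example path taken from the schema
      folder: "launch-assets",        // invented folder name
    },
  });
  // The description documents a { url: 'https://...' } payload, but not how
  // it is wrapped in the result content, so inspect `upload` before wiring
  // the URL into publish_content's mediaUrl parameter.
  return upload;
}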
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses beyond annotations: returns a public HTTPS URL, supported formats and size limits, and the return format. Annotations indicate write operation (readOnlyHint=false) and non-destructive, so no contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections (context, when to use, formats, output). Every sentence adds value, no fluff. Front-loaded with critical usage context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description explicitly states return format { url }. Covers all necessary aspects: prerequisites, platform-specific behavior, file restrictions. Complete for an upload tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline 3. Description adds context: folder for organization, filePath for local upload, mediaUrl for validation. This adds meaning beyond schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool uploads local media to cloud storage and returns a public URL. Distinguishes from sibling create_upload_session by specifying it's for Claude Desktop with filesystem access, while create_upload_session is for web.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use (for Instagram, LinkedIn, Threads, X before publish_content) and when not needed (TikTok). Also distinguishes between Claude Desktop and Claude.ai/web, referencing the alternative create_upload_session.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
upvote_product · Upvote Product Hunt Product · A · Destructive
Upvote a product on Product Hunt.
REQUIREMENTS: • Must have Product Hunt account connected • Write access requires app whitelisting by Product Hunt
Provide the Product Hunt post ID (not the slug).
| Name | Required | Description | Default |
|---|---|---|---|
| postId | Yes | The Product Hunt post ID to upvote |
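A minimal sketch, mostly useful for the ID-not-slug distinction the description calls out; the post ID value is a placeholder:

import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical: upvote by numeric post ID. Passing the slug (e.g. "my-product")
// would be wrong per the description.
export async function upvote(client: Client) {
  return client.callTool({
    name: "upvote_product",
    arguments: { postId: "123456" }, // placeholder; must be the ID, not the slug
  });
}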
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds context beyond the annotations by specifying auth requirements (account connection, whitelisting). Annotations already indicate destructiveHint=true and readOnlyHint=false, which the description supports. However, it does not disclose side effects, such as whether the vote is reversible or what happens if the product is already upvoted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with three sentences: purpose, requirements, and parameter clarification. Every sentence is necessary and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one required parameter and no output schema, the description covers the core requirements and parameter nuance. It doesn't explain the response, but for a straightforward action this is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description for postId. The description adds value by explicitly warning against using the slug instead of the ID. This guidance is helpful and goes beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's action ('upvote') and resource ('a product on Product Hunt'). It is distinct from sibling tools and uses a specific verb+resource pattern.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit requirements: having a connected Product Hunt account and needing app whitelisting for write access. It also clarifies to use the post ID, not the slug. It does not explicitly mention when not to use or alternatives, but the context is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_content · Validate Content · A · Read-only · Idempotent
Validate content against platform requirements BEFORE publishing.
USE THIS WHEN: • Unsure if content will work on target platforms • Publishing to multiple platforms with different requirements • Want to catch errors before attempting publish
Returns specific errors (e.g., 'TikTok requires video', 'Instagram needs media') and warnings (e.g., 'text close to character limit').
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | ||
| platforms | Yes | Platforms to validate against |
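Because the 'content' parameter has no schema description, its shape below is a guess (a simple text field); the platform identifier strings are also assumed:

import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical: pre-flight a text-only draft against two platforms whose
// requirements differ. Both the content shape and the platform strings
// ("instagram", "tiktok") are guesses, not confirmed by the schema.
export async function preflight(client: Client) {
  return client.callTool({
    name: "validate_content",
    arguments: {
      content: { text: "Big news: v2 is live!" },
      platforms: ["instagram", "tiktok"],
    },
  });
}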
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the non-mutating nature is clear. The description adds that it returns specific errors/warnings, which is useful context but not rich behavioral detail. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise paragraphs with a bullet list for usage guidelines. Front-loaded purpose, every sentence adds value, no redundant or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a validation tool with a complex nested schema and no output schema, the description explains the return type (errors and warnings) with concrete examples. While it doesn't detail exact output structure, it provides enough context for an agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50%, meaning half the properties lack descriptions in the schema. The description does not elaborate on any parameter beyond what the schema already provides, failing to compensate for the gap. The examples it gives concern output errors, not parameter usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Validate' and identifies the resource 'content against platform requirements' with a clear context of 'BEFORE publishing'. This clearly distinguishes it from sibling tools like publish_content, schedule_content, or edit_post.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Includes a dedicated 'USE THIS WHEN' section with three explicit scenarios (unsure if content works, multi-platform publishing, catching errors before publish). No explicit 'when NOT to use', but the positive guidance is strong enough for an agent to infer appropriateness.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.