synter-ads
Server Details
Manage ad campaigns across Google, Meta, LinkedIn, Reddit, TikTok, and more via AI.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: Synter-Media-AI/mcp-server
- GitHub Stars: 9
- Server Listing: Synter MCP Server
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
63 tools

build_lookalike_audience (quality grade: C)
Build ML-based lookalike audience from seed customers (10 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| platform | Yes | | |
| seed_audience | Yes | | |
| expansion_factor | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
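
Since none of the parameters are documented, the sketch below shows how an MCP client might invoke this tool through the official MCP Python SDK over Streamable HTTP. The server URL and every argument value are illustrative assumptions; only the parameter names and required flags come from the schema above.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder; the listing does not show the real URL

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # All values are hypothetical: accepted platforms, the
            # seed_audience format, and the expansion_factor range are
            # not documented by the schema.
            result = await session.call_tool(
                "build_lookalike_audience",
                arguments={
                    "platform": "meta",               # required
                    "seed_audience": "customers_q3",  # required
                    "expansion_factor": 2,            # optional
                },
            )
            print(result.content)

asyncio.run(main())
```
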
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It mentions '10 credits', indicating a cost, but lacks critical behavioral details: whether this is a read or write operation (the verb 'Build' implies a write), expected processing time, permissions required, rate limits, or what happens upon execution (e.g., whether a new audience object is created). The ML-based aspect is noted but without elaboration on model behavior or limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose. Every word earns its place: 'Build ML-based lookalike audience' states the action, 'from seed customers' specifies the input, and '(10 credits)' adds cost context. However, it could be more structured with separate usage or parameter notes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has an output schema (which reduces the need to describe returns), but with no annotations, three parameters at 0% schema coverage, and the added complexity of ML-based processing, the description is incomplete. It covers the basic purpose and cost but misses parameter details, behavioral traits, and usage context. For a tool that likely creates significant resources via ML, more guidance is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions 'seed customers' which relates to 'seed_audience' parameter, but doesn't explain what 'seed_audience' should contain (e.g., customer IDs, segments) or format. It doesn't address 'platform' or 'expansion_factor' at all, leaving three parameters largely undocumented. The description adds minimal value beyond the schema titles.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Build ML-based lookalike audience') and resource ('from seed customers'), specifying it's a machine learning process. It distinguishes from siblings like 'list_audiences' or 'sync_audience' by focusing on creation rather than listing or syncing. However, it doesn't explicitly differentiate from other creation tools like 'create_campaign_for_audience' in terms of audience type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions '10 credits' which implies a cost, but provides no explicit guidance on when to use this tool versus alternatives. There's no mention of prerequisites (e.g., needing seed customers first), comparison to other audience tools, or scenarios where this is preferred over other methods. The credit cost is noted but without context on alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_campaign_for_audience (quality grade: C)
Create a campaign targeting an existing audience (20 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| headline | No | | |
| platform | Yes | | |
| final_url | No | | |
| account_id | No | | |
| audience_id | Yes | | |
| description | No | | |
| daily_budget | No | | |
| campaign_name | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
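
With 0% schema coverage, even a plausible call has to be guessed at. A hypothetical argument payload, with every value an assumption, might look like:

```python
# Hypothetical arguments for create_campaign_for_audience; only the names
# and required flags come from the schema.
arguments = {
    "platform": "google",                        # required; accepted values not documented
    "audience_id": "aud_123",                    # required; ID format not documented
    "campaign_name": "Q4 Lookalike Push",        # required
    "daily_budget": 50,                          # optional; units/currency not documented
    "headline": "Try Synter",                    # optional
    "description": "AI-managed ads",             # optional
    "final_url": "https://example.com/landing",  # optional
    "account_id": "123-456-7890",                # optional
}
```
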
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions '20 credits', which hints at cost/consumption behavior, but doesn't disclose other critical traits: whether this is a mutation (implied by 'Create'), what permissions are needed, whether the campaign starts immediately, what happens on failure, or what the response contains. For a campaign creation tool with 8 parameters and no annotations, this is insufficient behavioral transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise - a single sentence that communicates the core action and includes cost information. Every word earns its place, and it's front-loaded with the essential information. There's no wasted verbiage or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (campaign creation with 8 parameters), the lack of annotations, and 0% schema coverage, the description is incomplete even though an output schema is present. While the output schema may cover return values, the description doesn't address parameter meanings, behavioral expectations, or usage context. For a tool that likely creates paid advertising campaigns with budget implications, this level of documentation is inadequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 8 parameters (3 required, 5 optional), the description provides absolutely no information about any parameters. It doesn't explain what 'platform' accepts, what format 'audience_id' should be in, what 'daily_budget' units are, or any other parameter meaning. The description fails to compensate for the complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Create' and the resource 'campaign targeting an existing audience', making the purpose specific and understandable. It distinguishes from siblings like 'create_campaign_plan' by specifying it targets an existing audience, though it doesn't explicitly contrast with all possible alternatives. The mention of '20 credits' adds operational context but doesn't fully differentiate it from other campaign-related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'create_campaign_plan', 'forecast_campaign', or 'enable_campaign'. It mentions 'targeting an existing audience' which implies a prerequisite of having an audience, but doesn't specify when this is the appropriate choice among campaign creation methods or what alternatives exist for different scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_campaign_plan (quality grade: B)
Create or update a campaign launch plan (5 credits). Use plan_key for idempotency.
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | | |
| plan_key | Yes | | |
| brief_json | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
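
To illustrate the idempotency note, a hypothetical payload (values assumed) could be:

```python
# Hypothetical arguments for create_campaign_plan. Reusing the same
# plan_key presumably updates the existing plan rather than creating a
# duplicate, per the idempotency note; this is inferred, not documented.
arguments = {
    "title": "Q4 Launch Plan",        # required
    "plan_key": "q4-launch-2025",     # required; idempotency key
    "brief_json": {"goal": "leads"},  # optional; expected structure not documented
}
```
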
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions '5 credits' (a cost implication) and idempotency via plan_key, which adds value. However, it doesn't describe the permissions needed, how the tool decides between creating and updating (presumably based on whether the plan_key already exists), rate limits, or what the output contains; these are significant gaps for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—two short sentences with zero waste. It's front-loaded with the core purpose and includes essential details (credits, idempotency) efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 3 parameters, 0% schema coverage, and no annotations (though an output schema exists), the description is incomplete. It covers cost and idempotency but misses parameter details and behavioral context like permissions, and doesn't leverage the output schema to explain returns. It's minimally adequate but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so parameters are undocumented in the schema. The description only mentions 'plan_key' for idempotency, adding minimal semantics. It doesn't explain 'title' or 'brief_json' (their purposes, formats, or constraints), leaving most parameters unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create or update') and resource ('campaign launch plan'), making the purpose understandable. It doesn't explicitly distinguish from all siblings like 'upsert_plan_entity' or 'create_campaign_for_audience', but the focus on 'campaign launch plan' provides reasonable differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'Use plan_key for idempotency,' which provides some usage context about idempotent operations. However, it doesn't specify when to use this tool versus alternatives like 'upsert_plan_entity' or 'create_campaign_for_audience,' nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_document (quality grade: B)
Create a document in the Campaign IDE editor (free). Perfect for reports, audits, strategy docs, and plans that can be collaboratively edited.
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | | |
| content | Yes | | |
| organization_id | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
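
A hypothetical payload, assuming 'content' accepts markdown (the schema does not say), might be:

```python
# Hypothetical arguments for create_document; values are assumptions.
arguments = {
    "title": "Paid Media Audit",     # required
    "content": "# Findings\n\n...",  # required; accepted markup not documented
    "organization_id": "org_123",    # optional
}
```
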
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the tool is 'free' and supports 'collaboratively edited' documents, adding some behavioral context. However, it doesn't disclose critical details like required permissions, rate limits, whether the document is saved automatically, or error handling, which are important for a creation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences, front-loading the main purpose and following with use cases. There's no wasted text, but it could be slightly more structured by separating functional details from examples.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there's an output schema, the description doesn't need to explain return values. However, for a creation tool with 3 parameters and no annotations, it lacks sufficient detail on behavior and parameter usage. It's minimally adequate but has clear gaps in transparency and parameter guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It doesn't explain any parameters, such as what 'title' and 'content' should contain, or the purpose of 'organization_id'. The description adds no meaning beyond the schema, failing to address the coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a document') and resource ('in the Campaign IDE editor'), specifying it's free and listing use cases like reports and audits. It distinguishes from siblings like 'create_google_doc' by mentioning the Campaign IDE editor, but doesn't explicitly contrast with all similar tools like 'create_google_sheet' or 'publish_plan_document'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for collaborative editing of documents like reports and plans, providing some context. However, it lacks explicit guidance on when to use this tool versus alternatives such as 'create_google_doc' or 'publish_plan_document', and doesn't mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_google_doc (quality grade: C)
Create a Google Doc from markdown or HTML content (5 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| html | No | | |
| title | Yes | | |
| content | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
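
Since the description names two input formats but the schema marks both optional, here is a sketch under the assumption that 'content' and 'html' are alternatives:

```python
# Hypothetical arguments for create_google_doc. Whether content and html
# are mutually exclusive is not documented; this assumes they are.
arguments = {
    "title": "Strategy Doc",     # required
    "content": "# Plan\n\n...",  # optional; presumably markdown
    # "html": "<h1>Plan</h1>",   # optional; presumably used instead of content
}
```
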
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the cost ('5 credits'), which is useful context, but fails to describe critical behaviors such as required permissions, whether the document is saved to Google Drive, what happens on failure, or the output format. For a mutation tool with zero annotation coverage, this is inadequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that communicates the core purpose and cost. It's appropriately front-loaded with the main action. However, the parenthetical cost note could be integrated more smoothly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there's an output schema (which handles return values), the description's main gaps are behavioral transparency and parameter semantics. For a 3-parameter mutation tool with no annotations, the description should provide more context about permissions, error handling, and parameter usage to be complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'markdown or HTML content' which hints at the 'content' and 'html' parameters, but doesn't explain their relationship, format requirements, or that 'title' is required. The description adds minimal value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a Google Doc') and specifies the input format ('from markdown or HTML content'), which distinguishes it from generic document creation tools. However, it doesn't explicitly differentiate from the sibling 'create_document' tool, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions the cost ('5 credits'), which provides some usage context, but offers no guidance on when to use this tool versus alternatives like 'create_document' or other sibling tools. There are no explicit when/when-not instructions or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_google_sheet (quality grade: C)
Create a Google Sheet from tabular data (5 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| rows | No | | |
| title | Yes | | |
| headers | No | | |
| json_data | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
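
A sketch of one plausible payload, assuming headers plus rows is one input path and json_data another (the schema does not document the relationship):

```python
# Hypothetical arguments for create_google_sheet; the headers/rows vs.
# json_data relationship is assumed, not documented.
arguments = {
    "title": "Spend by Channel",                # required
    "headers": ["channel", "spend"],            # optional
    "rows": [["google", 1200], ["meta", 800]],  # optional
    # "json_data": [{"channel": "google", "spend": 1200}],  # assumed alternative input
}
```
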
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions a credit cost ('5 credits'), which is useful operational context, but fails to describe other critical behaviors: what permissions are needed, where the sheet is created (the user's drive vs. a shared drive), whether it's editable, what happens on failure, or the format of the created sheet. For a creation tool with zero annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise - a single sentence with no wasted words. It's front-loaded with the core purpose and includes a useful operational constraint (credit cost). Every element earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool creates a resource (a complex operation) and has 4 parameters with 0% schema coverage and no annotations, though it does have an output schema; against that bar, the description is minimally adequate. Because the output schema exists, return values don't need describing, but the description should do more to explain parameter usage and behavioral constraints for a creation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 4 parameters (1 required), the description provides no information about parameters beyond implying 'tabular data' input. It doesn't explain what 'rows', 'title', 'headers', or 'json_data' mean, their relationships, or how they map to the created sheet structure. The description fails to compensate for the complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create') and resource ('Google Sheet') with additional context about the data source ('from tabular data'). It distinguishes from sibling tools like 'create_document' and 'create_google_doc' by specifying the spreadsheet format. However, it doesn't explicitly mention what distinguishes it from other data creation tools in the list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There's no mention of prerequisites, when this tool is appropriate versus other creation tools, or any exclusions. The credit cost mention ('5 credits') is a usage constraint but not a guideline for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_landing_page (quality grade: C)
Generate an AI landing page hosted on your custom domain (10 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| slug | No | | |
| style | No | | modern |
| title | Yes | | |
| prompt | Yes | | |
| cta_url | Yes | | |
| cta_text | No | | Get Started |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
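
The two schema defaults are the only documented values; everything else in this hypothetical payload is assumed:

```python
# Hypothetical arguments for create_landing_page.
arguments = {
    "title": "Synter for Agencies",               # required
    "prompt": "Landing page pitching AI ad ops",  # required
    "cta_url": "https://example.com/signup",      # required
    "slug": "agencies",                           # optional
    "style": "modern",                            # optional; schema default "modern"
    "cta_text": "Get Started",                    # optional; schema default "Get Started"
}
```
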
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions a cost ('10 credits'), which is useful context, but fails to describe key behaviors: whether this is a creation/mutation operation (implied by 'Generate'), what permissions or authentication are needed, if it's rate-limited, what the output looks like, or how errors are handled. For a tool with 6 parameters and no annotations, this is inadequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—a single sentence that front-loads the core action ('Generate an AI landing page') and includes essential constraint ('hosted on your custom domain') and cost ('10 credits'). There's zero wasted verbiage, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, no annotations, but with an output schema), the description is incomplete. It lacks guidance on usage versus siblings, detailed parameter semantics, and behavioral context like authentication or error handling. While the output schema may cover return values, the description doesn't compensate for the gaps in other areas, making it insufficient for effective tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so parameter titles (e.g., 'Title', 'Prompt') carry no semantic meaning. The description adds no parameter information beyond implying a 'custom domain' (not a direct parameter) and cost. It doesn't explain what 'slug', 'style', 'prompt', or the other parameters do, leaving most of the 6 parameters undocumented and unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Generate an AI landing page hosted on your custom domain'. It specifies the verb ('Generate'), resource ('AI landing page'), and key constraint ('hosted on your custom domain'), making the action distinct. However, it doesn't explicitly differentiate from sibling tools like 'publish_landing_page' or 'update_landing_page_html', which prevents a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance: it mentions a cost ('10 credits') but offers no context on when to use this tool versus alternatives (e.g., 'publish_landing_page' or 'update_landing_page_html'). There's no mention of prerequisites, such as needing a custom domain setup via 'setup_custom_domain', or exclusions, leaving the agent with little direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
enable_campaign (quality grade: C)
Enable/resume a paused campaign on any ad platform (5 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| platform | Yes | | |
| account_id | No | | |
| campaign_id | Yes | | |
| account_name | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
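
A hypothetical payload; accepted 'platform' values and ID formats are undocumented, so these are assumptions:

```python
# Hypothetical arguments for enable_campaign.
arguments = {
    "platform": "google",      # required; accepted values not documented
    "campaign_id": "cmp_123",  # required; ID format not documented
    "account_id": "123-456",   # optional
    # "account_name": "Acme",  # optional; possibly an alternative to account_id (assumed)
}
```
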
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions '5 credits' which indicates a cost or resource usage, adding some context. However, it doesn't disclose critical behavioral traits such as required permissions, whether the action is reversible, potential side effects, or response format. For a mutation tool with zero annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with a single sentence that efficiently conveys the core action and a key constraint (credits). There is no wasted verbiage, and it is front-loaded with the essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a mutation operation on campaigns), lack of annotations, and 0% schema description coverage, the description is incomplete. It doesn't explain parameters, behavioral nuances, or usage context. While an output schema exists (which might cover return values), the description fails to provide sufficient guidance for safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate by explaining parameters. It adds no information about the four parameters (platform, account_id, campaign_id, account_name) beyond what the schema provides. The mention of 'any ad platform' loosely relates to the 'platform' parameter but doesn't specify format or constraints, failing to address the coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('enable/resume') and resource ('a paused campaign on any ad platform'), making the purpose specific and understandable. It doesn't explicitly differentiate from sibling tools like 'pause_campaign', but the verb 'enable/resume' inherently contrasts with 'pause', providing some implicit differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions '5 credits' which hints at a cost implication, but doesn't specify prerequisites, conditions for use, or comparisons with sibling tools like 'pause_campaign' or 'list_campaigns' for checking campaign status.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
execute (quality grade: B)
Execute any Synter action: create campaigns, generate AI images/videos, upload to YouTube, manage GTM/GA4, analyze competitors, and more. Use list_available_scripts to see all actions.
| Name | Required | Description | Default |
|---|---|---|---|
| args | No | | |
| action | Yes | | |
| platform | No | | |
| account_id | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
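
Per the description, valid 'action' values come from list_available_scripts; the action and args below are placeholders, not documented values:

```python
# Hypothetical arguments for execute. Discover real action names via the
# list_available_scripts tool first; this action name is invented.
arguments = {
    "action": "generate_ai_image",            # required; placeholder name
    "args": {"prompt": "product hero shot"},  # optional; shape presumably depends on the action
    "platform": "google",                     # optional
    "account_id": "123-456",                  # optional
}
```
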
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions 'Execute any Synter action' but doesn't disclose behavioral traits such as whether this is a read-only or destructive operation, what permissions are required, rate limits, or what the output looks like. For a tool that likely performs various actions (some potentially mutative), this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences: the first states the purpose and examples, and the second provides a usage guideline. It's front-loaded with key information and has zero wasted words, making it highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a general execution tool with 4 parameters and many sibling tools) and the absence of annotations, the description is only somewhat complete despite the output schema (which reduces the need to describe return values). It covers the basic purpose and a prerequisite, but fails to address behavioral aspects, parameter meanings, or differentiation from siblings, leaving gaps for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description doesn't explain any parameters beyond implying 'action' is needed (via 'Execute any Synter action'), but it doesn't clarify what 'args', 'platform', or 'account_id' mean or how they relate to the actions. With 4 parameters and no schema help, the description adds minimal semantic value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Execute[s] any Synter action' and provides examples like 'create campaigns, generate AI images/videos, upload to YouTube', which gives a general sense of purpose. However, it's vague about what 'Synter action' means and doesn't clearly distinguish this from sibling tools like 'execute_campaign_plan' or 'run_gaql_query', which seem to perform specific executions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by stating 'Use list_available_scripts to see all actions', which gives a prerequisite for discovering available actions. It implies this is a general-purpose execution tool, but it doesn't explicitly state when to use this versus more specific sibling tools like 'execute_campaign_plan' or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
execute_campaign_plan (quality grade: A)
Launch an approved campaign plan — activates all entities across platforms in dependency order (10 credits).
| Name | Required | Description | Default |
|---|---|---|---|
| plan_id | Yes | | |
| execute_token | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
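
A hypothetical payload; where execute_token comes from is undocumented (presumably issued when the plan is approved):

```python
# Hypothetical arguments for execute_campaign_plan.
arguments = {
    "plan_id": "plan_123",       # required
    "execute_token": "tok_abc",  # required; provenance assumed, not documented
}
```
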
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it 'activates all entities across platforms in dependency order' (indicating a complex, sequential execution) and mentions '10 credits' (implying a cost or resource usage). However, it lacks details on permissions, rate limits, or what 'activates' entails (e.g., irreversible changes).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that is front-loaded with the core action ('Launch an approved campaign plan') and includes essential details (activation scope and cost). There is no wasted verbiage, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (activating entities across platforms with dependencies) and the presence of an output schema (which handles return values), the description is moderately complete. It covers the action and cost but lacks details on parameters, error conditions, or behavioral nuances like rollback options, which are important for such a significant operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It does not explain the parameters 'plan_id' or 'execute_token' at all, leaving their semantics unclear. The mention of 'approved campaign plan' hints at 'plan_id' but provides no format or validation details, and 'execute_token' is entirely undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Launch') and resource ('an approved campaign plan'), and distinguishes it from siblings by specifying it 'activates all entities across platforms in dependency order'. This is precise and differentiates from tools like 'create_campaign_plan' or 'enable_campaign'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'approved campaign plan' and '10 credits', suggesting prerequisites and cost. However, it does not explicitly state when to use this tool versus alternatives like 'enable_campaign' or 'publish_plan_document', leaving some ambiguity for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forecast_campaign (quality grade: B)
Forecast campaign KPIs (spend, CPA, ROAS, clicks, conversions) for 7-30 days with confidence intervals (2 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| metric | No | | spend |
| horizon | No | | |
| platform | No | | |
| campaign_id | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
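
A hypothetical payload, assuming 'horizon' is measured in days within the 7-30 range the description cites:

```python
# Hypothetical arguments for forecast_campaign.
arguments = {
    "metric": "roas",          # optional; schema default "spend"
    "horizon": 14,             # optional; assumed to be days in the 7-30 range
    "platform": "meta",        # optional
    "campaign_id": "cmp_123",  # optional
}
```
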
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the time horizon (7-30 days), confidence intervals, and credit cost (2 credits), which are useful behavioral traits. However, it doesn't mention permissions needed, rate limits, whether it's read-only or mutative, or error handling, leaving gaps for a forecasting tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads key information (forecast KPIs, time range, confidence intervals, credit cost) with zero waste. Every element earns its place, making it appropriately sized and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (forecasting with multiple parameters) and no annotations, the description is moderately complete: it covers purpose and some behavior but lacks parameter details and usage guidelines. The presence of an output schema reduces the need to explain return values, but overall gaps remain for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It only implies 'metric' and 'horizon' parameters by mentioning KPIs and 7-30 days, but doesn't explain 'platform' or 'campaign_id' at all. This leaves two of four parameters undocumented, failing to adequately supplement the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool forecasts campaign KPIs with specific metrics (spend, CPA, ROAS, clicks, conversions) and a time horizon (7-30 days), providing a specific verb ('forecast') and resource ('campaign KPIs'). However, it doesn't explicitly differentiate from sibling tools like 'optimize_budget' or 'measure_incrementality' which might involve similar campaign analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description mentions '2 credits' which hints at resource usage, but doesn't specify prerequisites, ideal scenarios, or exclusions. Given sibling tools like 'get_credit_balance' and various performance-pulling tools, clearer context would help.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ga4_list_conversions (quality grade: C)
List GA4 conversion events (free)
| Name | Required | Description | Default |
|---|---|---|---|
| account_id | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool is 'free', hinting at no cost implications, but lacks details on permissions, rate limits, pagination, or response format. For a list operation with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—a single phrase with no wasted words. It front-loads the core purpose ('List GA4 conversion events') and adds a useful note ('free') without unnecessary elaboration, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one optional parameter) and the presence of an output schema (which reduces the need to describe return values), the description is minimally adequate. However, with no annotations and low schema coverage, it lacks details on behavioral traits and parameter semantics, leaving room for improvement in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds no parameter information beyond what the input schema provides. With 0% schema description coverage and one parameter ('account_id'), the description fails to explain its purpose, format, or optionality. This is inadequate given the low schema coverage, as it doesn't compensate for the lack of structured documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('GA4 conversion events'), specifying what the tool does. It distinguishes itself from siblings like 'ga4_run_report' by focusing on conversion events rather than general reporting. However, it doesn't explicitly differentiate from 'ga4_list_properties', which lists a different resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'ga4_list_properties' for listing properties or 'ga4_run_report' for detailed analytics, nor does it specify prerequisites or contexts for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ga4_list_properties (quality grade: B)
List your Google Analytics 4 properties (free)
| Name | Required | Description | Default |
|---|---|---|---|
| account_id | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool lists properties but lacks behavioral details like whether it requires authentication, how results are paginated, what the output format is, or if there are rate limits. The mention '(free)' hints at scope but is vague.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It is front-loaded with the core purpose, making it easy to scan and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (list operation with one optional parameter) and the presence of an output schema (which handles return values), the description is minimally adequate. However, without annotations and with incomplete parameter documentation, it lacks depth for safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter ('account_id') with 0% description coverage. The tool description does not mention or explain this parameter, so it adds no semantic value beyond the schema. With one parameter and low coverage, the baseline is 3, as the description fails to compensate for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List') and resource ('Google Analytics 4 properties'), with the qualifier '(free)' indicating scope. However, it does not explicitly differentiate from sibling tools like 'ga4_list_conversions' or 'ga4_run_report', which are also GA4-related but serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as other GA4 tools (e.g., 'ga4_list_conversions' for conversions or 'ga4_run_report' for reports). There is no mention of prerequisites, context, or exclusions, leaving usage unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ga4_run_report (quality grade: C)
Run a Google Analytics 4 report (free)
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | | |
| metrics | No | | sessions,totalUsers,conversions |
| account_id | No | | |
| dimensions | No | | date |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
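
A hypothetical payload; the comma-separated metrics/dimensions format is inferred from the schema defaults, and the account_id format is assumed:

```python
# Hypothetical arguments for ga4_run_report.
arguments = {
    "days": 30,                                    # optional; assumed lookback window in days
    "metrics": "sessions,totalUsers,conversions",  # optional; schema default
    "dimensions": "date",                          # optional; schema default
    "account_id": "123456",                        # optional; format not documented
}
```
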
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states 'Run a Google Analytics 4 report (free)', which implies a read operation but doesn't clarify if it's read-only, what permissions are needed, rate limits, or output format. The '(free)' hint suggests no cost, but this is minimal context, leaving significant gaps in understanding the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with a single phrase, 'Run a Google Analytics 4 report (free)', which is front-loaded and wastes no words. Every part of it contributes to the core message, making it efficient in structure, though it lacks detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has an output schema (which reduces the need to describe return values) but no annotations and 0% schema coverage for parameters, the description is incomplete. It provides a basic purpose but misses usage guidelines, parameter details, and behavioral context, leaving it only minimally adequate for a report-running tool with structured output.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning parameters are undocumented in the schema. The description adds no information about parameters like 'days', 'metrics', 'account_id', or 'dimensions', failing to compensate for the coverage gap. It doesn't explain what these parameters mean or how to use them, making it hard for an agent to invoke the tool correctly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Run a Google Analytics 4 report (free)' states the action ('Run') and resource ('Google Analytics 4 report'), which is clear but vague. It doesn't specify what kind of report (e.g., standard vs. custom) or distinguish it from sibling tools like 'ga4_list_conversions' or 'ga4_list_properties', leaving ambiguity about its specific function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description mentions '(free)', which might imply a cost-free option, but it doesn't explain if this is for basic reports, when to choose it over other GA4 tools, or any prerequisites. Without explicit when/when-not instructions, usage is unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_attribution (quality grade: C)
Multi-touch attribution analysis using Markov chains (5 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| date_range | No | | last_30_days |
| conversion_type | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
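
A hypothetical payload; valid conversion_type values are undocumented:

```python
# Hypothetical arguments for get_attribution.
arguments = {
    "date_range": "last_30_days",   # optional; schema default
    "conversion_type": "purchase",  # optional; accepted values not documented
}
```
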
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions '5 credits', which indicates a cost/consumption behavior, but doesn't describe what the tool actually does beyond 'analysis': no information about what data it accesses, what permissions are needed, whether it's read-only or mutating, rate limits, or what the analysis output entails. For a tool with no annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise - a single sentence with zero wasted words. It's front-loaded with the core purpose and includes the credit cost as additional context. Every element earns its place in this minimal description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there's an output schema (which handles return values) and only 2 parameters, the description covers the basic purpose and cost. However, for an analytics tool with no annotations and 0% schema coverage, it should provide more context about what the analysis actually does, what data sources it uses, and how parameters affect results. The presence of an output schema helps but doesn't fully compensate for the lack of behavioral and parameter context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. The description provides no information about the two parameters (date_range, conversion_type) - it doesn't explain what date ranges are valid, what conversion types are supported, or how these parameters affect the analysis. With 0% schema coverage and no parameter guidance in the description, this represents a significant gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Multi-touch attribution analysis using Markov chains' specifies both the action (analysis) and method (Markov chains). However, it doesn't differentiate from sibling tools like 'measure_incrementality' or 'ga4_run_report' which might also involve analytics, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions '5 credits' which hints at cost, but doesn't explain when this specific attribution method is preferred over other analytics tools in the sibling list. No explicit when/when-not instructions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_connection_status (Grade: B)
Check ALL platform connections: ad platforms, analytics (GA4, PostHog), CRM (HubSpot, Attio), and more (free)
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
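Since the tool takes no inputs, an invocation sketch is trivial; only the tool name matters:

```python
# Hypothetical MCP tools/call payload for get_connection_status.
# No inputs are required, so "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_connection_status", "arguments": {}},
}
```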
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions 'Check ALL platform connections' which implies a read-only operation, but doesn't specify whether this requires authentication, what format the results come in, whether it's paginated, or what happens if connections are down. The '(free)' notation is ambiguous and adds little behavioral clarity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but could be more structured. The parenthetical '(free)' feels tacked on without clear meaning. While concise, it could be more front-loaded with the core purpose and better organized to explain what 'free' means in this context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, 100% schema coverage, and an output schema exists, the description is reasonably complete for a simple status-checking tool. However, it lacks important context about authentication requirements, result format, and how it differs from similar sibling tools, which would be helpful for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the baseline is 4. The description appropriately doesn't discuss parameters since none exist, though it could have mentioned that no inputs are required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Check ALL platform connections' with specific examples of platform types (ad platforms, analytics, CRM). It uses a specific verb ('Check') and resource ('platform connections'), but doesn't explicitly differentiate from sibling tools like 'list_connected_accounts' which might serve a similar purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'free' but doesn't explain what this means in context. There's no mention of prerequisites, timing considerations, or comparison to sibling tools like 'list_connected_accounts' that might overlap in functionality.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_credit_balance (Grade: B)
Check your credit balance and pricing (free)
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
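An invocation sketch, for completeness; no arguments are required:

```python
# Hypothetical MCP tools/call payload for get_credit_balance.
# Zero-parameter read operation; "arguments" is empty.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_credit_balance", "arguments": {}},
}
```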
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool checks balance and pricing, implying a read-only operation, but doesn't disclose behavioral traits such as authentication requirements, rate limits, data freshness, or what 'pricing' entails. The mention of 'free' adds some context about cost, but overall behavioral disclosure is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence: 'Check your credit balance and pricing (free)'. It is front-loaded with the core purpose and includes a useful qualifier without any wasted words. Every part of the sentence adds value, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, 100% schema coverage, and an output schema exists, the description is somewhat complete for a simple read operation. However, with no annotations and sibling tools including various financial and status checks, it lacks details on authentication, data scope, or integration context that could help the agent use it correctly in this environment.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameters need documentation. The description doesn't add parameter details, which is appropriate here. A baseline of 4 applies: with no parameters to document, the description rightly avoids introducing unnecessary information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Check your credit balance and pricing (free)'. It specifies the action ('Check'), the resource ('credit balance and pricing'), and includes a helpful qualifier ('free'). It doesn't explicitly differentiate itself from sibling tools, though the siblings (various campaign, audience, and analytics tools) do not appear to overlap directly with credit-balance checking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'free' which might imply a cost context, but doesn't specify prerequisites, timing, or contrast with other tools like 'get_connection_status' or financial-related siblings. This leaves the agent without explicit usage instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_job_status (Grade: A)
Check the status of an async job (e.g. audience sync). Returns job status, result on success, or error on failure. Free - no credits charged.
| Name | Required | Description | Default |
|---|---|---|---|
| job_id | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
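A sketch of a plausible call; the job ID format is undocumented, so the value below is purely a placeholder that would in practice come from whichever async tool started the job:

```python
# Hypothetical MCP tools/call payload for get_job_status.
# "job_abc123" is a made-up placeholder; the real ID would be
# returned by an earlier async operation such as an audience sync.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_job_status",
        "arguments": {"job_id": "job_abc123"},  # assumed format
    },
}
```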
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and adds valuable behavioral context: it discloses the return structure ('Returns job status, result on success, or error on failure') and explicitly states 'Free - no credits charged,' which is important cost/rate limit information. It doesn't mention authentication requirements or potential side effects, but provides more than minimal behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the purpose and return values, the second adds important cost information. Every phrase adds value with zero wasted words, and the most critical information (what the tool does) comes first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (status checking with 1 parameter), no annotations, but with an output schema present, the description provides good coverage: it explains the purpose, return structure, and cost implications. The output schema will handle return value details, so the description appropriately focuses on behavioral context. It could benefit from more parameter guidance but is largely complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, providing only parameter names without explanations. The description doesn't mention the 'job_id' parameter at all, offering no additional semantic information about what constitutes a valid job ID or where to obtain it. However, with only 1 parameter, the baseline is higher, and the tool's purpose inherently implies the parameter's role.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Check the status of an async job (e.g. audience sync).' It specifies the verb ('check') and resource ('async job'), and provides an example ('audience sync') for context. However, it doesn't explicitly differentiate from sibling tools, though most siblings appear to be different operations rather than status-checking alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'async job' and providing an example ('audience sync'), suggesting it should be used for monitoring previously initiated operations. However, it doesn't explicitly state when to use this tool versus alternatives or provide exclusions. The sibling list shows many tools that might create async jobs, but no guidance is given about which ones require status checking.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_plan_execution (Grade: C)
Get execution status and per-entity step results for a campaign plan launch (free).
| Name | Required | Description | Default |
|---|---|---|---|
| plan_id | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
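An invocation sketch under the same caveat: the plan_id format and source are undocumented, so the value is a placeholder presumably obtained from a plan tool such as 'execute_campaign_plan':

```python
# Hypothetical MCP tools/call payload for get_plan_execution.
# "plan_123" is a made-up placeholder for an undocumented ID format.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_plan_execution",
        "arguments": {"plan_id": "plan_123"},
    },
}
```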
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions retrieving 'execution status and per-entity step results,' which implies a read-only operation, but doesn't specify authentication requirements, rate limits, error conditions, or what 'per-entity step results' entails. The '(free)' hint suggests cost implications but lacks detail.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose. The '(free)' addition is brief but potentially useful. There's no wasted verbiage, though it could be more structured with separate usage notes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values) and a simple input schema with one parameter, the description is moderately complete. It covers the basic purpose but lacks details on behavioral aspects like authentication, error handling, or specific usage scenarios, which are important for a status-checking tool in a campaign context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter with 0% description coverage, so the description must compensate. It mentions 'campaign plan launch' which contextually relates to 'plan_id,' but doesn't explicitly explain the parameter's format, constraints, or where to obtain it. The description adds some meaning but doesn't fully document the single parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get execution status and per-entity step results for a campaign plan launch.' It specifies the verb ('Get'), resource ('execution status and per-entity step results'), and context ('campaign plan launch'). However, it doesn't explicitly differentiate from sibling tools like 'get_job_status' or 'execute_campaign_plan', which could have overlapping functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance. It mentions the context ('campaign plan launch') and includes '(free)' which might imply cost considerations, but it doesn't specify when to use this tool versus alternatives like 'get_job_status' or 'execute_campaign_plan'. No explicit when-not-to-use or prerequisite information is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
growth_discover (Grade: C)
Discover ICP prospects via Apollo (ad audience) or hiring companies via Sumble (outreach), plus listicle/podcast placements. (5 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | No | | Head of Growth |
| discover_type | No | | all |
| apollo_keywords | No | | |
| apollo_max_employees | No | | |
| apollo_min_employees | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
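With six undocumented parameters, any call is guesswork; a sketch using the schema defaults where they exist and assumed values elsewhere:

```python
# Hypothetical MCP tools/call payload for growth_discover.
# "query" and "discover_type" use the schema defaults; the rest
# are assumed values, since no types or ranges are documented.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "growth_discover",
        "arguments": {
            "query": "Head of Growth",    # schema default
            "discover_type": "all",       # schema default
            "limit": 25,                  # assumed integer
            "apollo_min_employees": 50,   # assumed integer
            "apollo_max_employees": 500,  # assumed integer
        },
    },
}
```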
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the credit cost ('5 credits'), which is useful context, but doesn't describe what the tool actually returns, whether it's a read-only operation, potential rate limits, or how results are formatted. For a 6-parameter tool with no annotation coverage, this is inadequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in a single sentence that covers the main purpose and credit cost. Every element earns its place, though it could be more front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 6 parameters with 0% schema description coverage, no annotations, and sibling tools that might overlap (like 'growth_enrich'), the description is insufficient. While an output schema exists, the description doesn't explain what kind of data is returned or how the discovery process works, leaving significant gaps for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate by explaining parameters. It mentions 'Apollo' and 'Sumble' which relate to some parameters, but doesn't explain what 'discover_type', 'limit', 'query', or the employee range parameters mean. The description adds minimal value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: discovering ICP prospects via Apollo or hiring companies via Sumble, plus listicle/podcast placements. It specifies the action ('discover') and resources (prospects/companies/placements), though it doesn't explicitly differentiate from sibling tools like 'growth_enrich' or 'list_audiences'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions two discovery methods (Apollo for ICP prospects, Sumble for hiring companies) and additional placements, but provides no guidance on when to use this tool versus alternatives like 'growth_enrich' or 'list_audiences'. It lacks explicit when/when-not instructions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
growth_enrich (Grade: B)
Enrich a domain with competitive intelligence — SpyFu PPC data, BuiltWith tech stack, Hunter emails, Firecrawl headline. (5 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
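A sketch of a call; as the assessment notes, the expected domain format (scheme, subdomains) is unspecified, so a bare apex domain is assumed:

```python
# Hypothetical MCP tools/call payload for growth_enrich.
# A bare domain is assumed; whether "https://" prefixes or
# subdomains are accepted is not documented.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "growth_enrich",
        "arguments": {"domain": "example.com"},
    },
}
```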
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the credit cost, which is useful context about resource consumption. However, it doesn't describe what the enrichment process entails (e.g., is it synchronous/asynchronous, what permissions are needed, whether it makes external API calls, what happens on failure, or the format of returned data). For a tool that presumably aggregates data from multiple external sources, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—a single sentence that packs the purpose, scope (competitive intelligence), specific data sources, and cost. Every word earns its place with zero waste. It's front-loaded with the core action ('Enrich a domain') followed by clarifying details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (aggregating data from multiple external sources), no annotations, and an output schema (which presumably handles return values), the description is minimally adequate. It covers the purpose and cost but lacks crucial behavioral details like execution mode, error handling, or data freshness. The presence of an output schema reduces the need to describe return values, but other gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaningful context about what the 'domain' parameter is used for: enrichment with competitive intelligence from specific data sources. With 0% schema description coverage (the schema only says 'Domain' with no details), this compensates well. However, it doesn't specify format requirements (e.g., should it include 'http://', is subdomain allowed) or validation rules, leaving some ambiguity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Enrich a domain with competitive intelligence' followed by specific data sources (SpyFu PPC, BuiltWith tech stack, Hunter emails, Firecrawl headline). This provides a specific verb ('enrich') and resource ('domain') with concrete examples of what enrichment entails. However, it doesn't explicitly differentiate from sibling tools like 'growth_discover' or 'similarweb_analyze_domain', which might have overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. While it mentions the credit cost '(5 credits)', it doesn't specify prerequisites, appropriate contexts, or when to choose this over similar tools like 'growth_discover' or 'similarweb_analyze_domain'. The agent must infer usage from the purpose alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
growth_run_pipeline (Grade: C)
Run the full growth pipeline — discover leads, enrich, generate outreach. (10 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | | discover |
| limit | No | | |
| query | No | | Head of Growth |
| channel | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
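A sketch relying on the two schema defaults and guessing the rest:

```python
# Hypothetical MCP tools/call payload for growth_run_pipeline.
# "mode" and "query" use schema defaults; "limit" and "channel"
# are assumed values with no documented types or options.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "growth_run_pipeline",
        "arguments": {
            "mode": "discover",         # schema default
            "query": "Head of Growth",  # schema default
            "limit": 10,                # assumed integer
            "channel": "email",         # assumed; valid channels unknown
        },
    },
}
```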
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but offers minimal behavioral insight. It mentions credit cost (10 credits), indicating resource consumption, but doesn't disclose other critical traits like execution time, side effects, permissions needed, rate limits, or what 'run the full growth pipeline' entails operationally. The description is too vague about the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (one sentence plus credit note) and front-loaded with the core purpose. However, the credit note in parentheses feels tacked on rather than integrated, and the description could be more structured to separate purpose from cost.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (multi-step pipeline with 4 parameters) and lack of annotations, the description is incomplete. It states the high-level purpose and credit cost but omits crucial details about behavior, parameters, and output. The presence of an output schema helps, but the description doesn't leverage it to explain what results to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 4 parameters have descriptions in the schema. The tool description adds no parameter semantics—it doesn't explain what 'mode', 'limit', 'query', or 'channel' mean, their allowed values, or how they affect pipeline execution. This leaves parameters completely undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('discover leads, enrich, generate outreach') and identifies the resource ('growth pipeline'). It distinguishes itself from siblings like 'growth_discover' and 'growth_enrich' by indicating that it runs the 'full' pipeline, but it doesn't explicitly contrast with other marketing tools like 'create_campaign_for_audience'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, timing considerations, or compare with sibling tools like 'growth_discover' or 'growth_enrich' for partial workflows. The credit cost ('10 credits') hints at resource consumption but doesn't constitute usage guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_audiences (Grade: C)
List existing audiences on an ad platform (1 credit)
| Name | Required | Description | Default |
|---|---|---|---|
| platform | Yes | ||
| account_id | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
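A sketch of a call; the assessment's own guesses ('meta', 'google_ads') are the only hints at valid platform values:

```python
# Hypothetical MCP tools/call payload for list_audiences.
# "meta" is an assumed platform identifier; no enum is published.
# account_id is omitted since it is optional and undocumented.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_audiences",
        "arguments": {"platform": "meta"},
    },
}
```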
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions a credit cost ('1 credit'), which is useful for understanding resource implications, but lacks details on permissions, rate limits, pagination, or what 'list' entails (e.g., format, completeness). This leaves significant gaps for a tool with mutation-related siblings.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It front-loads the core purpose and includes a practical detail (credit cost) without unnecessary elaboration, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 parameters, no nested objects) and the presence of an output schema (which handles return values), the description is minimally adequate. However, with no annotations and 0% schema coverage, it should provide more context on parameters and behavior to be fully helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter details. The description adds no information about the 'platform' or 'account_id' parameters, such as valid values (e.g., 'meta', 'google_ads') or how 'account_id' affects the listing. This fails to compensate for the lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List') and resource ('existing audiences on an ad platform'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'sync_audience' or 'build_lookalike_audience', which also involve audiences but with different operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'sync_audience' or 'list_campaigns', nor does it mention prerequisites or context for usage. The credit cost note hints at resource usage but doesn't inform decision-making between tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_available_scripts (Grade: B)
See all available PPC scripts (free)
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
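As with the other zero-parameter tools, a call sketch needs only the tool name:

```python
# Hypothetical MCP tools/call payload for list_available_scripts.
# No parameters exist, so "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "list_available_scripts", "arguments": {}},
}
```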
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states 'see all available PPC scripts (free)', implying a read-only operation that returns a list, but lacks details on pagination, format, authentication needs, rate limits, or what 'free' entails beyond the basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It's front-loaded with the core action and resource, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 0 parameters, 100% schema coverage, and an output schema exists, the description is minimally adequate. However, as a list tool with no annotations, it lacks behavioral context like pagination or filtering options, leaving gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so no parameter documentation is needed. The description doesn't add parameter details, which is appropriate, earning a baseline score of 4 for this dimension.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'see' and the resource 'available PPC scripts', specifying that they are 'free'. It is distinguished from siblings like list_audiences or list_campaigns by its focus on scripts, but it doesn't explicitly differentiate itself from other list tools beyond the resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, context, or exclusions, leaving the agent to infer usage based on the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_campaigns (Grade: C)
List campaigns for any ad platform (1 credit)
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | ||
| platform | Yes | ||
| account_id | No | ||
| account_name | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
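A sketch of a call; both values below are assumptions, since neither platform identifiers nor status values are documented:

```python
# Hypothetical MCP tools/call payload for list_campaigns.
# Both values are assumed: the schema publishes no platform
# enum and no list of valid status values.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_campaigns",
        "arguments": {
            "platform": "google_ads",  # assumed identifier
            "status": "active",        # assumed status value
        },
    },
}
```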
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It mentions '1 credit' indicating cost/rate limiting, which is valuable behavioral context. However, it doesn't disclose whether this is a read-only operation, what permissions are needed, pagination behavior, or what happens when parameters are omitted. The description adds minimal behavioral information beyond the basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise - a single sentence that states the core purpose and includes cost information. Every word earns its place with no redundancy or unnecessary elaboration. The structure is front-loaded with the main action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), 4 parameters with 0% schema coverage, and no annotations, the description is minimally adequate. It states what the tool does and includes cost information, but fails to explain parameters or provide sufficient behavioral context for a listing tool that works across multiple platforms.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 4 parameters have descriptions in the schema. The tool description provides no information about what 'platform', 'status', 'account_id', or 'account_name' mean, their expected formats, or how they affect the listing. The description fails to compensate for the complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and resource 'campaigns', specifying scope as 'for any ad platform'. It distinguishes from sibling tools like 'tiktok_ads_get_campaign' (platform-specific) and 'create_campaign_for_audience' (creation vs listing). However, it doesn't explicitly differentiate from 'list_audiences' or other list tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like platform-specific campaign tools (e.g., 'tiktok_ads_get_campaign') or other listing tools. It mentions '1 credit' which hints at cost but doesn't provide usage context or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_connected_accounts (Grade: B)
See ALL connected accounts: ad platforms, analytics (GA4, PostHog), CRM (HubSpot, Attio), and more (free)
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
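A call sketch; again, the arguments object is simply empty:

```python
# Hypothetical MCP tools/call payload for list_connected_accounts.
# Zero-parameter listing operation.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "list_connected_accounts", "arguments": {}},
}
```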
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states 'See ALL connected accounts' which implies a read-only operation, but doesn't mention any behavioral traits like permissions needed, rate limits, pagination, or what 'ALL' means in practice. The '(free)' note hints at cost but doesn't clarify behavioral implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise - a single sentence that efficiently conveys the core purpose with examples. It's front-loaded with the main action ('See ALL connected accounts') followed by clarifying examples. No wasted words, though the '(free)' parenthetical could be more clearly integrated.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a zero-parameter tool with an output schema (which handles return values), the description provides adequate context for a simple listing operation. However, with no annotations and multiple sibling tools that might overlap (like 'get_connection_status'), more guidance on differentiation would improve completeness. The description covers the what but not the when or how.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema already fully documents the parameter situation. The description appropriately doesn't waste space discussing parameters that don't exist. A baseline of 4 is appropriate for zero-parameter tools where the schema handles all parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'See ALL connected accounts' with specific examples of account types (ad platforms, analytics, CRM). It uses a specific verb ('see') and resource ('connected accounts'), though it doesn't explicitly differentiate from sibling tools like 'get_connection_status' which might have overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions '(free)' which might imply cost considerations, but doesn't specify when this tool is appropriate compared to siblings like 'get_connection_status' or other listing tools. No explicit when/when-not instructions or alternative recommendations are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_skills (Grade: B)
List all available Synter skills with descriptions (free)
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
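A minimal call sketch for this zero-parameter tool:

```python
# Hypothetical MCP tools/call payload for list_skills.
# No inputs; the result presumably feeds slugs into load_skill.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "list_skills", "arguments": {}},
}
```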
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden yet only states that this is a list operation. It doesn't disclose behavioral traits like pagination, rate limits, authentication needs, or whether it returns structured data. The '(free)' hint adds minor context but is insufficient, even for a mutation-free tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose. The parenthetical '(free)' could be integrated more smoothly, but overall it's appropriately sized with zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with 0 parameters and an output schema, the description is minimally adequate. However, without annotations and with many complex siblings, it could better address behavioral context like response format or integration with 'load_skill'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, earning a high baseline score for not adding unnecessary information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List all available Synter skills') and resource ('Synter skills'), distinguishing it from siblings by focusing on skill enumeration rather than audience, campaign, or document operations. It adds the qualifier 'with descriptions (free)' which further clarifies scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives is provided. The description doesn't mention prerequisites, timing, or how it relates to sibling tools like 'load_skill' or 'list_available_scripts', leaving usage context implied at best.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
load_skill (Grade: B)
Load detailed instructions for a specific skill (free)
| Name | Required | Description | Default |
|---|---|---|---|
| skill_slug | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
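A sketch of a call; as the assessment suggests, the slug would presumably come from a prior list_skills call:

```python
# Hypothetical MCP tools/call payload for load_skill.
# "example-skill" is a made-up slug; real slugs would presumably
# be discovered via a prior list_skills call.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "load_skill",
        "arguments": {"skill_slug": "example-skill"},
    },
}
```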
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool is 'free', which hints at no cost, but doesn't cover other critical aspects: whether it's read-only or mutative, authentication requirements, rate limits, error handling, or what 'detailed instructions' entail. For a tool with no annotations, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—a single, clear sentence with no wasted words. It's front-loaded with the core purpose and includes a helpful qualifier ('free'). Every part of the sentence earns its place, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter) and the presence of an output schema (which should cover return values), the description is minimally adequate. However, with no annotations and incomplete parameter semantics, it lacks details on usage context, behavioral traits, and prerequisites. It meets a basic threshold but leaves room for improvement in guiding an agent effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter with 0% description coverage, so the schema provides no semantic context. The description doesn't add any parameter-specific information beyond implying a 'skill_slug' is needed. It doesn't explain what a skill slug is, its format, or where to obtain it. Baseline is 3 due to the single parameter, but the description fails to compensate for the schema's lack of detail.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Load detailed instructions for a specific skill (free)'. It specifies the verb ('load'), resource ('detailed instructions for a specific skill'), and includes a cost qualifier ('free'), which is helpful. However, it doesn't explicitly differentiate from sibling tools like 'list_skills', which might list skills without loading detailed instructions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a skill slug from 'list_skills'), exclusions, or compare it to other tools like 'execute' or 'list_skills'. This lack of context makes it unclear when an agent should select this tool over others.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
measure_incrementality (Grade: C)
Measure incremental ad impact via geo-lift or synthetic control (10 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| method | No | | geo_lift |
| platform | Yes | | |
| test_regions | No | | |
| control_regions | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
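A sketch of a call; even the parameter types are unstated here, so the region lists below are assumptions:

```python
# Hypothetical MCP tools/call payload for measure_incrementality.
# "method" uses the schema default; the platform identifier and
# the region values (and even their list type) are assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "measure_incrementality",
        "arguments": {
            "platform": "google_ads",              # assumed identifier
            "method": "geo_lift",                  # schema default
            "test_regions": ["US-CA", "US-NY"],    # assumed list of regions
            "control_regions": ["US-TX", "US-FL"], # assumed list of regions
        },
    },
}
```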
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the cost ('10 credits'), which is useful context, but fails to describe other critical behaviors: whether this is a read or write operation, expected runtime, error handling, or output format. For a tool with 4 parameters and no annotations, this leaves significant gaps in understanding how the tool behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, stating the core purpose in a single sentence followed by cost information. There's no wasted text, and it efficiently communicates key details without unnecessary elaboration. It could be slightly improved by integrating parameter hints, but it is already well structured for its brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (4 parameters, 1 required, no annotations) and the presence of an output schema, the description is incomplete. It lacks parameter explanations, usage context, and behavioral details beyond cost. While the output schema might cover return values, the description doesn't provide enough guidance for the agent to understand when and how to use this tool effectively, especially compared to siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 4 parameters have descriptions in the schema. The tool description doesn't mention any parameters or their meanings, such as what 'platform', 'method', 'test_regions', or 'control_regions' represent. This forces the agent to guess based on parameter names alone, which is inadequate for proper tool invocation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Measure incremental ad impact via geo-lift or synthetic control.' It specifies the action (measure), resource (incremental ad impact), and methods (geo-lift or synthetic control). However, it doesn't explicitly differentiate from sibling tools like 'get_attribution' or 'forecast_campaign,' which might also measure ad performance, so it doesn't reach a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance: it mentions the cost ('10 credits') but doesn't specify when to use this tool versus alternatives like 'get_attribution' or 'forecast_campaign.' There's no context on prerequisites, timing, or exclusions, leaving the agent with little direction on appropriate usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
optimize_budget (Grade: C)
Cross-channel budget allocation using diminishing returns modeling (5 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| target | No | | conversions |
| constraints | No | | |
| total_budget | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
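A sketch of a call; the budget's unit (currency? daily vs. total?) and the shape of 'constraints' are undocumented, so 'constraints' is omitted and the amount is an assumption:

```python
# Hypothetical MCP tools/call payload for optimize_budget.
# "target" uses the schema default; the budget value is an
# assumed number with an unspecified unit, and "constraints"
# is omitted because its expected shape is undocumented.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "optimize_budget",
        "arguments": {
            "total_budget": 10000,    # assumed number; unit unspecified
            "target": "conversions",  # schema default
        },
    },
}
```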
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions '5 credits' which hints at a cost/usage limitation, but doesn't explain what this means operationally. It doesn't disclose whether this is a read-only or mutation operation, what permissions are needed, or what side effects might occur.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with just one phrase that communicates the core functionality. The credit cost information is efficiently appended. However, it could be more front-loaded with clearer purpose before the credit mention.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter tool with 0% schema coverage and no annotations, the description is inadequate. While an output schema exists, the description doesn't provide enough context about what the tool actually does, how to use it properly, or what to expect from the optimization process. The credit mention adds some context but leaves major gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter documentation. The description mentions 'diminishing returns modeling' but doesn't explain how this relates to the three parameters (target, constraints, total_budget). It doesn't clarify what 'target' refers to, what format 'constraints' should take, or how 'total_budget' interacts with the modeling.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs 'cross-channel budget allocation using diminishing returns modeling', which is a specific verb+resource combination. However, it doesn't differentiate from sibling tools like 'update_campaign_budget' or 'forecast_campaign', which might have overlapping functionality in budget management.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There's no mention of prerequisites, constraints, or comparison to sibling tools like 'update_campaign_budget' or 'forecast_campaign' that might handle budget-related tasks differently.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pause_campaign (Grade: C)
Pause a campaign on any ad platform (5 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| platform | Yes | ||
| account_id | No | ||
| campaign_id | Yes | ||
| account_name | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
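Since none of the four parameters carry schema descriptions, the sketch below shows one plausible argument payload. All values are assumptions: accepted 'platform' keys, the campaign ID format, and the role of the account fields are inferred from the parameter names and from sibling tools' descriptions, not documented anywhere.

```python
# Hypothetical pause_campaign arguments; every value here is illustrative.
pause_args = {
    "platform": "google",           # assumed platform key; accepted values are undocumented
    "campaign_id": "1234567890",    # assumed to be the ad platform's campaign ID, as a string
    "account_name": "Acme - Main",  # optional; presumably disambiguates among connected accounts
}
```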
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the credit cost (5 credits), which is useful operational context, but fails to describe what 'pause' actually means behaviorally (does it stop spending immediately? preserve settings? require reactivation?), what permissions are needed, or what the response looks like.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: a single sentence that states the core action and includes the credit cost. There's zero wasted verbiage, and the information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 4 parameters, 0% schema coverage, no annotations, but with an output schema, the description is insufficient. It covers the basic action and cost but lacks critical information about parameter meanings, behavioral effects, and usage context that would help an agent invoke it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 4 parameters, the description provides no information about what parameters are needed or their meaning. It doesn't mention platform, campaign_id, account_id, or account_name at all, leaving the agent to infer everything from the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Pause') and resource ('a campaign on any ad platform'), making the purpose immediately understandable. However, it doesn't differentiate this tool from potential alternatives like 'disable_campaign' or explain how it differs from sibling tools like 'enable_campaign' beyond the obvious action reversal.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, prerequisites, or constraints beyond the credit cost. It doesn't mention when pausing is appropriate versus other campaign management actions available in the sibling tool list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
publish_landing_pageBInspect
Publish a landing page draft, making it live at syntermedia.ai/lp/{slug} (free)
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
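The only documented parameter semantics come from the URL template in the description itself, so a call sketch is short. The slug value below is hypothetical, and any format constraints (length, allowed characters) are unknown.

```python
# Hypothetical publish_landing_page arguments. Per the description, the slug
# becomes the path segment, so this draft would go live at
# syntermedia.ai/lp/spring-sale (assuming such a draft exists).
publish_args = {"slug": "spring-sale"}
```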
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden of behavioral disclosure. It states the tool publishes a draft and makes it live, implying a mutation operation, but lacks details on permissions, reversibility, rate limits, or what happens to existing pages. The mention of 'free' hints at no cost, but this is vague without context on pricing or limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action and outcome, with no wasted words. It directly communicates the tool's function and key details (URL structure and cost) in a compact form.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values) and a simple input schema with one parameter, the description covers the basic purpose and URL outcome adequately. However, as a mutation tool with no annotations, it lacks behavioral context like error handling or side effects, making it minimally complete but with gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaning by explaining that the 'slug' parameter determines the URL path ('syntermedia.ai/lp/{slug}'), which clarifies its purpose beyond the schema's basic type and title. However, it doesn't detail slug format constraints or examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Publish') and resource ('landing page draft'), specifying the outcome ('making it live at syntermedia.ai/lp/{slug}') and noting it's free. It distinguishes from siblings like 'create_landing_page' and 'update_landing_page_html' by focusing on publication rather than creation or editing, though it doesn't explicitly name these alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives is provided. The description implies it's for publishing drafts, but it doesn't specify prerequisites (e.g., requiring a draft created via 'create_landing_page'), exclusions, or comparisons to other tools like 'publish_plan_document'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
publish_plan_documentBInspect
Publish a campaign plan for review, generating a shareable URL (2 credits).
| Name | Required | Description | Default |
|---|---|---|---|
| plan_id | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
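A call sketch is equally short here. The plan_id value is hypothetical; where such IDs come from (presumably a plan-creation tool) is not documented.

```python
# Hypothetical publish_plan_document arguments; the ID format is assumed.
plan_args = {"plan_id": "plan_8f3a2c"}
```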
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses key behavioral traits: the tool is a mutation (publish), generates a shareable URL, and has a cost (2 credits). However, it lacks details on permissions, whether the action is reversible, rate limits, or what 'publish' entails (e.g., makes plan read-only). The credit cost is a valuable addition beyond basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action ('publish a campaign plan') and adds two critical details (purpose 'for review' and outcome 'shareable URL' with cost). Every word earns its place with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (a mutation with cost), no annotations, and an output schema (which handles return values), the description is minimally complete. It covers the what and outcome but lacks context on prerequisites, side effects, or error conditions. The credit cost hint is a positive addition.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It doesn't explicitly mention the 'plan_id' parameter, but the phrase 'publish a campaign plan' strongly implies a plan identifier is needed. For a single required parameter with intuitive semantics, this is adequate, though not explicit about format or sourcing.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('publish') and resource ('campaign plan'), and mentions the outcome ('generating a shareable URL'). It doesn't explicitly distinguish from sibling tools like 'publish_landing_page' or 'create_document', but the focus on 'campaign plan' provides reasonable differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance: it implies usage when a campaign plan needs to be published for review, but offers no explicit when/when-not instructions, prerequisites (e.g., plan must be in draft state), or alternatives (e.g., vs. 'create_document' or 'publish_landing_page'). The credit cost hint is useful but insufficient for comprehensive guidelines.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pull_amazon_dsp_performanceCInspect
Pull Amazon DSP campaign performance data (1 credit)
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | ||
| account_id | No | ||
| account_name | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions '1 credit', which implies a cost or resource usage and adds some context. However, it doesn't describe what 'pull' entails (e.g., is it a one-time fetch, real-time data, cached results?), authentication needs, rate limits, data freshness, or what happens if parameters are omitted. For a data retrieval tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: a single sentence that states the core purpose and includes cost information. Every word earns its place with no redundancy or fluff. It's front-loaded with the essential action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists, the description doesn't need to explain return values. However, for a data retrieval tool with 3 parameters (all undocumented in the schema), no annotations, and siblings that include similar performance-pulling tools, the description is incomplete. It covers the basic 'what' but lacks parameter guidance, behavioral context, and differentiation from alternatives that would help an agent use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter documentation. The description doesn't mention any parameters at all, failing to compensate for the schema gap. It doesn't explain what 'days', 'account_id', or 'account_name' mean, their relationships, or how they affect the data pull. With 3 undocumented parameters, the description adds no semantic value beyond the tool name.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('pull') and resource ('Amazon DSP campaign performance data'), making the purpose specific and understandable. It distinguishes from siblings like pull_google_ads_performance by specifying the platform (Amazon DSP). However, it doesn't explicitly differentiate from other data retrieval tools in terms of scope or granularity beyond the platform mention.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions '1 credit' which hints at a cost implication, but doesn't explain when this tool is appropriate compared to other performance-pulling tools (e.g., pull_google_ads_performance) or general data tools like execute. No prerequisites, exclusions, or comparative context are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pull_google_ads_performanceCInspect
Get Google Ads campaign metrics (1 credit). Use account_name to specify which account when multiple are connected.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | ||
| level | No | campaigns | |
| account_id | No | ||
| account_name | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
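With all four parameters undocumented, the sketch below shows one plausible call; the same shape appears to apply to the sibling pull_*_performance tools (Amazon DSP, LinkedIn, Meta, Microsoft, Reddit, TikTok, X), minus 'level' where that parameter is absent. Every value is an assumption inferred from the parameter names and the table's defaults.

```python
# Hypothetical pull_google_ads_performance arguments; values are illustrative.
pull_args = {
    "days": 30,                     # assumed lookback window in days
    "level": "campaigns",           # the documented default; other accepted values are unknown
    "account_name": "Acme Search",  # per the description, selects among connected accounts
}
```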
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions '1 credit', which hints at a cost/rate-limiting aspect, but doesn't describe what 'credit' means, authentication needs, whether this is a read-only operation, what happens on failure, or the format/scope of returned metrics. For a data retrieval tool with zero annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief with two sentences that each serve a purpose: stating the core function and providing a usage tip. It's front-loaded with the main action. However, the parenthetical '(1 credit)' could be better integrated into the sentence structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters with 0% schema coverage and no annotations, but with an output schema, the description is moderately complete. The output schema reduces the need to describe return values, but the description should still explain parameter purposes and behavioral context more thoroughly for a data retrieval tool with multiple configuration options.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It only mentions 'account_name' for disambiguation, ignoring 'days', 'level', and 'account_id'. The description adds minimal value beyond the schema, failing to explain what metrics are retrieved, what 'level' means, or how 'days' affects the data range.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and resource 'Google Ads campaign metrics', making the purpose specific and understandable. It distinguishes from some siblings like 'pull_meta_ads_performance' by specifying the platform, but doesn't differentiate from other performance tools like 'pull_amazon_dsp_performance' beyond the platform name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance about using 'account_name to specify which account when multiple are connected', but offers no explicit when-to-use criteria, no when-not-to-use warnings, and no alternatives to this tool versus other performance tools or campaign listing tools. It lacks context about prerequisites or timing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pull_linkedin_ads_performanceBInspect
Get LinkedIn Ads metrics (1 credit). Use account_name to specify which account when multiple are connected.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | ||
| account_id | No | ||
| account_name | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions '1 credit', which hints at a cost/rate-limiting system, but doesn't describe what metrics are returned, time granularity, format, error conditions, or authentication needs. For a data retrieval tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (two sentences) and front-loaded with the core purpose. However, the second sentence could be more efficiently integrated, and some essential information is missing that would justify additional length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), the description's main gaps are behavioral transparency and parameter semantics. For a data retrieval tool with 3 parameters and no annotations, the description should provide more context about what metrics are retrieved, time ranges, and multi-account handling to be complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It only mentions 'account_name' parameter usage for multi-account scenarios, ignoring 'days' and 'account_id' parameters entirely. The description adds minimal value beyond what the schema's property names already imply, leaving most parameter semantics undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get LinkedIn Ads metrics' specifies both the verb (get) and resource (LinkedIn Ads metrics). It distinguishes from most siblings that focus on other platforms (e.g., pull_google_ads_performance) but doesn't explicitly differentiate from other LinkedIn-related tools if they existed.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context: 'Use account_name to specify which account when multiple are connected' implies this tool is for multi-account scenarios. However, it doesn't specify when to use this vs. other performance tools (like pull_meta_ads_performance) or mention prerequisites like authentication requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pull_meta_ads_performanceBInspect
Get Meta (Facebook/Instagram) Ads metrics (1 credit). Use account_name to specify which account when multiple are connected.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | ||
| level | No | campaign | |
| account_id | No | ||
| account_name | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the credit cost ('1 credit'), which is useful operational context. However, it doesn't describe what the tool returns (though an output schema exists), potential rate limits, error conditions, or authentication requirements. The description is minimal and lacks rich behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and to the point with two sentences. It front-loads the core purpose and includes operational details efficiently. However, the second sentence could be more integrated, and there's room for slightly more detail without sacrificing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values) and no annotations, the description is moderately complete. It covers the basic purpose and one parameter hint but misses behavioral context like error handling or performance characteristics. For a data retrieval tool with four parameters, it should provide more guidance on parameter usage and constraints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description only mentions the 'account_name' parameter to specify accounts when multiple are connected, adding minimal semantics. It doesn't explain the purpose of 'days', 'level', or 'account_id', leaving three of four parameters undocumented in both schema and description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('Meta (Facebook/Instagram) Ads metrics'), making the purpose understandable. It specifies the platform scope (Facebook and Instagram) and mentions the credit cost. However, it doesn't explicitly differentiate from sibling tools like 'pull_google_ads_performance' or 'pull_tiktok_ads_performance' beyond the platform name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context by mentioning 'Use account_name to specify which account when multiple are connected,' which gives guidance on when to use the account_name parameter. However, it doesn't explain when to use this tool versus alternatives like other ad platform performance tools, nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pull_microsoft_ads_performanceBInspect
Get Microsoft Ads metrics (1 credit). Use account_name to specify which account when multiple are connected.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | ||
| account_id | No | ||
| account_name | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions '1 credit', which hints at a cost/rate-limiting aspect, but doesn't describe what metrics are returned, time ranges, data freshness, error conditions, or authentication requirements. For a data retrieval tool with no annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (two sentences) and front-loaded with the core purpose. The credit cost is efficiently noted parenthetically. However, the second sentence could be more integrated with the first for better flow.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), 3 parameters with 0% schema coverage, and no annotations, the description is minimally adequate. It states the purpose and hints at account selection, but doesn't fully address parameter meanings, behavioral traits, or differentiation from similar tools, leaving room for improvement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions 'account_name' parameter usage but doesn't explain the 'days' parameter (default 7) or 'account_id' parameter, nor their relationships. The description adds minimal value beyond what's already evident from parameter names in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get Microsoft Ads metrics' specifies both the action (get) and resource (Microsoft Ads metrics). It distinguishes from some siblings like 'pull_google_ads_performance' by specifying the platform, though it doesn't explicitly differentiate from all similar 'pull_*_performance' tools beyond the platform name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context: 'Use account_name to specify which account when multiple are connected' implies this tool is for selecting among connected accounts. However, it doesn't specify when to use this tool versus alternatives like 'list_connected_accounts' or other 'pull_*_performance' tools, nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pull_reddit_ads_performanceBInspect
Get Reddit Ads metrics (1 credit). Use account_name to specify which account when multiple are connected.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | ||
| account_id | No | ||
| account_name | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions '1 credit', which hints at a cost/rate-limit aspect and adds some value. However, it doesn't describe what 'metrics' include, time granularity, whether it's a read-only operation, error conditions, or response format, leaving significant behavioral gaps for a data retrieval tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences that each serve a clear purpose: stating the tool's function and providing a key usage tip. There's no wasted verbiage, and information is front-loaded appropriately for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), the description's main gaps are parameter documentation and behavioral context. While the purpose is clear and conciseness is excellent, the lack of parameter explanations and incomplete behavioral transparency for a data retrieval tool with no annotations makes this description minimally adequate but with clear room for improvement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It only mentions 'account_name' parameter usage for multi-account scenarios, ignoring 'days' and 'account_id' parameters entirely. This leaves two of three parameters undocumented, failing to add sufficient meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get Reddit Ads metrics' specifies both the verb ('Get') and resource ('Reddit Ads metrics'), making it unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'pull_google_ads_performance' or 'pull_meta_ads_performance' beyond mentioning Reddit specifically, which is inherent to the tool name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage guidance: 'Use account_name to specify which account when multiple are connected' implies this tool should be used for Reddit Ads performance metrics and helps with account selection. However, it doesn't specify when to use this versus alternatives (e.g., other ad platform pull tools) or any prerequisites like authentication needs, leaving usage context partially implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pull_tiktok_ads_performanceBInspect
Get TikTok Ads metrics (1 credit). Use account_name to specify which account when multiple are connected.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | ||
| account_id | No | ||
| account_name | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It usefully mentions '1 credit', which hints at a cost/rate limit. However, it doesn't describe what 'metrics' are returned, whether this is a read-only operation, potential side effects, error conditions, or performance characteristics. For a data retrieval tool with zero annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with just two sentences that each serve a clear purpose: the first states the core functionality and cost, the second provides specific parameter guidance. There's zero wasted verbiage, and the information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no annotations, but with an output schema), the description is minimally adequate. The output schema existence means return values don't need explanation in the description, but the description should still cover more about what 'metrics' means, time range defaults, and account selection logic to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter documentation. The description only mentions 'account_name' to specify accounts when multiple are connected, ignoring 'days' and 'account_id' parameters entirely. It adds minimal semantic value beyond what's inferable from parameter names, failing to compensate for the schema's lack of descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get TikTok Ads metrics' specifies both the verb ('Get') and resource ('TikTok Ads metrics'), making it immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'tiktok_ads_get_insights' or 'pull_meta_ads_performance', which would be needed for a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context with 'Use account_name to specify which account when multiple are connected,' which implies when to use the parameter. However, it lacks explicit guidance on when to choose this tool over alternatives like 'tiktok_ads_get_insights' or other platform-specific performance tools, and doesn't mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pull_x_ads_performanceBInspect
Get X (Twitter) Ads metrics (1 credit). Use account_name to specify which account when multiple are connected.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | ||
| account_id | No | ||
| account_name | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions '1 credit', which hints at a cost/usage limitation: useful context not found in the schema. However, it doesn't describe what 'metrics' are returned, time granularity, whether this is a read-only operation, authentication requirements, rate limits, or error conditions. For a data retrieval tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise: two sentences that each serve a purpose. The first states the core function and cost implication; the second provides specific parameter guidance. No wasted words, though it could be slightly more structured by mentioning all parameters upfront.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which should document return values), the description doesn't need to explain return values. However, for a 3-parameter tool with 0% schema description coverage and no annotations, the description should do more to explain parameter purposes, usage constraints, and behavioral characteristics. The mention of '1 credit' is helpful but insufficient for full contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It only mentions 'account_name' parameter and its purpose ('specify which account when multiple are connected'), ignoring 'days' and 'account_id'. The description adds minimal value beyond what's implied by parameter names. With 3 parameters and 0% schema coverage, this is inadequate parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose, 'Get X (Twitter) Ads metrics', with a specific verb ('Get') and resource ('X Ads metrics'). It distinguishes from some siblings like 'pull_google_ads_performance' by specifying the platform (X/Twitter). However, it doesn't explicitly differentiate from other ad performance tools beyond the platform name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage guidance: 'Use account_name to specify which account when multiple are connected.' This implies when to use the parameter but doesn't offer broader context about when to choose this tool versus alternatives like other ad platform performance tools or general analytics tools. No explicit when-not-to-use guidance or comparison with siblings is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
run_gaql_queryBInspect
Execute a Google Ads Query Language (GAQL) query (2 credits). Use account_name to specify which account when multiple are connected.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | ||
| account_id | No | ||
| account_name | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
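The 'query' parameter presumably takes a standard GAQL string, so a sketch can lean on public GAQL syntax; whether the server validates or restricts the query is undocumented, and the account value is hypothetical.

```python
# Hypothetical run_gaql_query arguments. The query itself is ordinary GAQL
# (a SELECT-only reporting language); the account_name is illustrative.
gaql_args = {
    "query": """
        SELECT campaign.id, campaign.name, metrics.clicks, metrics.cost_micros
        FROM campaign
        WHERE segments.date DURING LAST_7_DAYS
        ORDER BY metrics.clicks DESC
    """,
    "account_name": "Acme Search",  # optional; disambiguates when several accounts are connected
}
```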
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions '2 credits,' which hints at a cost or rate limit, adding some value. However, it doesn't describe other critical behaviors such as whether this is a read-only or mutation operation, error handling, response format, or performance implications, leaving significant gaps for an agent to understand how to use it safely and effectively.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences that are front-loaded and waste no words. The first sentence states the core purpose and includes a key behavioral note (credits), while the second provides specific parameter guidance, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of executing a GAQL query (which involves query syntax and account targeting), the description is minimal. It lacks details on query format, error cases, or output structure. However, the presence of an output schema mitigates some need to explain return values, keeping it at a baseline adequacy but with clear gaps in guidance for proper usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds meaning for 'account_name' by explaining its use 'to specify which account when multiple are connected,' but doesn't cover 'query' (the required parameter) or 'account_id' at all. With 3 parameters and only partial coverage, the description fails to adequately clarify parameter roles beyond what the bare schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Execute') and resource ('Google Ads Query Language (GAQL) query'), making the purpose specific and understandable. However, it doesn't explicitly distinguish this tool from sibling tools like 'execute' or 'ga4_run_report', which might also execute queries but for different systems.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context by mentioning 'Use account_name to specify which account when multiple are connected,' which implies when to use the optional parameter. However, it lacks explicit guidance on when to choose this tool over alternatives like 'execute' or other query-related siblings, and doesn't mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_spend_alertBInspect
Set a weekly ad spend alert — notifies via email, Slack, SMS, and/or WhatsApp when total spend exceeds threshold (free)
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes |
| phone | No | ||
| notify | No | email,slack,sms,whatsapp | |
| platforms | No | google,meta,reddit | |
| threshold | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
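Given the comma-separated defaults shown for 'notify' and 'platforms', a plausible payload looks like the sketch below. It is assumption-heavy: the required contact field is reconstructed as an email address, the list formats mirror the defaults, and the threshold's currency and period semantics are inferred from the word 'weekly' in the description.

```python
# Hypothetical set_spend_alert arguments; formats and semantics are inferred,
# not documented.
alert_args = {
    "email": "ops@acme.com",     # required contact field (name reconstructed from the channel list)
    "phone": "+15551234567",     # presumably used for the SMS/WhatsApp channels
    "notify": "email,slack",     # assumed comma-separated subset of the default channel list
    "platforms": "google,meta",  # assumed comma-separated platform keys, mirroring the default
    "threshold": 5000,           # assumed weekly spend ceiling; currency is undocumented
}
```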
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions notification channels and that the service is free, but doesn't cover important behavioral aspects: whether this creates a persistent alert or a one-time notification, what permissions are required, whether thresholds are per-platform or aggregate, how frequently checks occur, or what the output contains. The description is insufficient for a mutation tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and includes essential details about notification channels and cost. Every element earns its place with no redundant information or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a mutation tool with 5 parameters, 0% schema description coverage, no annotations, but with an output schema (which reduces need to describe returns), the description is moderately complete. It covers the core purpose and some parameter context but lacks behavioral details like persistence, permissions, or platform-specific behavior that would be important for proper tool selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides only parameter names and types without meaning. The description adds some semantic context by mentioning 'weekly ad spend alert' and notification channels, which helps interpret the 'threshold' and 'notify' parameters. However, it doesn't explain the 'platforms' parameter's purpose or format, or clarify whether 'phone' is for SMS/WhatsApp specifically. The description partially compensates but leaves gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Set a weekly ad spend alert'), the resource involved (ad spend monitoring), and distinguishes from siblings by focusing on alert configuration rather than campaign management, audience building, or performance reporting. It specifies notification channels and the free nature of the service.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, whether it's for new alerts only or can modify existing ones, or how it relates to sibling tools like budget optimization or performance monitoring tools. Usage context is implied but not explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
setup_custom_domainAInspect
Assign a custom domain (e.g. go.acme.com) to a published landing page. Requires Growth plan or higher. Free — no credits charged.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | ||
| domain | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
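A call sketch needs only the two identifiers, but note the assumptions: the slug presumably references an already-published landing page, and whatever DNS setup the domain needs (for example a CNAME record) is not described anywhere.

```python
# Hypothetical setup_custom_domain arguments; both values are illustrative.
domain_args = {
    "slug": "spring-sale",    # assumed to be the slug of a published landing page
    "domain": "go.acme.com",  # custom domain, matching the description's own example
}
```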
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses behavioral traits like plan requirements ('Requires Growth plan or higher') and cost implications ('Free — no credits charged'), which are valuable. However, it lacks details on permissions, rate limits, or what happens if the domain is already assigned, leaving gaps in behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with three concise sentences that each add value: stating the purpose, plan requirements, and cost. There is no wasted text, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a mutation operation with no annotations), the description provides some context like plan requirements and cost, but lacks details on parameters, error conditions, or prerequisites (e.g., the landing page must be published). The presence of an output schema helps, but the description should do more to compensate for missing behavioral and parameter information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, with parameters 'slug' and 'domain' only titled without explanations. The description does not add any meaning beyond the schema, failing to clarify what 'slug' refers to (e.g., landing page identifier) or the format for 'domain' (e.g., must be a valid domain name). This is inadequate given the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Assign a custom domain') and the target resource ('to a published landing page'), with an example ('e.g. go.acme.com') for clarity. It effectively distinguishes this tool from siblings like 'publish_landing_page' or 'verify_custom_domain' by focusing on domain assignment rather than creation or verification.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit context for when to use this tool ('Requires Growth plan or higher') and notes it's 'Free — no credits charged,' which helps in decision-making. However, it does not explicitly state when not to use it or name alternatives (e.g., 'verify_custom_domain' as a sibling), missing full differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
setup_google_ads_trial_funnelAInspect
Set up a complete Google Ads trial acquisition funnel: branded Search campaign (maximize clicks, builds conversion history), PMax campaign (maximize conversions), and Display retargeting campaign — all pointing to a Synter-hosted landing page. Requires an active Google Ads connection. (15 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| trial_days | No | ||
| month_label | No | ||
| business_name | Yes | ||
| landing_page_url | Yes | ||
| total_daily_budget | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
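The parameter roles below are inferred purely from their names; none are documented. The budget's currency, how it is split across the three campaigns, and how 'trial_days' and 'month_label' surface in the generated assets are all assumptions.

```python
# Hypothetical setup_google_ads_trial_funnel arguments; values are illustrative.
funnel_args = {
    "business_name": "Acme Analytics",
    "landing_page_url": "https://syntermedia.ai/lp/acme-trial",  # assumed; the description implies a Synter-hosted page
    "trial_days": 14,            # presumably the trial length promoted in ad copy
    "month_label": "June 2025",  # presumably used in campaign naming
    "total_daily_budget": 150,   # presumably split across Search, PMax, and Display; currency undocumented
}
```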
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it creates multiple campaigns, points them to a Synter-hosted landing page, and mentions a credit cost ('15 credits'). However, it lacks details on permissions needed, rate limits, error handling, or what 'complete' entails operationally, leaving gaps for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and efficiently lists campaign types and requirements in a single, dense sentence. However, the parenthetical credit note could be integrated more smoothly, and some redundancy exists (e.g., 'Synter-hosted' might be implied).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (multi-campaign setup, 5 parameters, no annotations) and the presence of an output schema, the description is moderately complete. It covers the high-level goal and prerequisites but lacks details on parameter roles, campaign configurations, or success criteria, leaving the agent to rely heavily on the schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions 'landing page' and 'business' context but does not explain any of the 5 parameters (e.g., what 'trial_days' or 'total_daily_budget' control). This adds minimal value beyond the schema's titles, resulting in a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Set up a complete Google Ads trial acquisition funnel') and details the exact components (branded Search, PMax, and Display retargeting campaigns). It distinguishes itself from siblings by focusing on a comprehensive multi-campaign setup for trial acquisition, unlike more granular campaign tools in the list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use it ('Set up a complete Google Ads trial acquisition funnel') and includes a prerequisite ('Requires an active Google Ads connection'). However, it does not specify when not to use it or name alternative tools for partial setups, which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
similarweb_analyze_domain (Grade C)
SimilarWeb traffic overview for a single domain — visits, engagement, traffic sources, top countries (200 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | ||
| months | No | ||
| top_geos_limit | No | ||
| include_geography | No | ||
| include_similar_sites | No | ||
| include_traffic_sources | No |
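A guessed `arguments` payload follows, with every format and default assumed from the parameter names, since the schema documents none of them.

```python
# Guessed arguments for similarweb_analyze_domain; none of these semantics
# are documented, so formats and defaults are assumptions.
arguments = {
    "domain": "example.com",         # required; bare domain without scheme assumed
    "months": 3,                     # optional; assumed lookback window in months
    "top_geos_limit": 5,             # optional; assumed cap on countries returned
    "include_geography": True,       # optional toggles; server-side defaults unknown
    "include_similar_sites": False,
    "include_traffic_sources": True,
}
```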
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions '200 credits', which signals a cost or rate-limit dimension and is valuable context. However, it doesn't describe other important behavioral traits like whether this is a read-only operation, what permissions might be needed, error conditions, or response format. For a tool with no annotations, this leaves significant gaps in understanding how the tool behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: a single sentence that efficiently communicates the core purpose and cost implication. Every word earns its place: 'SimilarWeb traffic overview' establishes the service and function, 'for a single domain' specifies scope, '— visits, engagement, traffic sources, top countries' enumerates key outputs, and '(200 credits)' adds important cost context. No wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there's an output schema (which should document return values), the description doesn't need to explain outputs. However, for a tool with 6 parameters (0% schema coverage), no annotations, and complex functionality (traffic analysis), the description is incomplete. It covers the basic purpose and cost but misses parameter explanations, behavioral context, and usage guidance. The presence of an output schema raises the baseline, but significant gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, meaning none of the 6 parameters have descriptions in the schema. The tool description doesn't mention any parameters at all - it doesn't explain what 'domain' should be formatted as, what 'months' represents, or what the boolean flags control. With 0% schema coverage and 6 parameters, the description fails to compensate for the lack of parameter documentation, leaving all parameters semantically unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'SimilarWeb traffic overview for a single domain — visits, engagement, traffic sources, top countries'. It specifies the verb ('analyze' implied by 'overview') and resource ('domain'), and distinguishes it from sibling tools like 'similarweb_compare_domains' by focusing on a single domain. However, it doesn't explicitly differentiate from all siblings, so it's not a perfect 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions '200 credits' which hints at cost implications, but doesn't specify when this tool is appropriate compared to other SimilarWeb tools (e.g., 'similarweb_compare_domains' or 'similarweb_keyword_analysis') or other analytics tools in the server. No explicit when/when-not statements or alternative recommendations are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
similarweb_compare_domains (Grade A)
Side-by-side SimilarWeb comparison for 2-5 domains — visits, engagement, traffic sources (350 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| months | No | ||
| domains | Yes | ||
| include_traffic_sources | No |
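A guessed payload under the same caveats; only the 2-5 domain range is stated by the description.

```python
# Guessed arguments for similarweb_compare_domains. The description caps the
# list at 2-5 domains; behavior outside that range is undocumented.
arguments = {
    "domains": ["example.com", "example.org", "example.net"],  # required; 2-5 entries
    "months": 6,                      # optional; assumed lookback window in months
    "include_traffic_sources": True,  # optional toggle
}
```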
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions cost ('350 credits'), which is useful context, but lacks details on rate limits, authentication needs, error handling, or what the comparison output entails. For a tool with no annotations, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and includes essential details like domain range and cost. Every word earns its place with no redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no annotations, but with an output schema), the description covers the basic purpose and cost. Since an output schema exists, it need not explain return values, but it could better address parameter usage and behavioral aspects to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter details. The description adds some meaning by implying 'domains' parameter usage (2-5 domains) and hinting at 'include_traffic_sources' through 'traffic sources' in the metrics. However, it does not explain 'months' or provide full parameter semantics, resulting in a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Side-by-side SimilarWeb comparison'), resource ('2-5 domains'), and key metrics ('visits, engagement, traffic sources'), distinguishing it from sibling tools like 'similarweb_analyze_domain' which likely analyzes a single domain. It provides a complete picture of what the tool does.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly suggests usage for comparing multiple domains (2-5) and mentions cost ('350 credits'), providing some context. However, it does not explicitly state when to use this tool versus alternatives like 'similarweb_analyze_domain' or other analysis tools, nor does it outline exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
similarweb_keyword_analysis (Grade C)
SimilarWeb keyword intelligence — top organic/paid keywords for a domain, or per-keyword search volume/CPC (250 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | | site_keywords |
| paid | No | ||
| limit | No | ||
| domain | No | ||
| months | No | ||
| keywords | No |
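Two guessed payload shapes follow, matching the dual functionality the description hints at. Only the `site_keywords` default is confirmed by the schema; the mode value for per-keyword lookups is not documented anywhere, so the second sketch simply omits it.

```python
# Guessed payloads for similarweb_keyword_analysis. Only the "site_keywords"
# default is confirmed; every other value here is an assumption.
site_mode = {
    "mode": "site_keywords",  # schema default
    "domain": "example.com",  # presumably required in this mode
    "paid": True,             # assumed toggle between organic and paid keywords
    "limit": 25,              # assumed cap on returned keywords
}

# The mode value for per-keyword volume/CPC lookups is undocumented, so it is
# omitted here on the guess that supplying "keywords" switches the behavior.
keyword_mode = {
    "keywords": ["ad automation", "mcp server"],
    "months": 3,
}
```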
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions '250 credits' (implying a cost/rate limit), which is useful, but doesn't cover other critical aspects like required permissions, whether it's read-only or mutative, error handling, or response format. For a tool with 6 parameters and no annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose. However, it could be more structured by separating the two main use cases (domain keywords vs. per-keyword analysis) for better clarity. There are no wasted words, though the sentence is slightly dense.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, dual functionality), no annotations, and an output schema (which alleviates need to describe returns), the description is minimally adequate. It covers the high-level purpose and cost, but misses critical context like parameter interactions, error conditions, and usage boundaries relative to siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It only vaguely references 'domain' and 'keywords' without explaining their roles, relationships, or how they interact with the 'mode' parameter. The description fails to clarify the tool's dual functionality (site keywords vs. per-keyword analysis), leaving parameter semantics largely unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'SimilarWeb keyword intelligence — top organic/paid keywords for a domain, or per-keyword search volume/CPC'. It names the capability ('keyword intelligence') and the resources involved (domain keywords or per-keyword metrics), but doesn't explicitly differentiate from sibling tools like 'similarweb_analyze_domain' or 'similarweb_compare_domains', which likely serve different analytical functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions '250 credits' as a cost implication, but doesn't specify scenarios, prerequisites, or exclusions. Given multiple SimilarWeb-related sibling tools, this lack of differentiation leaves the agent without clear usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sync_audience (Grade B)
Upload audience data (emails, companies) to ad platforms. Supports batch mode for Clay.com row-by-row workflows (staging is free, upload costs 10 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| emails | No | ||
| platform | Yes | ||
| batch_key | No | ||
| account_id | No | ||
| batch_action | No | ||
| audience_name | No | ||
| audience_type | No | ||
| company_names | No | ||
| company_domains | No |
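A guessed single-shot payload; which identifier lists may be combined, the legal `batch_action` values, and the `batch_key` format are all undocumented, so the batch fields are left out of this sketch.

```python
# Guessed non-batch payload for sync_audience; batch_key/batch_action are
# omitted because their formats and legal values are undocumented.
arguments = {
    "platform": "linkedin",               # required; accepted values unknown
    "audience_name": "Q3 trial signups",  # assumed label for the uploaded audience
    "audience_type": "contact_list",      # assumed; valid types undocumented
    "emails": ["a@example.com", "b@example.com"],
}
```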
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses cost implications ('upload costs 10 credits') and staging behavior ('staging is free'), which are valuable behavioral traits. However, it lacks details on permissions required, rate limits, error handling, or what the upload entails (e.g., overwrite vs. append). The mention of costs adds context but doesn't fully compensate for the absence of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences: the first states the core purpose, and the second adds contextual details about batch mode and costs. It's front-loaded with the main action. There's no wasted text, though it could be slightly more structured (e.g., separating cost info).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (9 parameters, no annotations, but has output schema), the description is moderately complete. It covers the tool's purpose and some behavioral aspects (costs, staging), but lacks guidance on parameter usage, error conditions, or integration specifics. The output schema existence means return values are documented elsewhere, but the description doesn't address the mutation nature or potential side effects adequately for a tool with many parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions 'emails, companies' and 'batch mode', which loosely map to parameters like 'emails', 'company_names', and 'batch_key', but doesn't explain the purpose or format of critical parameters like 'platform', 'account_id', 'audience_name', or 'audience_type'. With 9 parameters and no schema descriptions, the description adds minimal semantic value beyond hinting at data types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Upload') and resource ('audience data to ad platforms'), specifying what data types (emails, companies) are involved. It distinguishes from siblings like 'list_audiences' or 'build_lookalike_audience' by focusing on data upload rather than listing or creating derived audiences. However, it doesn't explicitly differentiate from other upload-related tools that might exist in broader contexts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'batch mode for Clay.com row-by-row workflows', which suggests when batch processing is appropriate. It doesn't provide explicit alternatives or exclusions (e.g., when to use vs. 'list_audiences' or other sibling tools), nor does it mention prerequisites like authentication or platform compatibility beyond the implied Clay.com integration.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
synter_onboarding_start (Grade A)
Start onboarding - create account and get API key (no auth required)
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | | |
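The scraped row above had lost its parameter name; it is reconstructed as `email` from the quality notes below, so the entire sketch reduces to one assumed field.

```python
# Assumed sole argument for synter_onboarding_start; the name "email" is
# inferred from the quality notes, not stated by the schema itself.
arguments = {"email": "founder@example.com"}
```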
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It usefully states 'no auth required' which is important context for authentication needs. However, it doesn't describe what 'create account' entails (what data is stored, confirmation process), what the API key format is, rate limits, or error conditions. The description adds some value but leaves significant behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded with the core action. Every word earns its place: 'Start onboarding' defines the action, 'create account and get API key' specifies outcomes, and '(no auth required)' provides crucial context. No wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (account creation with API key generation), no annotations, and the presence of an output schema, the description is reasonably complete. The output schema existence means return values don't need explanation in the description. However, for an account creation tool with security implications, more detail about the creation process and API key handling would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter documentation. The description mentions 'email' implicitly through context but doesn't explicitly explain the email parameter's purpose, format requirements, or validation rules. Since there's only one parameter, the baseline is 4, but the description doesn't fully compensate for the lack of schema documentation, warranting a 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Start onboarding', 'create account', 'get API key') and identifies the resource (account/API key). It distinguishes from siblings by focusing on onboarding initiation rather than campaign management or analytics. However, it doesn't explicitly differentiate from 'synter_onboarding_status' which appears to be a related sibling tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Start onboarding') and includes an important constraint ('no auth required'), which helps the agent understand this is an initial setup tool. However, it doesn't explicitly state when NOT to use it or mention alternatives like 'synter_onboarding_status' for checking onboarding status.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
synter_onboarding_status (Grade B)
Check onboarding progress - poll until ready (no auth required)
| Name | Required | Description | Default |
|---|---|---|---|
| session_token | Yes |
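Since the description says to poll until ready but documents no cadence or completion signal, here is a minimal polling sketch; the `ready` flag and the `call_tool` helper are both assumptions.

```python
# Minimal polling sketch around synter_onboarding_status. The "ready" flag,
# the interval, and the call_tool helper are assumptions; the tool declares
# only an opaque "result" output and documents no polling cadence.
import time

def wait_for_onboarding(call_tool, session_token, interval_s=5.0, timeout_s=300.0):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = call_tool("synter_onboarding_status",
                           {"session_token": session_token})
        if result.get("ready"):  # assumed completion flag in the result payload
            return result
        time.sleep(interval_s)
    raise TimeoutError("onboarding did not report ready before the timeout")
```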
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses two important behavioral traits: the polling nature ('poll until ready') and authentication requirements ('no auth required'). However, it doesn't describe rate limits, what 'ready' state entails, error conditions, or response format. The polling guidance is valuable but incomplete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded with the core purpose. Every word earns its place: 'Check onboarding progress' establishes purpose, 'poll until ready' provides usage guidance, and '(no auth required)' adds important behavioral context. No wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 1 parameter with 0% schema coverage and no annotations, the description is incomplete. It provides good high-level guidance about polling behavior and authentication, but lacks details about the session_token parameter, what constitutes 'ready' state, error handling, and polling intervals. The existence of an output schema helps, but the description should do more to compensate for the poor schema documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions 'no auth required' which relates to the session_token parameter, but doesn't explain what a session_token is, how to obtain it, its format, or its relationship to onboarding. The single required parameter remains largely undocumented beyond the schema's basic type information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Check onboarding progress' specifies the verb (check) and resource (onboarding progress). It distinguishes from siblings like 'synter_onboarding_start' which initiates onboarding rather than checking status. However, it doesn't specify what 'ready' means or what system's onboarding is being checked.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context with 'poll until ready', suggesting this tool should be used repeatedly during onboarding monitoring. It distinguishes from 'synter_onboarding_start' as a follow-up tool. However, it doesn't explicitly state when to stop polling, what alternatives exist for checking status, or any prerequisites beyond the session_token parameter.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
test_creatives (Grade C)
Start or check a multi-armed bandit creative experiment using Thompson sampling (3 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| action | No | | status |
| platform | Yes | | |
| campaign_id | Yes | | |
| reward_metric | No | | ctr |
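Guessed payloads for both halves of the start-or-check behavior; only the two defaults are confirmed by the schema, and 'start' as the other action value is an unverified inference from the description.

```python
# Guessed payloads for test_creatives. The schema confirms only the defaults
# (action="status", reward_metric="ctr"); "start" is inferred, not confirmed.
start_experiment = {
    "action": "start",           # assumption inferred from "Start or check"
    "platform": "meta",          # required; accepted platform values unknown
    "campaign_id": "1234567890",
    "reward_metric": "ctr",      # schema default; alternative metrics undocumented
}

check_experiment = {
    "platform": "meta",          # omitting action falls back to the "status" default
    "campaign_id": "1234567890",
}
```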
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions '3 credits' as a cost, which is useful context, but doesn't describe other behavioral traits such as required permissions, whether it's read-only or destructive, rate limits, or what happens when starting vs. checking an experiment. For a tool with no annotations, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with a single sentence that front-loads the core purpose ('Start or check a multi-armed bandit creative experiment') and adds key details ('using Thompson sampling (3 credits)'). There's no wasted text, though it could be slightly more structured by separating purpose from cost.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (experiment tool with 4 parameters), no annotations, 0% schema coverage, but with an output schema present, the description is incomplete. It covers the basic purpose and cost but lacks parameter explanations, behavioral context, and usage guidelines. The output schema may help with return values, but the description doesn't provide enough context for effective tool selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds no meaning beyond what the input schema provides. With 0% schema description coverage (no parameter descriptions in the schema) and 4 parameters (action, platform, campaign_id, reward_metric), the description doesn't explain what these parameters do, their formats, or how they relate to starting/checking experiments. This fails to compensate for the lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Start or check a multi-armed bandit creative experiment using Thompson sampling.' It specifies the action (start/check), the method (Thompson sampling), and the resource (creative experiment). However, it doesn't explicitly differentiate from sibling tools, which include various campaign-related functions but no other experiment tools, so the distinction is implicit rather than explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions '3 credits' as a cost, but doesn't explain prerequisites, timing, or how it relates to sibling tools like 'create_campaign_for_audience' or 'optimize_budget'. Without explicit when/when-not instructions, the agent must infer usage from the purpose alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_ads_get_adgroup (Grade B)
Get full TikTok ad group configuration (1 credit). Returns targeting, budget, bid, pixel, optimization goal, languages, locations, age groups, identity.
| Name | Required | Description | Default |
|---|---|---|---|
| account_id | No | ||
| adgroup_id | Yes | ||
| account_name | No |
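A guessed payload; the ID formats and the interplay between the two optional account fields are assumptions.

```python
# Guessed arguments for tiktok_ads_get_adgroup. Whether account_id or
# account_name alone scopes the lookup, and their formats, are undocumented.
arguments = {
    "adgroup_id": "1790000000000000",  # required; numeric-string TikTok ID assumed
    "account_id": "7000000000000000",  # optional; presumed advertiser-account scope
}
```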
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the operation is a read ('Get'), mentions a cost implication ('1 credit'), and lists the return fields, which helps anticipate behavior. However, it lacks details on error conditions, rate limits, authentication needs, or whether it's idempotent—significant gaps for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action and key details. The parenthetical credit note is integrated smoothly. No wasted words, though it could be slightly more structured (e.g., separating cost from return details).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which covers return values), the description's listing of return fields is redundant but not harmful. However, with no annotations, 3 parameters (1 required), and 0% schema coverage, the description should do more to explain inputs and behavioral context. It's minimally adequate but leaves gaps in usage and parameter understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds no information about the parameters (adgroup_id, account_id, account_name)—not explaining their purpose, format, or relationships. The mention of 'Returns targeting, budget...' hints at output but doesn't clarify inputs. This fails to address the schema's documentation gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and resource 'TikTok ad group configuration' with specific details about what's returned (targeting, budget, etc.). It distinguishes from siblings like 'tiktok_ads_list_adgroups' (list vs. get details) and 'tiktok_ads_update_adgroup' (read vs. write), though not explicitly named. The credit cost mention is additional context but doesn't detract from the core purpose clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The description implies it's for retrieving detailed configuration of a specific ad group, but doesn't mention when to choose this over 'tiktok_ads_list_adgroups' for overviews or 'tiktok_ads_get_insights' for performance data. The credit cost hint suggests resource considerations but no clear usage rules.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_ads_get_campaign (Grade B)
Get full TikTok campaign configuration by ID (1 credit). Returns name, objective, budget, status, timestamps.
| Name | Required | Description | Default |
|---|---|---|---|
| account_id | No | ||
| campaign_id | Yes | ||
| account_name | No |
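The call shape presumably mirrors the ad-group getter with a campaign ID swapped in; the optional account fields carry the same caveats.

```python
# Guessed arguments for tiktok_ads_get_campaign; same caveats as the
# ad-group variant above.
arguments = {
    "campaign_id": "1800000000000000",  # required; numeric-string ID assumed
    "account_name": "Acme TikTok",      # optional; assumed human-readable scope
}
```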
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the credit cost ('1 credit'), which is valuable behavioral context not in the schema. It also describes the return content ('Returns name, objective, budget, status, timestamps'), giving the agent expectations about output. However, it doesn't mention error conditions, rate limits, authentication requirements, or whether this is a read-only operation (though 'Get' implies it).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise: one sentence stating the action and resource, followed by a second sentence detailing the return values. Every word earns its place with no redundancy. The structure is front-loaded with the core purpose first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which should document return values), the description doesn't need to fully explain outputs. However, with no annotations and 0% schema description coverage for 3 parameters, the description should do more to compensate. It provides good purpose and some behavioral context (credit cost) but leaves parameter semantics largely unexplained. For a tool with 3 parameters (one required, two optional), this creates significant gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description only mentions 'by ID', which corresponds to the 'campaign_id' parameter. It doesn't explain the optional 'account_id' and 'account_name' parameters at all, leaving their purpose and relationship undocumented. The description adds minimal value beyond what's implied by parameter names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get full TikTok campaign configuration by ID' with specific verb ('Get') and resource ('TikTok campaign configuration'). It distinguishes from siblings like 'list_campaigns' (which lists multiple) and 'tiktok_ads_get_insights' (which gets performance data). However, it doesn't explicitly contrast with 'tiktok_ads_get_adgroup' which has a similar structure but different resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying 'by ID' and mentioning the required 'campaign_id' parameter, suggesting this is for retrieving details of a specific known campaign. However, it doesn't provide explicit guidance on when to use this versus alternatives like 'list_campaigns' (for browsing) or 'tiktok_ads_get_insights' (for performance metrics). The credit cost mention ('1 credit') provides some usage context but not comparative guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_ads_get_insights (Grade B)
Get TikTok Ads performance report with video metrics (1 credit). Returns spend, impressions, clicks, CTR, conversions, CPA, ROAS, and TikTok-specific video engagement metrics per campaign/adgroup/ad.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | ||
| account_id | No | ||
| data_level | No | | AUCTION_CAMPAIGN |
| account_name | No |
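A guessed payload; only the AUCTION_CAMPAIGN default is confirmed by the schema. AUCTION_ADGROUP and AUCTION_AD would be plausible sibling values by TikTok Marketing API convention, but nothing here confirms the server accepts them.

```python
# Guessed arguments for tiktok_ads_get_insights; the data_level default is
# the only value the schema confirms.
arguments = {
    "days": 30,                        # assumed lookback window in days
    "data_level": "AUCTION_CAMPAIGN",  # schema default; presumably per-campaign rows
    "account_id": "7000000000000000",  # optional; presumed advertiser scope
}
```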
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does mention the credit cost (1 credit), which is useful operational context. It also lists the return metrics (spend, impressions, etc.), giving insight into output behavior. However, it doesn't cover important aspects like rate limits, authentication requirements, error conditions, or whether this is a read-only vs. mutating operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise: a single sentence that packs in the core purpose, credit cost, and return metrics. It's front-loaded with the main action. However, it could be slightly more structured by separating the credit cost into its own note for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there's an output schema (which should document return values), the description doesn't need to explain the response format in detail. However, with 4 parameters at 0% schema coverage and no annotations, the description should do more to explain parameter usage and behavioral constraints. The credit cost mention helps, but more context about the tool's operation would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, meaning none of the 4 parameters have descriptions in the schema. The tool description provides no information about any parameters - it doesn't explain what 'days', 'account_id', 'data_level', or 'account_name' mean or how they affect the report. This leaves significant gaps in understanding how to use the tool effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get TikTok Ads performance report with video metrics.' It specifies the resource (TikTok Ads) and verb (get insights/report), and mentions the inclusion of TikTok-specific video engagement metrics. However, it doesn't explicitly differentiate from sibling tools like 'pull_tiktok_ads_performance' or 'tiktok_ads_get_adgroup/campaign', leaving some ambiguity about when to choose this tool over those alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context for usage, or compare it to sibling tools like 'pull_tiktok_ads_performance' or other TikTok-specific tools. The only implicit usage hint is the credit cost, but no explicit when/when-not instructions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_ads_list_adgroups (Grade B)
List all TikTok ad groups, optionally filtered by campaign (1 credit). Returns ID, name, status, budget, bid, optimization goal.
| Name | Required | Description | Default |
|---|---|---|---|
| account_id | No | ||
| campaign_id | No | ||
| account_name | No |
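A guessed payload; omitting `campaign_id` presumably lists ad groups account-wide, though that fallback is not documented.

```python
# Guessed arguments for tiktok_ads_list_adgroups.
arguments = {
    "campaign_id": "1800000000000000",  # optional filter; assumed to scope the list
    "account_id": "7000000000000000",   # optional; presumed advertiser scope
}
```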
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the credit cost (1 credit) and return fields (ID, name, status, budget, bid, optimization goal), which adds useful operational and output context. However, it doesn't cover other behavioral aspects like pagination, rate limits, authentication needs, or error handling, leaving gaps for a tool that likely interacts with an external API.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the core action, optional filtering, cost, and return fields. It is front-loaded with the main purpose and avoids unnecessary words, making it easy to parse quickly without sacrificing essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (listing ad groups with filtering), no annotations, 0% schema coverage, but with an output schema present, the description is partially complete. It covers the purpose, cost, and return fields, but lacks details on parameter usage, behavioral traits like pagination or errors, and differentiation from sibling tools. The output schema likely handles return value documentation, but other gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, with three parameters (account_id, campaign_id, account_name) that are only named without explanation in the schema. The description mentions optional filtering by campaign, which hints at the 'campaign_id' parameter, but doesn't explain the purpose or usage of 'account_id' or 'account_name', nor does it clarify if these are required or how they interact. This leaves significant gaps in parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List all TikTok ad groups') and resource ('TikTok ad groups'), making the purpose immediately understandable. It specifies optional filtering by campaign, which adds useful detail. However, it doesn't distinguish this tool from sibling TikTok tools like 'tiktok_ads_list_ads' or 'tiktok_ads_get_adgroup' in terms of scope or use case.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing ad groups with optional campaign filtering, but provides no explicit guidance on when to use this tool versus alternatives like 'tiktok_ads_get_adgroup' (for single ad group details) or 'tiktok_ads_list_ads' (for ads within ad groups). It mentions the credit cost (1 credit), which offers some operational context, but lacks clear when/when-not scenarios or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_ads_list_ads (Grade B)
List all TikTok ads, optionally filtered by ad group (1 credit). Returns ad_id, ad_name, adgroup_id, campaign_id, status, ad_format.
| Name | Required | Description | Default |
|---|---|---|---|
| account_id | No | ||
| adgroup_id | No | ||
| account_name | No |
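Structurally the same guess as the ad-group listing, with the filter moved one level down.

```python
# Guessed arguments for tiktok_ads_list_ads; omit adgroup_id to (presumably)
# list every ad in the account.
arguments = {
    "adgroup_id": "1790000000000000",  # optional filter
}
```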
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses cost (1 credit) and return fields, which is helpful. However, it lacks important behavioral details like pagination, rate limits, authentication requirements, error conditions, or whether this is a read-only operation (though 'List' implies it).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is an extremely concise single sentence that front-loads the core purpose ('List all TikTok ads'), then adds filtering context, cost information, and return fields. Every element earns its place with zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a listing tool with 3 parameters (0% schema coverage) and no annotations, the description is incomplete. While it mentions return fields (helpful since output schema exists) and cost, it lacks parameter explanations, behavioral constraints, and differentiation from sibling tools. The existence of an output schema reduces but doesn't eliminate completeness needs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions 'optionally filtered by ad group' which hints at adgroup_id parameter, but doesn't explain account_id or account_name parameters at all. No guidance on parameter relationships, format, or how filtering works across multiple parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('TikTok ads'), and specifies optional filtering by ad group. It distinguishes from siblings like 'tiktok_ads_get_adgroup' by focusing on listing multiple ads rather than retrieving a single one. However, it doesn't explicitly differentiate from 'tiktok_ads_list_adgroups' which lists ad groups rather than ads.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through 'optionally filtered by ad group' and mentions credit cost (1 credit), but provides no explicit guidance on when to use this tool versus alternatives like 'tiktok_ads_get_insights' or 'pull_tiktok_ads_performance'. No exclusions or prerequisites are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tiktok_ads_update_adgroup (Grade C)
Update a TikTok ad group: status, budget, locations, age targeting, bid, languages, audiences, optimization goal, pixel tracking (5 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| bid | No | ||
| budget | No | ||
| status | No | ||
| age_max | No | ||
| age_min | No | ||
| bid_type | No | ||
| pixel_id | No | ||
| languages | No | ||
| locations | No | ||
| account_id | No | ||
| adgroup_id | Yes | ||
| account_name | No | ||
| audience_ids | No | ||
| conversion_event | No | ||
| optimization_goal | No | ||
| excluded_audience_ids | No |
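Because nothing states whether omitted fields are preserved or reset, a cautious sketch sends only the fields being changed; the enum values and units below are assumptions.

```python
# Conservative guessed payload for tiktok_ads_update_adgroup: only the fields
# to change are sent, on the unconfirmed assumption that omitted parameters
# are left untouched rather than cleared.
arguments = {
    "adgroup_id": "1790000000000000",  # required target
    "status": "PAUSED",                # assumed enum; legal values undocumented
    "budget": 75.0,                    # assumed daily amount in account currency
    "age_min": 25,                     # assumed to pair with age_max as a range
    "age_max": 44,
}
```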
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions a credit cost ('5 credits'), which is useful operational context, but lacks critical details: it doesn't clarify that this is a mutation tool (implied by 'Update' but not explicit), specify required permissions, describe error handling, or explain what happens to unspecified fields (partial vs. full updates). For a 16-parameter mutation tool, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently packed sentence that front-loads the core action and key parameters. The credit cost is appended concisely. There's no wasted verbiage, though it could be slightly more structured (e.g., separating functional and operational details).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given high complexity (16 parameters, mutation operation), no annotations, and 0% schema coverage, the description is inadequate. It lacks behavioral context (permissions, side effects), doesn't fully explain parameters, and though an output schema exists, the description doesn't hint at response structure or success/failure indicators. For a tool of this scope, more comprehensive guidance is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists nine parameter areas (status, budget, locations, age targeting, bid, languages, audiences, optimization goal, pixel tracking), which helps interpret some of the 16 parameters. However, it doesn't cover all parameters (e.g., account_id, bid_type, conversion_event), provide format details (e.g., string formats for locations), or explain relationships between parameters (e.g., age_min/age_max). The value added is real but partial.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update a TikTok ad group') and lists specific fields that can be modified (status, budget, locations, etc.), providing a concrete understanding of what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'tiktok_ads_get_adgroup' or 'update_campaign_budget', which would be needed for a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing ad group ID), compare it to similar tools like 'update_campaign_budget', or indicate scenarios where it's appropriate versus not. The credit cost mention is operational but doesn't inform usage decisions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_campaign_budget (Grade C)
Update a campaign's daily budget (5 credits)
| Name | Required | Description | Default |
|---|---|---|---|
| platform | Yes | ||
| account_id | No | ||
| campaign_id | Yes | ||
| account_name | No | ||
| daily_budget | Yes |
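A guessed payload; the units and currency handling for `daily_budget` are undocumented, so a plain number is assumed.

```python
# Guessed payload for update_campaign_budget.
arguments = {
    "platform": "google",        # required; accepted platform values unknown
    "campaign_id": "987654321",  # required
    "daily_budget": 120.0,       # required; assumed account-currency amount per day
}
```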
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions a cost ('5 credits'), which adds some context about resource usage, but fails to describe critical traits such as required permissions, whether the update is reversible, rate limits, or error handling. For a mutation tool, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—a single sentence that directly states the tool's purpose and includes cost information. It's front-loaded with the core action and resource, with no wasted words, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a budget update tool with 5 parameters, no annotations, and 0% schema coverage, the description is incomplete. While an output schema exists (which reduces the need to explain return values), the lack of parameter explanations, behavioral details, and usage guidelines makes it insufficient for safe and effective use by an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds no meaning beyond what the input schema provides. With 0% schema description coverage, none of the 5 parameters (platform, account_id, campaign_id, account_name, daily_budget) are explained in the schema, and the description doesn't compensate by clarifying their purposes, formats, or relationships. This is inadequate for a tool with multiple parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update') and resource ('a campaign's daily budget'), making the purpose specific and understandable. However, it doesn't differentiate this tool from potential siblings like 'optimize_budget' or 'pause_campaign', which might also involve budget adjustments, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'optimize_budget' or 'list_campaigns' for checking current budgets. It mentions a cost ('5 credits'), which hints at resource usage but doesn't specify prerequisites, conditions, or exclusions for invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_landing_page_html (grade A)
Update the HTML content of an existing landing page without AI regeneration (free). Use this to patch copy, add logos, or tweak layout after reviewing a draft.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | ||
| html_content | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
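As a hedged illustration, a call might look like the sketch below; the slug value is a hypothetical page identifier, and whether html_content replaces the entire document or only a fragment is not documented:
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "update_landing_page_html",
    "arguments": {
      "slug": "spring-sale",
      "html_content": "<html><body><h1>Spring Sale</h1></body></html>"
    }
  }
}
Since the tool bypasses AI regeneration, the safest workflow is to review the current draft first and submit the full edited HTML rather than a partial snippet.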
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: this is a mutation operation (update), it doesn't use AI regeneration, and it's free. However, it doesn't mention permission requirements, whether changes are reversible, rate limits, or how existing HTML that the update doesn't address is handled. The description adds useful context but leaves gaps for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise and front-loaded. The first sentence establishes the core purpose, and the second sentence provides usage examples without any wasted words. Every sentence earns its place by adding specific value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a mutation tool with no annotations, 2 parameters, and an output schema exists (which handles return values), the description does well. It covers purpose, usage context, and key behavioral aspects (no AI, free). The main gap is lack of parameter explanations, but with only 2 parameters and an output schema, this is somewhat mitigated. For a mutation tool, it could benefit from more safety/constraint information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter documentation. The description doesn't explicitly explain what 'slug' or 'html_content' parameters mean or their formats. However, for a tool with only 2 parameters, the description implies their purpose through context ('update... landing page' suggests slug identifies the page, 'HTML content' suggests the new content). This provides some semantic value but doesn't fully compensate for the 0% coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Update the HTML content'), resource ('an existing landing page'), and method ('without AI regeneration'). It distinguishes from sibling tools like 'create_landing_page' by specifying it's for updates, not creation, and from 'publish_landing_page' by focusing on content modification rather than publishing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: 'to patch copy, add logos, or tweak layout after reviewing a draft.' It implies this is for post-draft modifications and mentions it's 'free' (no AI regeneration cost). However, it doesn't explicitly state when NOT to use it or name specific alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
upsert_plan_entity (grade C)
Add or update an entity (campaign, ad, tweet, go_link, etc.) within a campaign plan (2 credits).
| Name | Required | Description | Default |
|---|---|---|---|
| plan_id | Yes | ||
| platform | Yes | ||
| remote_id | No | ||
| entity_type | Yes | ||
| logical_key | Yes | ||
| desired_state | No | | paused |
| metadata_json | No | ||
| remote_ref_json | No | ||
| parent_logical_key | No | ||
| provider_account_id | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
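Given ten undocumented parameters, the sketch below is guesswork over the schema: every identifier is hypothetical, and the assumption that logical_key acts as the upsert key (with parent_logical_key linking child entities to parents) is inferred from the names, not confirmed by the description:
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "upsert_plan_entity",
    "arguments": {
      "plan_id": "plan_abc123",
      "platform": "meta",
      "entity_type": "ad",
      "logical_key": "spring-sale-ad-1",
      "parent_logical_key": "spring-sale-adset-1",
      "desired_state": "paused"
    }
  }
}
Note that desired_state defaults to 'paused' per the schema, which suggests (but does not guarantee) that upserted entities are not activated unless explicitly requested.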
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the credit cost (2 credits), which is useful behavioral context about resource consumption. However, it doesn't disclose other critical traits: whether this is a read or write operation (implied by 'add or update' but not explicit), what happens on conflicts, whether changes are reversible, or authentication needs. For a mutation tool with 10 parameters and no annotations, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise: a single sentence with no wasted words, front-loaded with the core action ('Add or update an entity') and followed by scope and credit cost. Every element (entity examples, plan context, credits) adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (10 parameters, mutation operation, no annotations) and the presence of an output schema (which might cover return values), the description is incomplete. It lacks parameter explanations, behavioral details beyond credits, and usage context. For a tool that likely modifies campaign plans, this leaves too many gaps for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so parameters are undocumented in the schema. The description provides no information about any of the 10 parameters—it doesn't explain what 'plan_id', 'logical_key', 'platform', etc., mean or how they interact. With high parameter count and zero schema coverage, the description fails to compensate, leaving parameters semantically opaque.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Add or update') and the resource ('an entity within a campaign plan'), with examples of entity types (campaign, ad, tweet, etc.). It doesn't explicitly differentiate from siblings like 'create_campaign_plan' or 'update_campaign_budget', but the focus on entities within plans provides reasonable distinction. The credit cost mention is additional context but not core to purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives no guidance on when to use this tool versus alternatives such as 'create_campaign_plan' or 'update_campaign_budget'. It notes that the tool operates on entities within a campaign plan, but doesn't specify prerequisites (e.g., the plan must already exist) or compare it to other entity-management tools. The credit cost (2 credits) hints at resource usage but not at decision criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_custom_domain (grade C)
Check if DNS is configured for a landing page's custom domain. Free — no credits charged.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
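A call is straightforward given the single parameter, though the sketch below still assumes that slug is the landing page's URL identifier rather than the custom domain itself:
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "verify_custom_domain",
    "arguments": {
      "slug": "spring-sale"
    }
  }
}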
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool is 'Free — no credits charged,' which adds useful context about cost, but it doesn't describe other critical behaviors: what the check entails (e.g., DNS record validation), response format, error handling, or rate limits. For a verification tool with zero annotation coverage, this leaves significant gaps in understanding its operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence clearly states the tool's purpose, and the second adds valuable cost information. There's no wasted language, and both sentences earn their place by providing essential context efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (verification operation with 1 parameter) and the presence of an output schema (which handles return values), the description is partially complete. It covers the purpose and cost aspect but lacks details on parameter semantics, usage context, and behavioral traits beyond cost. With no annotations and incomplete parameter info, it's adequate but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter ('slug') with 0% description coverage, meaning the schema provides no details about this parameter. The description adds no semantic information about 'slug'—it doesn't explain what a slug is, its format, or how it relates to the custom domain. With low schema coverage, the description fails to compensate, leaving the parameter's meaning unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Check if DNS is configured for a landing page's custom domain.' It specifies the action ('Check'), resource ('DNS'), and scope ('landing page's custom domain'). However, it doesn't explicitly differentiate from sibling tools like 'setup_custom_domain', which might be a related setup operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'Free — no credits charged,' which hints at cost implications but doesn't specify prerequisites (e.g., after setting up a custom domain) or compare it to other verification tools. Without explicit when/when-not instructions, it's insufficient for optimal tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_pixel_ownership (grade A)
Cross-validate that the pixel/tag IDs on a landing page belong to the connected ad account. Detects wrong-pixel installs, domain registration gaps, and CAPI mismatches. Free — no credits.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | ||
| platform | Yes | ||
| account_id | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
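As an illustration only, a call might look like the following; the URL is hypothetical, and the 'act_' account ID prefix follows Meta's convention but is an assumption about what this server expects:
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "verify_pixel_ownership",
    "arguments": {
      "url": "https://promo.example.com/spring-sale",
      "platform": "meta",
      "account_id": "act_1234567890"
    }
  }
}
Since account_id is optional, the server presumably falls back to the connected account when it is omitted, though the description does not confirm this.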
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool is 'Free — no credits', which adds useful context about cost, but does not cover other behavioral aspects like required permissions, rate limits, response format, or whether it performs read-only versus write operations. The description doesn't contradict annotations (none exist).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the core purpose and specific detection capabilities, and the second adds cost information. Every phrase adds value without redundancy, making it appropriately sized and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (verification with potential mismatches), no annotations, and an output schema (which reduces need to describe returns), the description is partially complete. It covers purpose and cost well but lacks parameter explanations and behavioral details like error handling or prerequisites, leaving gaps for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. However, it provides no information about the three parameters (url, platform, account_id), their meanings, formats, or examples. The description focuses on the tool's purpose but leaves parameters entirely unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('cross-validate', 'detects') and resources ('pixel/tag IDs', 'landing page', 'connected ad account'), and distinguishes it from siblings by focusing on ownership verification rather than creation, optimization, or analysis tasks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning specific scenarios ('wrong-pixel installs', 'domain registration gaps', 'CAPI mismatches'), but does not explicitly state when to use this tool versus alternatives or provide exclusions. The sibling tools list shows no direct alternatives for verification, making explicit comparison less critical.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Once verified, you can:
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is marked unhealthy when Glama cannot successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.