Glama

Server Details

Meta Ads MCP (Facebook + Instagram) - analyze performance, manage budgets, pause campaigns.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL
Repository: nowork-studio/toprank
GitHub Stars: 499

Tool Descriptions (Grade: A)

Average 4.2/5 across 27 of 27 tools scored. Lowest: 3.6/5.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have clearly distinct purposes (e.g., createCampaign vs createAdSet vs createAd). The main source of ambiguity is the 'fileInternalNotFairToolFeedback' tool, which is unrelated to ad management and could confuse agents. Also, 'updateAdSet' and 'updateAdSetBudget' have overlapping scope, though the latter is a subset.

Naming Consistency: 3/5

The majority of tools follow a verb_noun camelCase pattern (e.g., createCampaign, pauseAd). However, 'fileInternalNotFairToolFeedback' and 'runScript' deviate significantly, and 'getPagePostInsights' is inconsistently named compared to 'getInsights'. Overall, the pattern is recognizable but not perfectly consistent.

Tool Count: 3/5

27 tools is on the higher end for an ad management server. The set includes many granular tools (separate enable/pause for each entity) plus a powerful 'runScript' that makes some individual read tools redundant. The internal feedback tool also inflates the count without contributing to the domain.

Completeness: 4/5

The tool set covers the full lifecycle for campaigns, ad sets, ads, and creatives (create, read, update, pause/enable, rename). Insights, account info, and page listing are also included. The main gap is the lack of delete/archive tools, which is a notable omission for completeness.

Available Tools

27 tools
createAd (Grade: A)
Idempotent

Create a new ad inside an existing ad set, attaching an existing creative by id. Defaults to status=PAUSED. Call createAdCreative first to get a creative_id.

Parameters (JSON Schema)

  • name (required)
  • status (optional, default: PAUSED)
  • adset_id (required)
  • accountId (optional): Account ID (omit for primary)
  • creative_id (required): Id of an existing ad creative.
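The described flow (createAdCreative first, then createAd with the returned creative_id) can be sketched as a small payload builder. This is an illustrative sketch only: the helper function and the placeholder IDs are hypothetical, not part of the MCP server.

```python
# Hypothetical sketch: assemble the arguments for a createAd call.
# Field names mirror the parameter table above; the helper itself and the
# placeholder IDs are illustrative, not part of the server.

def build_create_ad_args(name, adset_id, creative_id,
                         status="PAUSED", account_id=None):
    # creative_id should come from a prior createAdCreative call.
    args = {
        "name": name,
        "adset_id": adset_id,
        "creative_id": creative_id,
        "status": status,  # the server defaults to PAUSED for safe review
    }
    if account_id is not None:  # omit accountId to target the primary account
        args["accountId"] = account_id
    return args

args = build_create_ad_args("Spring promo ad", "adset_123", "creative_456")
```

Omitting accountId targets the primary account, matching the parameter table's default behavior.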
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses the default status (PAUSED) and the requirement to attach an existing creative, adding behavioral context beyond the annotations. The annotations (idempotentHint=true, destructiveHint=false, readOnlyHint=false) are consistent, and there is no contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: two sentences that front-load the purpose and then add a crucial usage hint. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a creation tool with 5 parameters and no output schema, the description covers the essential flow (prerequisite, mandatory IDs, default status). It could mention potential errors (e.g., if creative or ad set doesn't exist) but is adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 40%, but the description adds meaning for creative_id ('attaching an existing creative by id') and notes the default for status. However, other parameters like name and adset_id lack explanation beyond the schema's basic constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Create'), the resource ('a new ad inside an existing ad set'), and the required linkage to an existing creative. It also mentions the default status, making the purpose precise and distinguishing it from sibling tools like createAdCreative.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly tells the agent to call createAdCreative first to obtain a creative_id, establishing a clear prerequisite. However, it does not provide guidance on when to avoid this tool (e.g., if the ad set does not exist) or compare it to other ad-related tools like updateAd.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

createAdCreative (Grade: A)
Idempotent

Create an ad creative on the ad account. Pass object_story_spec as a JSON object with page_id plus one of link_data / photo_data / video_data / template_data. Returns the new creative id, which is then used in createAd's creative_id. Use listPages to get a valid page_id for object_story_spec.

Parameters (JSON Schema)

  • name (required)
  • accountId (optional): Account ID (omit for primary)
  • object_story_spec (required): { page_id: string, link_data?: {...}, photo_data?: {...}, video_data?: {...} }. page_id is required.
  • degrees_of_freedom_spec (optional): Optional Advantage+ creative degrees-of-freedom spec for AI-driven creative variation.
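The object_story_spec constraint (page_id plus exactly one of link_data / photo_data / video_data / template_data) can be enforced with a small validator. A hedged sketch: the helper is hypothetical, and the Meta API performs its own authoritative validation.

```python
# Hypothetical validator for createAdCreative's object_story_spec: page_id
# plus exactly one of the four content variants, per the description above.

def build_object_story_spec(page_id, **variants):
    allowed = {"link_data", "photo_data", "video_data", "template_data"}
    unknown = set(variants) - allowed
    if unknown:
        raise ValueError(f"unknown keys: {unknown}")
    chosen = {k: v for k, v in variants.items() if v is not None}
    if len(chosen) != 1:
        raise ValueError(
            "pass exactly one of link_data/photo_data/video_data/template_data")
    return {"page_id": page_id, **chosen}

# page_id should come from listPages; the value here is a placeholder.
spec = build_object_story_spec(
    "page_123",
    link_data={"link": "https://example.com", "message": "Shop now"},
)
```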
Behavior: 2/5

The description says 'Create' and 'Returns the new creative id,' implying a non-idempotent write operation, contradicting the annotation idempotentHint=true. Annotations already indicate readOnlyHint=false and destructiveHint=false, but the description adds no detail on permissions or side effects. The contradiction undermines transparency.

Conciseness: 5/5

The description is three sentences, each essential and front-loaded. The first sentence states the core purpose, the second details the key parameter, and the third provides a cross-reference. No unnecessary words.

Completeness: 3/5

Given the tool's complexity (4 parameters, nested objects, no output schema), the description covers the main flow and key parameter but omits explanation of degrees_of_freedom_spec and does not clarify the idempotency contradiction. It is adequate but not fully complete.

Parameters: 4/5

The description adds value beyond the input schema by specifying the required structure for object_story_spec (one of four data types) and recommending listPages for page_id. With 75% schema coverage, this reduces ambiguity, especially for the nested object parameter.

Purpose: 5/5

The description clearly states 'Create an ad creative on the ad account,' specifying the verb and resource. It distinguishes from siblings like updateAdCreative by focusing on creation. It also mentions the returned id's usage in createAd, reinforcing its role.

Usage Guidelines: 4/5

The description provides explicit guidance on constructing object_story_spec (page_id plus one of link_data/photo_data/video_data/template_data) and directs to use listPages for a valid page_id. It explains the tool's output is used in createAd. It does not explicitly state when not to use or alternatives, but the context is clear.

createAdSet (Grade: A)
Idempotent

Create a new ad set under an existing campaign. Targeting is a JSON spec (geo_locations, age_min, age_max, genders, interests, etc.). Either set a budget here or rely on the parent campaign's CBO. Defaults to status=PAUSED.

Parameters (JSON Schema)

  • name (required)
  • status (optional, default: PAUSED)
  • end_time (optional): ISO 8601.
  • accountId (optional): Account ID (omit for primary)
  • targeting (required): Meta targeting spec. Minimum: { geo_locations: { countries: ["US"] } }. Add age_min, age_max, genders, interests, custom_audiences, behaviors, locales, publisher_platforms etc. as needed.
  • bid_amount (optional): Bid cap or cost cap in account-currency MINOR units.
  • start_time (optional): ISO 8601.
  • campaign_id (required)
  • bid_strategy (optional)
  • daily_budget (optional)
  • billing_event (required): IMPRESSIONS | LINK_CLICKS | THRUPLAY | PURCHASE | etc. Determines how Meta charges.
  • lifetime_budget (optional)
  • promoted_object (optional): Required for some objectives (e.g. { page_id, application_id, pixel_id, custom_event_type }). Pass as JSON object.
  • optimization_goal (required): REACH | IMPRESSIONS | LINK_CLICKS | LANDING_PAGE_VIEWS | OFFSITE_CONVERSIONS | THRUPLAY | etc.
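The targeting parameter's documented minimum (geo_locations.countries) plus optional refinements can be sketched as a builder. The helper is hypothetical; only the field names are taken from the parameter notes above.

```python
# Hypothetical builder for createAdSet's targeting spec. The documented
# minimum is geo_locations.countries; other keys are layered on only when
# provided, so the minimal case stays minimal.

def build_targeting(countries, age_min=None, age_max=None,
                    genders=None, interests=None):
    targeting = {"geo_locations": {"countries": countries}}
    optional = {"age_min": age_min, "age_max": age_max,
                "genders": genders, "interests": interests}
    targeting.update({k: v for k, v in optional.items() if v is not None})
    return targeting

minimal = build_targeting(["US"])  # the documented minimum spec
refined = build_targeting(["US", "CA"], age_min=18, age_max=35)
```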
Behavior: 3/5

Annotations already indicate non-read-only, non-destructive, and idempotent. The description adds useful details like the targeting JSON spec, budget options, and default paused status, but does not specify return values or side effects.

Conciseness: 4/5

Three sentences, front-loaded with the main action, each adding unique value. Could be slightly more structured but is efficient overall.

Completeness: 3/5

Covers creation, targeting, budget, and default status. Missing details on return value (no output schema) and preconditions beyond campaign ID. Could mention required fields from the schema for completeness.

Parameters: 3/5

The schema has descriptions for most parameters (57% coverage). The description adds clarity on targeting structure and budget choices but does not cover all parameters equally, missing guidance on name, campaign_id, etc.

Purpose: 5/5

Clear verb 'Create' and resource 'ad set' with context 'under an existing campaign'. Distinguishes from sibling tools like createAd and createCampaign by specifying the target resource.

Usage Guidelines: 3/5

Implies usage when you need a new ad set under a campaign and mentions budget options and default status. However, it lacks explicit when-not-to-use guidance or alternatives like updateAdSet for modifications.

createCampaign (Grade: A)
Idempotent

Create a new campaign on the active (or specified) ad account. Returns the new campaign id and a snapshot of its fields. Defaults to status=PAUSED so the user can review before launching. Budgets are in account-currency MINOR units (cents for USD). special_ad_categories is required by Meta — pass ["NONE"] for a standard commercial ad, or one of EMPLOYMENT, HOUSING, CREDIT, ISSUES_ELECTIONS_POLITICS, ONLINE_GAMBLING_AND_GAMING, FINANCIAL_PRODUCTS_SERVICES for restricted categories.

Parameters (JSON Schema)

  • name (required)
  • status (optional, default: PAUSED)
  • accountId (optional): Account ID (omit for primary)
  • objective (required): Campaign objective. Common values: OUTCOME_TRAFFIC, OUTCOME_AWARENESS, OUTCOME_ENGAGEMENT, OUTCOME_LEADS, OUTCOME_SALES, OUTCOME_APP_PROMOTION.
  • stop_time (optional): ISO 8601 stop time (campaign-level CBO only).
  • start_time (optional): ISO 8601 start time (campaign-level CBO only).
  • bid_strategy (optional): LOWEST_COST_WITHOUT_CAP | LOWEST_COST_WITH_BID_CAP | COST_CAP | LOWEST_COST_WITH_MIN_ROAS. Required for some objectives when using Campaign Budget Optimization.
  • daily_budget (optional)
  • lifetime_budget (optional)
  • special_ad_categories (optional): Required by Meta. Use ["NONE"] for standard ads.
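Because budgets are passed in account-currency minor units (cents for USD), an explicit conversion step avoids an accidental 100x budget error. A minimal sketch under the assumptions in the description above; the helper and example values are illustrative:

```python
# Budgets for createCampaign are in account-currency MINOR units (cents for
# USD), so a $50.00 daily budget is passed as 5000. The conversion helper
# and the example payload are illustrative, not part of the server.

def dollars_to_minor_units(amount):
    return round(amount * 100)

campaign_args = {
    "name": "Spring Sale",
    "objective": "OUTCOME_SALES",
    # Required by Meta: ["NONE"] for a standard commercial ad, or one of the
    # restricted categories (EMPLOYMENT, HOUSING, CREDIT, ...).
    "special_ad_categories": ["NONE"],
    "daily_budget": dollars_to_minor_units(50.00),  # 5000 minor units
    # status is omitted: the server defaults to PAUSED for review.
}
```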
Behavior: 4/5

Discloses default behavior (PAUSED status), budget units in minor units, and the special_ad_categories requirement. Adds context beyond annotations (readOnlyHint=false, destructiveHint=false, idempotentHint=true). No contradiction with annotations, though idempotentHint may be inconsistent with creating a new campaign each call.

Conciseness: 5/5

Two concise sentences plus targeted detail. Front-loaded with purpose and return value, followed by key behavioral notes. No wasted words.

Completeness: 4/5

Covers creation, return format, defaults, and important parameter notes. Lacks explicit mention of required fields (name, objective), but the schema provides that. Adequate for a create tool with no output schema.

Parameters: 4/5

Adds significant meaning beyond the schema: explains budget units (cents for USD), enumerates special_ad_categories options, and clarifies the status default. The schema covers 60% of parameters; the description enriches understanding of critical fields.

Purpose: 5/5

Clearly states 'Create a new campaign' and specifies it works on the active or specified ad account, returning the campaign ID and a field snapshot. Distinguishes from sibling tools like createAd and createAdSet by focusing on campaign-level creation.

Usage Guidelines: 4/5

Describes the default PAUSED status for review, explains the special_ad_categories requirement, and budget units. However, it does not explicitly guide when to use this vs alternatives (e.g., createCampaign vs createAdSet) or when not to use it.

enableAd (Grade: A)
Idempotent

Re-activate a paused ad (status=ACTIVE). Both the parent ad set and campaign must also be ACTIVE for the ad to deliver.

Parameters (JSON Schema)

  • adId (required)
  • accountId (optional): Account ID (omit for primary)
Behavior: 4/5

Beyond annotations (idempotentHint=true), the description adds behavioral detail: both parent entities must be active for delivery. No contradictions with annotations.

Conciseness: 5/5

A single, front-loaded sentence with no wasted words. Every part adds value: verb, resource, outcome, and precondition.

Completeness: 4/5

For a simple mutation tool with no output schema, the description covers the key precondition and outcome. Missing error handling and response details, but adequate given the tool's simplicity.

Parameters: 2/5

Schema coverage is 50% (only accountId has a description). The description adds no parameter-level details, leaving the agent to infer adId's format or usage from scratch.

Purpose: 5/5

The description clearly states the verb 're-activate' and resource 'ad' with an explicit outcome (status=ACTIVE). It distinguishes from siblings like enableAdSet and enableCampaign by focusing on the ad level.

Usage Guidelines: 4/5

The description indicates use when an ad is paused and includes a critical precondition (parent ad set and campaign must be ACTIVE). It doesn't explicitly list alternatives but provides sufficient context for when to invoke.

enableAdSet (Grade: A)
Idempotent

Re-activate a paused Meta ad set (status=ACTIVE). The parent campaign must also be ACTIVE for delivery to resume.

Parameters (JSON Schema)

  • adSetId (required)
  • accountId (optional): Account ID (omit for primary)
Behavior: 4/5

The description adds value beyond annotations by specifying that the tool sets the ad set to ACTIVE and requires the parent campaign to be ACTIVE. Annotations already indicate idempotent and non-destructive behavior, so the description provides relevant contextual constraints without contradiction.

Conciseness: 5/5

The description is two sentences with no wasted words. It front-loads the action and immediately follows with the critical condition, making it efficient and easy to parse.

Completeness: 4/5

For a simple activation tool with good annotations and no output schema, the description covers the essential purpose and primary constraint. It could be slightly improved by explicitly noting that the ad set must be in a paused state (implied by 're-activate'), but overall it is complete enough for correct invocation.

Parameters: 2/5

The input schema has 2 parameters with 50% description coverage (accountId is described, adSetId is not). The description does not add any information about parameters, leaving adSetId undocumented. Given the moderate schema coverage, the description should have compensated by explaining the parameters but did not.

Purpose: 5/5

The description clearly states the action (re-activate a paused ad set), the resource (Meta ad set), and the specific status change (status=ACTIVE). It also includes a key prerequisite (parent campaign must be ACTIVE), distinguishing it from sibling tools like pauseAdSet or enableCampaign.

Usage Guidelines: 4/5

The description explicitly says when to use: to re-activate a paused ad set. It provides a critical condition (parent campaign must be ACTIVE) which implies when not to use it or what alternative to consider first (e.g., enableCampaign). However, it does not explicitly list alternatives or scenarios where the tool is inappropriate.

enableCampaign (Grade: A)
Idempotent

Re-enable a paused Meta campaign (status=ACTIVE). Note: Meta still requires that any underlying ad sets / ads be active for delivery to resume.

Parameters (JSON Schema)

  • accountId (optional): Account ID (omit for primary)
  • campaignId (required)
Behavior: 4/5

Annotations provide idempotentHint and destructiveHint. The description adds a critical behavioral note about Meta's requirement for active sub-entities, which goes beyond the annotations.

Conciseness: 5/5

Two sentences with the core action first, followed by a crucial note. No unnecessary words.

Completeness: 4/5

There is no output schema, but the tool is simple. The description covers the main action and a key behavioral constraint, though it lacks parameter details and explicit usage boundaries.

Parameters: 2/5

Schema description coverage is only 50%; the description provides no explanation for the parameters (campaignId, accountId). The optional accountId is not clarified, leaving ambiguity.

Purpose: 5/5

The description clearly states the tool re-enables a paused campaign (status=ACTIVE). It distinguishes from siblings like enableAd and enableAdSet by specifying the campaign level.

Usage Guidelines: 4/5

It implies use on paused campaigns and notes that underlying entities must be active for delivery, but does not explicitly exclude other states or suggest alternatives.

fileInternalNotFairToolFeedback (Grade: A)

Internal NotFair tool-feedback channel. Privately report MCP/tool friction that got in the way of helping the user — unclear descriptions, missing capabilities, clunky workflows, confusing errors, or duplicate tools.

This is not customer support, not user feedback, and not a success/quality rating. It is an internal NotFair engineering signal. When tool design gets in the way of a real user task, file one concrete report here so we can fix the tool surface. The user benefits in their next session; every other agent serving every other user benefits too.

AUTO-SURFACE THIS WHEN:

  • A tool description was unclear and you weren't sure how to use it.

  • You wanted to accomplish something for the user but no tool existed for it.

  • A workflow took many tool calls when one bulk operation could have replaced them.

  • An error message returned by a tool didn't help you debug or recover.

  • Two tools have overlapping purposes and the choice was confusing.

DO NOT call this for:

  • Individual operation errors (those are tracked automatically — never call this just because a tool returned an error).

  • Confirming that a task succeeded.

  • Rating your own output quality.

  • Anything the user explicitly asked you to escalate (use the in-app feedback form for that).

Be specific. Reference tools by name and propose a concrete change. Limit yourself to at most 2 calls per session. Submissions go directly to the NotFair team; the user does not see this channel.

Parameters (JSON Schema)

  • category (required): Type of feedback. description_unclear=tool docs were ambiguous; missing_capability=no tool for the intent; ergonomic=workflow took too many calls; error_message_unclear=error didn't help debugging; workflow_gap=tools couldn't be composed for the goal; duplicate_tools=two tools confusingly overlap; other=anything else.
  • user_goal (optional): What the user was trying to accomplish — gives the team the use case context. Avoid PII.
  • suggestion (required): Concrete change you'd recommend.
  • observation (required): What was confusing, painful, or missing. Be specific — quote what tripped you up.
  • affected_tool (required): Tool name (e.g. 'pauseKeyword'), or 'general' if cross-cutting.
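A report satisfying the parameter schema might look like the sketch below. The payload is illustrative: the category set is transcribed from the schema description, and the example report content is drawn from the overlap noted in this server's own coherence assessment.

```python
# Illustrative fileInternalNotFairToolFeedback payload. CATEGORIES is
# transcribed from the parameter table above; the report itself is a
# hypothetical example, not a real submission.

CATEGORIES = {"description_unclear", "missing_capability", "ergonomic",
              "error_message_unclear", "workflow_gap", "duplicate_tools",
              "other"}

report = {
    "category": "duplicate_tools",
    "affected_tool": "updateAdSet",
    "observation": ("updateAdSet and updateAdSetBudget overlap; it is "
                    "unclear which to call for a budget-only change."),
    "suggestion": "Document which tool is preferred for budget-only edits.",
}

# Basic shape checks an agent could run before submitting.
assert report["category"] in CATEGORIES
```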
Behavior: 5/5

Describes that submissions go directly to the NotFair team and the user does not see the channel, clarifying the write-operation nature. Annotations are neutral (not readOnly, destructive, or openWorld), and the description adds valuable context beyond annotations, such as the private nature and intended recipients.

Conciseness: 4/5

The description is well-structured with a clear opening, bullet points for trigger scenarios, and a do-not-use section. While slightly verbose, every sentence adds value, and the structure aids readability. Could be slightly trimmed but earns its length.

Completeness: 5/5

Given the tool's complexity (5 parameters, 4 required, enum categories), the description fully covers purpose, usage guidelines, and behavioral context. No output schema exists, so return values are not needed. Complete enough for an AI agent to correctly select and invoke the tool.

Parameters: 3/5

Schema coverage is 100% with adequate descriptions for each parameter. The description does not repeat parameter details but provides overall context. A baseline 3 is appropriate as the schema already handles parameter documentation.

Purpose: 5/5

The description clearly states it is an internal feedback channel for MCP/tool friction, using the verb 'file' and specifying the resource. It distinguishes itself from customer support and user feedback, and lists specific types of reports (unclear descriptions, missing capabilities, etc.), making the purpose unambiguous.

Usage Guidelines: 5/5

Explicitly provides 'AUTO-SURFACE THIS WHEN' and 'DO NOT call this for' sections, offering clear when-to-use and when-not-to-use guidance. Also limits usage to 2 calls per session, leaving no ambiguity about proper invocation.

getAdAccount (Grade: A)
Read-only

Snapshot of the ad account itself: id, name, currency, timezone, status, balance, amount_spent, spend_cap, disable_reason, owning Business Manager. Cheap one-call summary; pair with getInsights for performance.

Parameters (JSON Schema)
- accountId (optional): Account ID (omit for primary)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so the description does not need to restate safety. It adds value by describing the tool as 'cheap' (lightweight) and listing the specific returned fields, which goes beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words. The key information (purpose, fields, pairing suggestion) is front-loaded and efficiently delivered.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one optional parameter and no output schema, the description is complete: it explains what the tool returns, notes it is cheap, and suggests a complementary tool. No gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% coverage with a clear description for accountId ('Account ID (omit for primary)'). The description does not add additional meaning beyond the schema, meeting the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states it provides a 'Snapshot of the ad account itself' and lists specific fields (id, name, currency, etc.). It also distinguishes from the sibling tool getInsights by noting it is for 'performance' data, making the tool's purpose very clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description says it is a 'cheap one-call summary' and suggests pairing with getInsights for performance, giving context on when to use this tool. It implies it is for quick static info retrieval, but does not explicitly state when not to use or list alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getInsights (A)
Read-only
Inspect

Pull performance insights for the active (or specified) ad account. Wraps /{accountId}/insights with sensible defaults: campaign-level rows over the last 30 days, audit-friendly field set. Override level, date_preset or time_range, fields, breakdowns, etc. for narrower questions. Use runScript when you need to correlate insights with delivery info, recent edits, or cross-account joins.

Parameters (JSON Schema)
- level (optional): Aggregation level: account, campaign, adset, or ad. Default: campaign.
- limit (optional): Max total rows returned. The tool stops paginating once it has this many.
- fields (optional): Insight fields to fetch. Defaults to a sensible audit set (spend, impressions, clicks, ctr, cpc, cpm, reach, frequency, actions).
- accountId (optional): Account ID (omit for primary)
- breakdowns (optional): Breakdowns (e.g. ['country'], ['age,gender'], ['publisher_platform']).
- time_range (optional): Custom date range. Mutually exclusive with date_preset.
- date_preset (optional): Predefined window (e.g. last_7d, last_30d, last_90d, this_month, lifetime). Mutually exclusive with time_range.
- time_increment (optional): Bucket granularity, e.g. 1 (daily), 7 (weekly), 'monthly'.
- action_breakdowns (optional): Action breakdowns (e.g. ['action_type']).
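The override behavior of these parameters can be sketched with a small argument builder. `build_insights_args` below is a hypothetical helper, not part of the server; its keys mirror the schema, and it enforces the documented rule that date_preset and time_range are mutually exclusive:

```python
# Hypothetical helper for assembling getInsights arguments.
# Key names mirror the tool schema; the builder itself is illustrative.
def build_insights_args(level="campaign", date_preset=None, time_range=None,
                        breakdowns=None, limit=None):
    if date_preset and time_range:
        # The schema marks these two parameters as mutually exclusive.
        raise ValueError("date_preset and time_range are mutually exclusive")
    args = {"level": level}
    if date_preset:
        args["date_preset"] = date_preset
    if time_range:
        args["time_range"] = time_range
    if breakdowns:
        args["breakdowns"] = breakdowns
    if limit is not None:
        args["limit"] = limit
    return args

# Campaign-level rows for the last 7 days, broken down by country.
args = build_insights_args(date_preset="last_7d", breakdowns=["country"])
```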
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and destructiveHint=false. The description adds context on paging behavior (limit description mentions paging up to ~20 pages) and default behavior (campaign-level, 30 days). No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each serving a purpose: stating the function, describing defaults and overrides, and directing to a sibling when needed. No fluff, front-loaded with purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately explains inputs and default behavior, and it mentions paging and the audit-friendly field set. It lacks details on error handling and response format, though the annotations and schema compensate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds value beyond the schema by noting the paging behavior for 'limit' and framing parameters as overridable. This supplements the schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool pulls performance insights for ad accounts, wraps a specific API endpoint with sensible defaults, and differentiates from sibling tools like runScript by suggesting alternative usage for correlations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It clearly states when to use (pull performance insights with defaults) and provides an explicit alternative (runScript) for more complex queries. However, it could be more explicit about when not to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

getPagePostInsights (A)
Read-only
Inspect

Aggregate engagement metrics for a Page post (typically the post backing a boosted-post ad). Returns impressions, reach, reactions, and aggregate like / comment / share counts — never individual user data. Pair with getInsights to compare paid + organic performance on a boosted post.

Parameters (JSON Schema)
- postId (required): Page post id in `<page_id>_<post_id>` form (matches `effective_object_story_id` on a boosted-post ad's creative).
- metrics (optional): Insight metric names. Defaults to: post_impressions_unique, post_impressions_paid_unique, post_impressions_organic_unique, post_clicks, post_reactions_by_type_total.
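The postId format is mechanical enough to sketch. `make_post_id` is a hypothetical helper (not part of the server) showing the `<page_id>_<post_id>` shape the schema requires:

```python
# Hypothetical helper: getPagePostInsights expects postId in
# "<page_id>_<post_id>" form, matching effective_object_story_id
# on a boosted-post ad's creative.
def make_post_id(page_id: str, post_id: str) -> str:
    if not (page_id.isdigit() and post_id.isdigit()):
        raise ValueError("page_id and post_id should be numeric strings")
    return f"{page_id}_{post_id}"

args = {"postId": make_post_id("1234567890", "9876543210")}
```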
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds that the tool returns aggregate counts only, never individual user data, which is behavioral context beyond the annotations. Annotations already indicate read-only and non-destructive, and the description aligns with them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core purpose, and contains no extraneous words. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 params, no output schema), the description covers purpose, usage, behavioral traits, and parameter context. It is complete for an agent to select and invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds context linking `postId` to `effective_object_story_id` and mentions the default metrics. This enhances understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns aggregate engagement metrics for a Page post, specifically the post backing a boosted-post ad. It distinguishes itself from sibling tools like `getInsights` by focusing on aggregate counts for a single post.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly advises pairing with `getInsights` to compare paid and organic performance on a boosted post, providing clear guidance on when to use this tool and how it relates to another tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

listAdAccounts (A)
Read-only
Inspect

List Meta ad accounts connected to this session. Returns the active account id plus every selected account (id, name). Use the returned ids as accountId for other tools. For per-account currency, timezone, and Business Manager info, call getAdAccount with the id.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint true and destructiveHint false. The description adds that accounts are 'connected to this session' and describes return fields, but does not cover potential edge cases or further behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, direct, and front-loaded with the action. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and no output schema, the description sufficiently covers purpose, usage, and output. It does not document potential errors or empty results, but is complete enough for a straightforward list tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, so the schema is trivially self-documenting. The description adds value by explaining the output structure, which helps the agent use the returned data.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists Meta ad accounts connected to the session. It specifies the resource and scope, and distinguishes from sibling tools like getAdAccount and listAds.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description tells the agent to use the returned IDs as accountId for other tools, providing clear usage context. It does not explicitly mention when not to use it or alternatives, but the context is sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

listAds (A)
Read-only
Inspect

List ads, scoped to an account by default or to a specific ad set when adSetId is provided. Returns id, name, status, the parent ad set / campaign ids, the creative envelope, and timestamps. Use runScript for richer creative inspection (asset feed details, etc.).

Parameters (JSON Schema)
- limit (optional): Max total ads returned. The tool stops paginating once it has this many.
- adSetId (optional): Filter to ads under this ad set. Omit to list across the whole account.
- statuses (optional): Filter by effective_status. Default (unset): Meta returns ACTIVE + PAUSED only.
- accountId (optional): Account ID (omit for primary)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds context beyond the annotations: it explains scoping, lists the return fields, and indicates no destructive impact. The annotations already mark readOnlyHint, so the description complements them well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose. No redundant words. Efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the main use cases, scoping, and return fields. It does not mention pagination or ordering via limit, but it is sufficient for a list tool backed by annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaning for adSetId (scoping) and, implicitly, accountId, but does not elaborate on the limit or statuses parameters. The schema already partially covers the adSetId and accountId descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'List ads' with explicit scope (account or ad set) and lists return fields. Distinguishes from sibling 'runScript' by scope of detail.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly recommends 'runScript' for richer creative inspection, providing clear alternative. Could mention when not to use (e.g., for filtering other fields) but is sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

listAdSets (A)
Read-only
Inspect

List ad sets, scoped to an account by default or to a specific campaign when campaignId is provided. Returns id, name, status, optimization goal, billing event, bid amount/strategy, daily/lifetime budget, schedule, targeting summary, and promoted_object.

Parameters (JSON Schema)
- limit (optional): Max total ad sets returned. The tool stops paginating once it has this many.
- statuses (optional): Filter by effective_status. Default (unset): Meta returns ACTIVE + PAUSED only.
- accountId (optional): Account ID (omit for primary)
- campaignId (optional): Filter to ad sets under this campaign. Omit to list every ad set in the account.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. Description adds scoping behavior but no additional behavioral traits like rate limits, pagination, or auth requirements. No contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences front-load purpose then output fields. No filler, every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Description covers core functionality and output fields but omits pagination, default account behavior, and filtering details for statuses. Given annotations and no output schema, it's minimally adequate but not fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50%, and the description only adds context for campaignId and accountId (scoping). Parameters limit (with default/max/min) and statuses (enum array) are not explained, leaving a gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists ad sets, scoped by account or campaign, and enumerates returned fields. It distinguishes from sibling tools like listCampaigns and listAds by specifying scope and output.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use (to list ad sets for an account or campaign) but does not explicitly contrast with alternatives or provide when-not-to-use guidance. Sibling tools are present but not referenced.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

listCampaigns (A)
Read-only
Inspect

List campaigns under the active (or specified) ad account. Returns id, name, status, objective, budget fields, bid strategy, schedule, and timestamps. For richer cross-surface analysis (campaigns × insights × ads in one pass), use runScript instead.

Parameters (JSON Schema)
- limit (optional): Max total campaigns returned. The tool stops paginating once it has this many.
- statuses (optional): Filter by effective_status. Default (unset): Meta returns ACTIVE + PAUSED only — pass `['ACTIVE','PAUSED','ARCHIVED','DELETED']` to include archived and deleted campaigns.
- accountId (optional): Account ID (omit for primary)
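The statuses default can be made concrete with a sketch. `build_list_campaigns_args` is a hypothetical builder, not part of the server; passing the full status list is how an agent would include archived and deleted campaigns rather than relying on Meta's ACTIVE + PAUSED default:

```python
# The documented default: when statuses is unset, Meta returns
# ACTIVE + PAUSED campaigns only.
ALL_STATUSES = ["ACTIVE", "PAUSED", "ARCHIVED", "DELETED"]

def build_list_campaigns_args(statuses=None, limit=None, account_id=None):
    # Hypothetical argument builder; key names mirror the tool schema.
    args = {}
    if statuses:
        args["statuses"] = statuses
    if limit is not None:
        args["limit"] = limit
    if account_id:
        args["accountId"] = account_id
    return args

# Include archived and deleted campaigns explicitly.
full_history = build_list_campaigns_args(statuses=ALL_STATUSES, limit=200)
```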
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds return-field details but no new behavioral traits (e.g., rate limits, side effects).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two brief sentences deliver the purpose, scope, and alternative. No wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool with full schema coverage and clear annotations, the description suffices: it explains what is returned, how to scope (accountId), and when to use a sibling instead.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with each parameter described in the schema. The description adds no additional parameter-level meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists campaigns under an ad account, enumerates returned fields, and distinguishes from runScript for broader analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly suggests using runScript for richer cross-surface analysis, offering a clear alternative. However, it does not detail prerequisites or when not to use it beyond that.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

listPages (A)
Read-only
Inspect

List the Facebook Pages the connected user manages, so the agent can pick a Page identity for ad creatives (every ad's object_story_spec.page_id requires a Page the user has rights to). Returns id + name only — does NOT read Page content, posts, comments, or engagement. Optional businessId also includes Pages owned by that Business Manager.

Parameters (JSON Schema)
- limit (optional): Max total Pages returned across both sources.
- businessId (optional): Business Manager id (numeric, no prefix). When set, also returns Pages owned by that business.
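The numeric-only constraint on businessId is easy to guard in a sketch. `validate_business_id` is a hypothetical check, not part of the server:

```python
# Hypothetical validator: the listPages schema says businessId must be
# a numeric Business Manager id with no prefix.
def validate_business_id(business_id: str) -> str:
    if not business_id.isdigit():
        raise ValueError("businessId must be a numeric string, no prefix")
    return business_id

args = {"businessId": validate_business_id("1234567890"), "limit": 50}
```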
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, so the description adds value by specifying the exact output (id + name) and the optional businessId behavior. It does not contradict annotations and provides useful behavioral context beyond the structured fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, each serving a clear purpose: state the core function, clarify scope and limitations, and note an optional parameter. It is concise and front-loaded with the most important information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description sufficiently explains the return type (id + name only) and the effect of the businessId parameter. For a simple list tool with two optional parameters, the description covers the necessary aspects for an agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters. The description reinforces the businessId parameter's effect but doesn't add significant new meaning. Baseline 3 is appropriate given the schema already does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists Facebook Pages managed by the user, specifically for selecting a Page identity for ad creatives. It distinguishes itself by noting it only returns id and name, and does not read other content, setting it apart from sibling tools focused on ads and insights.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives clear context for when to use: to obtain a Page ID for ad creatives. It also tells what it does not do (read Page content, posts, etc.), though it doesn't explicitly mention alternative tools for those tasks. The context is sufficient for an agent to decide.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pauseAd (A)
Idempotent
Inspect

Pause a single ad (sets the ad's status=PAUSED — does not modify its creative). Reversible via enableAd.

Parameters (JSON Schema)
- adId (required)
- accountId (optional): Account ID (omit for primary)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate it is not read-only, not destructive, and idempotent. The description adds that it sets status to PAUSED and is reversible, confirming mutation behavior. However, it does not disclose potential side effects or conditions like whether the ad must be active.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: one sentence with a parenthetical clarification. No unnecessary words, and it conveys the core purpose and a key behavioral note.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

There is no output schema, and the description does not mention the return value or any success/failure indication. It also omits prerequisites (e.g., that the ad must exist). For a mutation tool, this is insufficient completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50% (only accountId has a description). The tool description does not add any extra meaning to the parameters, such as explaining what 'adId' represents or how to obtain it. The schema itself partially covers accountId, but the description should compensate for the missing 'adId' description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Pause') and resource ('a single ad'), and specifies the resulting status ('PAUSED'). It effectively distinguishes from sibling tools like 'pauseAdSet' and 'enableAd'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions reversibility via 'enableAd', providing a clear alternative. However, it does not explicitly state when to use this tool versus other pause-related siblings like 'pauseAdSet' or 'pauseCampaign', nor does it list prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pauseAdSet (A)
Idempotent
Inspect

Pause a Meta ad set (status=PAUSED). Pausing an ad set leaves the parent campaign untouched. Reversible via enableAdSet.

Parameters (JSON Schema)
- adSetId (required)
- accountId (optional): Account ID (omit for primary)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description adds behavioral context beyond annotations: pausing leaves the campaign untouched and is reversible. Annotations already indicate idempotency (idempotentHint=true) and non-destructiveness (destructiveHint=false), so the description complements well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, front-loaded with the action, and every sentence adds value. No superfluous text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple mutation tool with no output schema, the description covers purpose, effects, and reversibility. The undocumented adSetId parameter is a gap, but the description is sufficient overall given the tool's low complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 50% (only accountId has a description). The tool description does not explain the parameters, leaving adSetId undocumented, which is a significant gap for an agent deciding what to provide.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the action (pause), the target resource (Meta ad set), the resulting status (PAUSED), and distinguishes from siblings by noting it leaves the parent campaign untouched and is reversible via enableAdSet.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for pausing an ad set without affecting the campaign and provides a sibling for reversal. However, it does not explicitly exclude alternatives like pauseAd or pauseCampaign, though naming and context make it clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pauseCampaign (A)
Idempotent

Pause a Meta campaign by setting status=PAUSED. Reversible via enableCampaign. Returns before/after status snapshots so the agent can confirm the change.

Parameters (JSON Schema)

Name        Required  Description                        Default
accountId   No        Account ID (omit for primary)
campaignId  Yes       Campaign id (numeric, no prefix).
Behavior 4/5

Annotations indicate idempotent, non-destructive, and non-read-only. The description adds that it is reversible and returns before/after status snapshots, providing useful behavioral context beyond annotations.

Conciseness 5/5

Two concise sentences with critical information front-loaded. No fluff; every sentence adds value.

Completeness 4/5

For a simple 2-param tool with good annotations and no output schema, the description covers purpose, reversibility, and confirmation. It could mention rate limits or side effects, but these are not essential.

Parameters 3/5

Schema description coverage is 100%, so the description does not need to add much parameter info. It correctly focuses on the action but does not enhance parameter understanding beyond what the schema already provides.

Purpose 5/5

The description clearly states the action ('Pause a Meta campaign') and the mechanism ('setting status=PAUSED'), distinguishing it from siblings like pauseAd or pauseAdSet, which target different resources.

Usage Guidelines 4/5

The description explicitly mentions reversibility via `enableCampaign`, guiding the agent on when to use this tool vs the sibling for enabling. It could be clearer about when not to use it, but the context is adequate.

renameAd (A)
Idempotent

Rename an ad (set its name field). Works on every ad type the user has rights to, including boosted-Page-post ads where status writes are blocked.

Parameters (JSON Schema)

Name       Required  Description                    Default
adId       Yes       Numeric ad id.
name       Yes
accountId  No        Account ID (omit for primary)
Behavior 5/5

The description adds value beyond annotations by explaining that the rename works even on boosted-Page-post ads where status writes are blocked, and that it is the canonical `pages_manage_ads` write. This provides authorization context and edge-case behavior not captured in annotations.

Conciseness 5/5

Two sentences, each dense with information: the first states action and scope, the second adds permission mapping and an edge case. No wasted words.

Completeness 4/5

For a simple rename operation with no output schema, the description covers the key aspects: action, scope, permission, and a special case (boosted-Page-post ads). It does not mention return values or error handling, but those are less critical for a straightforward write.

Parameters 3/5

Schema coverage is 67%: adId and accountId are described, and name is only constrained. The description mentions the `name` field but adds no semantic detail beyond the schema. A baseline 3 is appropriate, as the description does not compensate for the undocumented purpose of the name parameter.

Purpose 5/5

The description clearly states the tool renames an ad by setting its `name` field, specifies it works on all ad types the user has rights to, including boosted-Page-post ads, and distinguishes it from sibling tools like renameCampaign by focusing on ads. The verb 'rename' and resource 'ad' are specific.

Usage Guidelines 4/5

The description tells when to use the tool (renaming ads) and when it works (all ad types with rights, including those where status writes are blocked). It indirectly suggests alternatives by mentioning the canonical `pages_manage_ads` write, but does not explicitly list alternatives like renameCampaign for campaigns.

renameCampaign (A)
Idempotent

Rename a campaign (sets the name field).

Parameters (JSON Schema)

Name        Required  Description                    Default
name        Yes
accountId   No        Account ID (omit for primary)
campaignId  Yes
Behavior 3/5

Annotations already cover idempotency and non-destructiveness; the description adds that the tool specifically sets the `name` field. It does not elaborate on permissions, side effects, or return behavior beyond what the annotations imply.

Conciseness 5/5

A single, front-loaded sentence with no wasted words. Every part earns its place.

Completeness 3/5

Adequate for a simple rename operation given the annotations, but it lacks details on success/error behavior, return value, and prerequisites. Minimal completeness for the complexity level.

Parameters 2/5

Schema coverage is low (33%, with only accountId described). The description explains the `name` parameter's role (the new name) but does not compensate for undocumented parameters like campaignId or clarify accountId beyond the schema's existing description. Overall, it is insufficient given the low coverage.

Purpose 5/5

Clearly states 'Rename a campaign' with a specific verb and resource, and adds detail about setting the `name` field. Easily distinguished from sibling tools like enableCampaign or updateCampaignBudget.

Usage Guidelines 3/5

It is implicitly clear when to use the tool (when renaming a campaign), but there is no explicit guidance on when not to use it or on alternatives among siblings. No exclusionary context is provided.

runScript (A)
Read-only

Run a JavaScript orchestration script in a sandboxed QuickJS runtime against the Meta Marketing API (Facebook + Instagram Ads). One runScript call can replace 10+ sequential Graph API tool invocations.

── WHEN TO USE THIS ──

Default tool for any open-ended analytical question about a Meta ad account. Reach for it first when you see:

  • "How is my campaign doing?" / "What's working?" / "Find ad sets with bad ROAS" / "Why did CPM spike last week"

  • "Audit my account" / "Rank ad sets by spend efficiency" / "Compare creatives"

  • Any question where you'd otherwise call 3+ Graph endpoints in sequence

  • Any question that benefits from correlating insights + delivery info + recent edits in a single pass

runScript owns reads — there are no per-surface read tools. Use getInsights only for the dedicated 1-account-1-window pull when you don't need to correlate.

── BATCHING DISCIPLINE ──

Prefer ONE runScript call that fans out via ads.graphParallel (up to 20 calls concurrently). Cast a wide net on the first call; filter in-script for free.

── API SURFACE (all on the ads namespace) ──

Async RPCs:

  • ads.graph(path, params?, method?) -> JSON — single Graph API call. Path may use the {accountId} template token (replaced with the active act_<id>). Default method: GET.

  • ads.graphParallel([{ name, path, params?, method?, paged?, limit? }]) -> { [name]: { ok, data } | { ok: false, error } } — fan-out, max 20.

    • Set paged: true to follow paging.next (capped at 20 pages). limit trims the final list to N rows.

  • ads.insights(adAccountId?, options?) -> rows — wrapper over /{accountId}/insights with sensible defaults. Pass null for the active account.

    • options: { level: "account"|"campaign"|"adset"|"ad", date_preset, time_range:{since,until}, time_increment, fields, breakdowns, action_breakdowns, limit }

  • ads.batch([{ method, relative_url, body? }]) -> [{ code, body }] — Graph API /batch endpoint. Up to 50 sub-requests.

  • ads.pagedAll(path, params?, maxPages?) -> [...] — read every page of a paged endpoint.

Sync helpers:

  • ads.helpers.getDateRange(days) -> { since, until } — YYYY-MM-DD strings, UTC.

  • ads.helpers.formatDate(date) | daysBetween(a,b) | withActPrefix(id) | stripActPrefix(id)
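
The date helpers above can be pictured with a short sketch. This is an illustrative reimplementation, not the server's actual code: the `ref` parameter is added here only for testability, and the exact endpoint convention (that `until` is today in UTC and `since` is `days` earlier) is an assumption.

```javascript
// Illustrative sketch of ads.helpers.getDateRange(days): returns UTC
// YYYY-MM-DD strings. Assumption: `until` is today (UTC) and `since`
// is `days` earlier. The optional `ref` date is not part of the real helper.
function getDateRange(days, ref = new Date()) {
  const fmt = (d) => d.toISOString().slice(0, 10); // mirrors ads.helpers.formatDate
  const until = new Date(Date.UTC(ref.getUTCFullYear(), ref.getUTCMonth(), ref.getUTCDate()));
  const since = new Date(until.getTime() - days * 86400000); // 86400000 ms per day
  return { since: fmt(since), until: fmt(until) };
}
```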

Constants:

  • ads.activeAccountId — the active ad-account numeric id (no act_ prefix).

  • ads.fields.* — comma-joined field-list strings: campaign, adset, ad, adAccount, insightsAudit, insightsLite. Drop into params.fields.

  • ads.datePresets — array of preset strings accepted by /insights date_preset.

Path templates:

  • "/{accountId}/campaigns" → "/act_<id>/campaigns"

  • "/{accountId}/insights" → "/act_<id>/insights"

  • Plain ids like "/me/adaccounts" are untouched.
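
The substitution rule above can be sketched in one hypothetical helper that mirrors the documented behavior (the real logic lives inside the server):

```javascript
// Sketch of the documented path-template rule: "{accountId}" expands to
// "act_<id>"; paths without the token pass through untouched.
function expandPath(path, accountId) {
  return path.replace("{accountId}", "act_" + accountId);
}
```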

── COMMON PATTERNS ──

Single insights pull:

return await ads.insights(null, {
  level: "campaign",
  date_preset: "last_30d",
  fields: ads.fields.insightsAudit.split(","),
});

Audit fan-out — campaigns + ad sets + ads + last 30d insights, in one call:

const r = await ads.graphParallel([
  { name: "campaigns", path: "/{accountId}/campaigns", params: { fields: ads.fields.campaign }, paged: true },
  { name: "adsets",    path: "/{accountId}/adsets",    params: { fields: ads.fields.adset }, paged: true },
  { name: "ads",       path: "/{accountId}/ads",       params: { fields: ads.fields.ad }, paged: true, limit: 200 },
  { name: "insights",  path: "/{accountId}/insights",  params: { level: "campaign", date_preset: "last_30d", fields: ads.fields.insightsAudit }, paged: true },
]);
const worst = (r.insights.ok ? r.insights.data : []).filter(x => Number(x.spend) > 100 && Number(x.ctr) < 0.5);
return { worstCampaigns: worst, totals: { campaigns: r.campaigns.ok ? r.campaigns.data.length : 0, adsets: r.adsets.ok ? r.adsets.data.length : 0 } };

── RULES ──

  • Top-level await works. No fetch / require / process / fs reachable.

  • Return value must be JSON-serializable. Limits: 30s timeout (max 45s), 500KB return cap, 100K log chars.

  • Mutations (pause/enable/budget) go through dedicated tools (pauseCampaign, pauseAdSet, pauseAd, ...). Never write through runScript.

── ANTI-PATTERNS ──

  • Calling runScript 5+ times to fetch different surfaces — that's what graphParallel replaces.

  • Returning entire data arrays — summarize, rank, or aggregate first.

  • Manually computing dates with new Date() math — use ads.helpers.getDateRange / formatDate.

Parameters (JSON Schema)

Name       Required  Description                                                                             Default
code       Yes       JavaScript source. Top-level await allowed. See tool description for the API surface.
accountId  No        Account ID (omit for primary)
timeoutMs  No        Wall-clock cap before the script is interrupted. Default 30s, max 45s.
Behavior 5/5

Annotations (readOnlyHint, non-destructive) are confirmed. The description adds critical constraints: sandboxed, no fetch/require, 30s timeout, 500KB return cap, 100K log chars. It also details the API surface and the mutation prohibition.

Conciseness 5/5

Well-structured with clear sections (WHEN TO USE, API SURFACE, COMMON PATTERNS, RULES). Front-loaded with key purpose and contrasts. Every sentence adds value; no fluff.

Completeness 5/5

For a complex script execution tool with no output schema, the description is remarkably complete: it covers constraints, API surface, examples, anti-patterns, and limits. No gaps are left.

Parameters 4/5

The schema covers all three params with descriptions. The description enriches the 'code' param with the full API surface and examples, while 'accountId' and 'timeoutMs' are handled by the schema. A baseline of 3, elevated for the extensive context given to code.

Purpose 5/5

Clearly states it runs JavaScript in a sandboxed QuickJS runtime against the Meta Marketing API. Distinguishes itself from siblings by positioning as the default tool for analytical questions, replacing multiple Graph calls.

Usage Guidelines 5/5

Explicit when-to-use (analytical questions, auditing) and when-not-to-use (mutations go to dedicated tools). It provides batching discipline and anti-patterns, guiding the agent toward efficient invocation.

updateAdCreative (A)
Idempotent

Swap the creative on an existing ad to a different creative. The new creative must already exist (call createAdCreative first to mint one). Useful for A/B testing or refreshing fatigued creative without rebuilding the ad set.

Parameters (JSON Schema)

Name         Required  Description                    Default
adId         Yes
accountId    No        Account ID (omit for primary)
creative_id  Yes
Behavior 3/5

Annotations already indicate the tool is a write (readOnlyHint=false), idempotent (idempotentHint=true), and non-destructive (destructiveHint=false). The description adds the precondition about creative existence, but does not disclose behavioral traits beyond the annotations, such as authorization or rate limits.

Conciseness 5/5

The description is three sentences, front-loading the purpose. Each is essential: the first states the action, the second the precondition, and the third the use cases. No unnecessary words.

Completeness 3/5

Given the tool's simplicity (3 params, no output schema), the description covers the core purpose and precondition but lacks parameter guidance. It is adequate for basic use but incomplete for an agent that needs to know parameter meanings.

Parameters 2/5

Schema description coverage is only 33% (only 'accountId' has a description). The description does not explain the required parameters 'adId' or 'creative_id', nor their formats. It mentions 'creative' but not the parameter name, so it adds minimal meaning beyond the schema.

Purpose 5/5

The description clearly states the action ('Swap the creative') and the resource ('existing ad'), using specific verbs and nouns. It distinguishes itself from sibling tools like 'createAdCreative' by focusing on swapping creatives on existing ads.

Usage Guidelines 4/5

The description provides explicit guidance on when to use the tool ('A/B testing or refreshing fatigued creative') and includes a precondition ('The new creative must already exist') with a reference to 'createAdCreative'. While it lacks explicit when-not-to-use guidance, it offers sufficient context for selection.

updateAdSet (A)
Idempotent

Update one or more ad-set fields beyond status / budget. Covers targeting, optimization_goal, billing_event, bid_amount/bid_strategy, schedule (start_time/end_time), and Advantage+ promoted_object. Pass only the fields that are changing. For simpler edits, prefer pauseAdSet / enableAdSet / updateAdSetBudget.
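
Because targeting replaces the whole spec, a safe edit is read-merge-write: fetch the current spec (e.g. via runScript), merge the change in, then send the full object to updateAdSet. A minimal sketch of the merge step follows; the shallow merge is an assumption, and nested keys such as geo_locations are still replaced whole.

```javascript
// Shallow read-merge-write helper for targeting edits. Top-level keys in
// `changes` override `current`; everything else is preserved so the full
// spec can be sent back to updateAdSet.
function mergeTargeting(current, changes) {
  return { ...current, ...changes };
}
```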

Parameters (JSON Schema)

Name               Required  Description                                                                    Default
name               No
status             No
adSetId            Yes
end_time           No        ISO 8601.
accountId          No        Account ID (omit for primary)
targeting          No        Replaces the entire targeting spec. Provide the full object — Meta does not merge.
bid_amount         No
start_time         No        ISO 8601.
bid_strategy       No
daily_budget       No
billing_event      No
lifetime_budget    No
promoted_object    No
optimization_goal  No
Behavior 4/5

Annotations show idempotentHint=true and non-destructive. The description adds key behavioral info: 'Pass only the fields that are changing' and the note that targeting replaces the entire spec (no merge). It could elaborate on idempotency, but it is sufficient.

Conciseness 5/5

Four short sentences: purpose, coverage, usage rule, and alternatives. Front-loaded, with no wasted words.

Completeness 4/5

Given 14 parameters and nested objects, the description covers the key aspects: fields affected, non-merging targeting behavior, and guidance to pass only changing fields. It lacks return format and error handling, but no output schema exists, so that is acceptable.

Parameters 4/5

Schema coverage is low (29%), but the description lists the important parameter groups (targeting, optimization_goal, etc.) and mentions that budgets are handled separately via updateAdSetBudget. There is no param-by-param detail, but the summary covers the main payload categories.

Purpose 5/5

The description clearly states the tool updates 'one or more ad-set fields beyond status / budget', listing specific fields (targeting, optimization_goal, etc.), and distinguishes it from simpler sibling tools like pauseAdSet, enableAdSet, and updateAdSetBudget.

Usage Guidelines 5/5

Provides explicit when-to-use (changes beyond status/budget) and when-not-to (prefer simpler tools for simpler edits), naming the alternatives. This clearly guides selection.

updateAdSetBudget (A)
Idempotent

Update an ad set's daily or lifetime budget. Pass exactly one of dailyBudget or lifetimeBudget, in account-currency MINOR units. Note: Meta blocks this when the parent campaign uses Campaign Budget Optimization (CBO).
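
The MINOR-units convention is easy to get wrong. This is a hypothetical client-side helper, not part of the server's API, and it assumes a two-decimal currency such as USD:

```javascript
// Convert a major-unit amount (e.g. dollars) to Meta's minor units
// (e.g. cents). Assumes a two-decimal currency; zero-decimal currencies
// would need different handling.
function toMinorUnits(majorAmount) {
  return Math.round(majorAmount * 100);
}
```

So a $50.00 daily budget would be passed as dailyBudget: 5000.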

Parameters (JSON Schema)

Name            Required  Description                    Default
adSetId         Yes
accountId       No        Account ID (omit for primary)
dailyBudget     No
lifetimeBudget  No
Behavior 4/5

Annotations already indicate idempotentHint=true and destructiveHint=false. The description adds important behavioral details: mutual exclusivity of the budget params, minor units, and CBO blocking. No contradictions with the annotations.

Conciseness 5/5

The description is three concise, well-structured sentences. The critical CBO caveat is clearly called out. No unnecessary words.

Completeness 4/5

With no output schema, the description omits return-value details. However, it covers the key usage constraints (params, units, CBO) adequately for a mutation tool. Slightly incomplete as a full picture.

Parameters 4/5

Schema coverage is low (25%). The description adds meaning beyond the schema by clarifying the mutual exclusivity of dailyBudget and lifetimeBudget and by specifying that values are in minor units. This partially compensates for the missing schema descriptions.

Purpose 5/5

The description clearly states the tool updates an ad set's budget, with a specific verb and resource. It is distinguished from sibling tools like updateCampaignBudget by specifying 'ad set' and 'budget'.

Usage Guidelines 4/5

The description provides explicit guidance: pass exactly one of dailyBudget or lifetimeBudget, and it warns about blocking when the parent campaign uses CBO. It implies when to use and when not to use the tool, though it does not list alternative tools explicitly.

updateCampaign (A)
Idempotent

Update one or more campaign fields beyond status / budget / name. Use this for bid strategy, start/stop time, or special_ad_categories changes. For simpler edits prefer pauseCampaign / enableCampaign / updateCampaignBudget / renameCampaign.

Parameters (JSON Schema)

Name                   Required  Description                    Default
name                   No
status                 No
accountId              No        Account ID (omit for primary)
stop_time              No        ISO 8601.
campaignId             Yes
start_time             No        ISO 8601.
bid_strategy           No
daily_budget           No
lifetime_budget        No
special_ad_categories  No
Behavior 3/5

Annotations already indicate idempotentHint=true and destructiveHint=false. The description correctly implies modification but adds no further behavioral details such as permission requirements, partial-update behavior, or error handling. It adds moderate value by specifying which fields are updated.

Conciseness 5/5

Three sentences, front-loaded with the precise purpose, followed by usage guidance. No wasted words.

Completeness 3/5

Given 10 parameters, low schema coverage, and no output schema, the description is incomplete. It omits return values, error states, and behavior when fields are omitted. However, it provides adequate high-level context for tool selection.

Parameters 3/5

Schema coverage is 30% (only accountId, stop_time, and start_time have descriptions). The description references bid_strategy, start_time, stop_time, and special_ad_categories, adding meaning beyond the schema for these. However, it does not cover all 10 parameters (e.g. daily_budget, lifetime_budget, name, and status are mentioned only indirectly). It partially compensates for the low schema coverage.

Purpose 5/5

The description uses the verb 'Update' with specific resources ('campaign fields beyond status / budget / name') and lists concrete use cases (bid strategy, start/stop time, special_ad_categories). It differentiates from siblings by contrasting with pauseCampaign/enableCampaign/updateCampaignBudget/renameCampaign.

Usage Guidelines 5/5

Explicitly states when to use: 'for bid strategy, start/stop time, or special_ad_categories changes'. Also tells when not to: 'For simpler edits prefer pauseCampaign / enableCampaign / updateCampaignBudget / renameCampaign'. Provides clear alternatives.

updateCampaignBudget (A)
Idempotent

Update a campaign's daily or lifetime budget. Pass exactly one of dailyBudget or lifetimeBudget. Values are in the ad account's currency MINOR units (cents for USD, etc.) — Meta's native unit, no conversion done. Use getAdAccount if you need the currency first.
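
The "exactly one of" rule can be enforced client-side before calling the tool. This is a hypothetical pre-flight check, not part of the server; the server and Meta would reject invalid combinations anyway.

```javascript
// Guard for the mutually exclusive budget arguments: exactly one of
// dailyBudget / lifetimeBudget must be provided.
function validateBudgetArgs({ dailyBudget, lifetimeBudget } = {}) {
  const provided = [dailyBudget, lifetimeBudget].filter((v) => v !== undefined);
  if (provided.length !== 1) {
    throw new Error("Pass exactly one of dailyBudget or lifetimeBudget");
  }
}
```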

Parameters (JSON Schema)

Name            Required  Description                                                                                                 Default
accountId       No        Account ID (omit for primary)
campaignId      Yes
dailyBudget     No        New daily budget in account currency MINOR units (e.g. 5000 = $50.00 USD). Mutually exclusive with lifetimeBudget.
lifetimeBudget  No        New lifetime budget in account currency MINOR units. Mutually exclusive with dailyBudget.
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate idempotentHint=true and destructiveHint=false. The description adds that budgets use 'account currency MINOR units' and 'no conversion done,' providing context beyond annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences: purpose, usage rule, and unit warning. No unnecessary words, efficient delivery.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers budget types, mutual exclusivity, units, and a hint to check the account currency first. It omits details about the response format and other constraints, but it is sufficient for typical update operations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 75% with descriptions for dailyBudget and lifetimeBudget. The description reinforces mutual exclusivity and clarifies unit meaning, adding value over schema alone. The accountId parameter is mentioned indirectly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Update a campaign's daily or lifetime budget,' specifying the verb (update) and resource (campaign budget). This distinguishes it from sibling tools like updateAdSetBudget.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Pass exactly one of dailyBudget or lifetimeBudget' and suggests using getAdAccount for currency. However, it does not explicitly contrast with sibling tools like updateAdSetBudget, leaving some ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
