
xpay✦ Marketing Collection

Server Details

30+ marketing tools from Brand.dev, Exa, Tavily, and Ideogram. Keyword research, brand monitoring, social scraping, and marketing image generation. $0.01/call.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Tool Descriptions (Grade: D)

Average 2/5 across 124 of 124 tools scored. Lowest: 1.4/5.

Server Coherence (Grade: C)
Disambiguation: 2/5

The tool set has significant overlap and ambiguity, particularly within the 'scrapecreators_' prefix where many tools appear to target similar social media platforms and content types (e.g., 'scrapecreators_posts', 'scrapecreators_posts_get', 'scrapecreators_post', 'scrapecreators_post_get'). Additionally, tools like 'tavily_research' and 'tavily_search' have overlapping purposes with 'web_search_exa', making it difficult for an agent to distinguish when to use each. While some tools like 'get_credits' or 'ideogram_v3' are distinct, the overall set is confusing due to redundant functionalities.

Naming Consistency: 2/5

Naming conventions are highly inconsistent across the tool set. There is a mix of snake_case (e.g., 'get_credits'), kebab-case (e.g., 'find-hooks'), and verbose prefixes (e.g., 'scrapecreators_'). The 'scrapecreators_' tools themselves vary in structure, with some using underscores and others not, and there are duplicate names with slight variations (e.g., 'scrapecreators_ad_details' vs. 'scrapecreators_ad_details_get'). This lack of a predictable pattern makes the tool set chaotic and hard to navigate.

Tool Count: 1/5

With 124 tools, the count is extremely high and inappropriate for the server's purpose, which appears to be marketing and social media data collection. This many tools suggests poor scoping, likely due to redundancy (e.g., multiple scraping tools for similar platforms) and overlapping functionalities. A well-scoped server in this domain should have far fewer tools, typically in the range of 10-30, to avoid overwhelming agents and ensure clarity.

Completeness: 4/5

Despite the high tool count and redundancy, the server covers a broad range of marketing-related functions comprehensively. It includes tools for social media hooks, content validation, SEO analysis (e.g., backlinks, keywords), voice archetypes, copywriting frameworks, and extensive scraping across multiple platforms. There are no obvious major gaps for the marketing domain, as it supports data gathering, content creation, and analysis across various networks and metrics, allowing agents to perform core marketing workflows effectively.

Available Tools

124 tools
find-hooks (Grade: B)

Find social media hooks by network and/or category

Parameters (JSON Schema)

- limit (optional): Maximum number of hooks to return
- network (optional): Social media network (twitter, instagram, linkedin, tiktok, youtube)
- category (optional): Hook category (engagement, educational, promotional, storytelling, etc.)
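Per the MCP specification, a client invokes a tool like this with a `tools/call` JSON-RPC request over the Streamable HTTP transport. A minimal sketch in Python; the argument values are illustrative, not taken from the listing:

```python
import json

# Hypothetical tools/call request for find-hooks. All three
# parameters are optional, so any subset may be supplied.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "find-hooks",
        "arguments": {
            "network": "twitter",       # twitter, instagram, linkedin, tiktok, or youtube
            "category": "educational",  # e.g. engagement, educational, promotional, storytelling
            "limit": 5,                 # maximum number of hooks to return
        },
    },
}
payload = json.dumps(request)
```

Because no parameter is required, `"arguments": {}` would also be a schema-valid call, which is exactly the ambiguity the Behavior score below flags: the description does not say what an unfiltered call returns.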
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure, but offers minimal information. It does not indicate whether hooks are returned randomly or deterministically, what happens if no matches exist, or what format the response takes.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence of eight words that front-loads the action verb. Every word earns its place with zero redundancy or filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (three flat parameters, 100% schema coverage) and lack of output schema, the description is minimally adequate. However, it could improve by briefly explaining what constitutes a 'hook' in this context or noting that results are paginated via the limit parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents all three parameters adequately. The description adds implicit value by using 'and/or' to suggest these are optional filters, matching the schema's zero required parameters, but does not elaborate on valid values or syntax beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Find') and clear resource ('social media hooks'), and identifies the filtering dimensions ('by network and/or category'). However, it does not explicitly distinguish from sibling tool 'get-network-categories-for-hooks', which returns metadata rather than hooks themselves.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'get-network-categories-for-hooks' or the various 'scrapecreators' content tools. There are no prerequisites, exclusions, or conditions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

flag-problematic-phrases (Grade: A)

Check text for phrases that should be avoided to make content more human and less AI-like. Returns any flagged phrases found in the text.

Parameters (JSON Schema)

- text (required): The text to check for problematic phrases
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses the return value ('Returns any flagged phrases found in the text') to compensate for the missing output schema. However, it omits explicit read-only safety guarantees or details about what constitutes a 'problematic' phrase.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences total, both earning their place: the first establishes purpose and scope, the second discloses the return behavior. Front-loaded with the action verb and no redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter validation tool without output schema or annotations, the description is adequately complete. It compensates for the missing output schema by describing the return value, and the 100% schema parameter coverage handles input semantics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing a complete baseline. The description mentions 'Check text' which aligns with the schema but does not add additional semantic details (format requirements, length limits, encoding) beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Check') and resource ('text') with a clear value proposition ('make content more human and less AI-like'). It effectively distinguishes this content validation tool from SEO and social media scraping siblings in the tool list.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While there are no explicit 'when to use' statements or named alternatives, the specific purpose ('avoided to make content...less AI-like') provides clear implied usage context. The agent can infer this is for editing/refining content before publication.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-archetype (Grade: A)

Get detailed information about a specific voice archetype including tweet examples

Parameters (JSON Schema)

- name (required): The archetype name (e.g., 'AUTHORITY', 'COMEDIAN', 'HUSTLER')
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses that the response includes 'tweet examples,' adding context about output content. However, it omits safety characteristics (read-only status), error behaviors (not found scenarios), or rate limit considerations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, front-loaded sentence with zero redundancy. Every phrase earns its place: 'Get' (action), 'detailed information' (depth), 'specific voice archetype' (resource), and 'including tweet examples' (distinguishing output feature).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 string parameter, no nested objects) and lack of output schema, the description provides minimum viable context by hinting at output content via 'tweet examples.' However, it leaves 'detailed information' undefined and doesn't clarify what other fields might be returned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing the parameter name, type, and examples ('AUTHORITY', 'COMEDIAN'). The description adds no explicit parameter semantics, but baseline 3 is appropriate when the schema is already comprehensive.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the action ('Get'), resource ('voice archetype'), and scope ('detailed information...including tweet examples'). It distinguishes from sibling 'list-archetypes' by emphasizing 'specific' and details the unique content type returned (tweet examples), though it could explicitly name the sibling contrast.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Usage is implied by the phrase 'specific voice archetype'—suggesting this requires a known identifier versus listing all archetypes—but lacks explicit guidance on when to use this versus 'list-archetypes' or prerequisites like needing to know valid archetype names beforehand.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-copywriting-framework (Grade: C)

Get detailed information about a specific copywriting framework for a network

Parameters (JSON Schema)

- network (required): Social media network (twitter, instagram, linkedin, tiktok, youtube, facebook)
- framework (required): Framework name (aida, pas, bab, 4cs, uuuu, pppp, slap, app, storybrand)
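Since both parameters are closed enumerations, an agent can pre-validate its arguments against the listed values before issuing the call. A minimal sketch; the value sets are copied from the parameter descriptions above, and the helper function is hypothetical:

```python
# Valid values as enumerated in the parameter descriptions.
NETWORKS = {"twitter", "instagram", "linkedin", "tiktok", "youtube", "facebook"}
FRAMEWORKS = {"aida", "pas", "bab", "4cs", "uuuu", "pppp", "slap", "app", "storybrand"}

def build_arguments(network: str, framework: str) -> dict:
    """Validate and assemble arguments for get-copywriting-framework."""
    network, framework = network.lower(), framework.lower()
    if network not in NETWORKS:
        raise ValueError(f"unsupported network: {network!r}")
    if framework not in FRAMEWORKS:
        raise ValueError(f"unsupported framework: {framework!r}")
    return {"network": network, "framework": framework}

args = build_arguments("LinkedIn", "AIDA")
```

This client-side check matters here because the Usage Guidelines score below notes the description never states the prerequisite of knowing valid framework names beforehand.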
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full disclosure burden. It states the tool retrieves 'detailed information' but does not clarify what that includes (templates, examples, structure?), nor does it mention caching, rate limits, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence of 11 words that leads with the action verb. While appropriately sized for the input schema complexity, the extreme brevity leaves gaps given the lack of output schema and annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple 2-parameter structure with complete schema coverage, the description is minimally adequate. However, it lacks explanation of return values (no output schema exists) and fails to contextualize the tool's relationship to listing alternatives, leaving operational gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the parameters are fully documented in the schema itself. The description references both 'network' and 'framework' concepts but adds no additional semantic value (format details, valid combinations) beyond what the schema already provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and clearly identifies the resource (copywriting framework) and scope (for a network). It implicitly distinguishes from sibling 'list-copywriting-frameworks' by emphasizing 'specific'. However, it lacks detail on what 'detailed information' entails.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'list-copywriting-frameworks'. It fails to mention the prerequisite of knowing valid framework names beforehand or when retrieval versus listing is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_countries (Grade: C)

Get list of supported countries

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but discloses nothing about return format (ISO codes vs. names), data freshness, list size, or rate limiting. The agent cannot infer if this is static data or dynamically filtered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no wasted words; appropriately brief for a simple enumeration tool. However, the extreme brevity comes at the cost of omitting useful context, preventing a perfect score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a zero-parameter list tool, but lacks description of return structure (objects vs. strings) and meaning of 'supported' in this specific domain. Without output schema or annotations, the description should have included at least a hint about the return format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters exist (empty schema), so per calibration guidelines, baseline is 4. The description appropriately does not invent parameters, and the schema coverage is trivially complete at 100%.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a basic verb ('Get') and resource ('list of supported countries'), but 'supported' lacks domain context—supported for what purpose? It does not differentiate from siblings like get_currencies or clarify the service domain, leaving the scope ambiguous despite being grammatically clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives (e.g., whether this applies to shipping, billing, or content targeting). No mention of prerequisites, caching considerations, or related sibling tools that might return country-related data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_credits (Grade: B)

Get your account's credit balance

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure, yet it fails to mention whether this is a read-only operation, what format the balance is returned in, rate limits, or whether the balance is real-time or cached. 'Get' implies reading but lacks explicit safety or behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of a single, efficient sentence that immediately communicates the tool's purpose without redundant words or wasted space. It is appropriately front-loaded with the essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (zero parameters) and lack of output schema or annotations, the description minimally suffices by identifying what is retrieved. However, it should ideally describe the return value format (e.g., numeric balance, currency) since no output schema exists to document this.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, establishing a baseline score of 4 per the evaluation rules. The description correctly implies no user input is needed by focusing solely on the retrieval action without referencing parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (Get) and resource (account's credit balance). However, it uses the generic verb 'Get' rather than a more specific alternative like 'Retrieve', and does not explicitly differentiate from siblings, though the credit-related functionality is unique among the listed tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites such as authentication requirements or when credit checks should be performed. It solely states what the tool does without contextual usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_currencies (Grade: B)

Get list of supported currencies

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'Get' implies a read-only operation, the description doesn't clarify whether this returns a static or dynamic list, what fields are included (codes, symbols, names), rate limiting, or whether the data is cached.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately brief at five words, with no redundant or wasteful language. However, it stops short of being maximally useful—a single additional phrase indicating the return format or scope would improve utility without sacrificing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple parameterless tool, the description covers the bare minimum. However, given the lack of an output schema, it should ideally describe what the returned list contains (e.g., currency codes, full names, symbols) and the approximate size of the dataset to set caller expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, which establishes a baseline score of 4 per the rubric. The description appropriately doesn't invent parameters that don't exist, though it could mention that no filtering is required or supported.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a clear verb ('Get') and resource ('list of supported currencies'), making the basic function obvious. However, it lacks specificity about what system or service these currencies are 'supported' by, and doesn't explicitly differentiate from siblings like get_countries or get_credits, though the resource name implies the distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites, caching considerations, or data freshness. Given it's a parameterless lookup tool, some context about when this should be called (e.g., before currency conversion operations) would be helpful.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_domain_keywords (Grade: C)

Get keywords that a domain ranks for

Parameters (JSON Schema)

- num (optional): Number of results to return (max 1000)
- domain (required): Domain to analyze (e.g., example.com)
- country (optional): Country code (empty string for Global, 'us' for United States, etc.)
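Because `num` is documented with a maximum of 1000 and an empty `country` string selects Global results, a defensive caller can normalize both before building the request. A sketch with illustrative values; the helper name is hypothetical:

```python
def domain_keywords_arguments(domain: str, num: int = 100, country: str = "") -> dict:
    """Assemble arguments for get_domain_keywords."""
    return {
        "domain": domain,               # e.g. "example.com"
        "num": max(1, min(num, 1000)),  # schema documents a max of 1000
        "country": country,             # "" = Global, "us" = United States, etc.
    }

args = domain_keywords_arguments("example.com", num=5000)
```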
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to deliver. It does not explain what 'ranks for' entails (position data, search volume, SERP features), data freshness, rate limits, or the structure of the returned keyword list. The only behavioral hint is the implied read-only nature of the operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at only six words, with no filler or redundancy. It is appropriately front-loaded with the action verb. However, it may be excessively terse given the lack of supporting annotations or output schema, though this concerns completeness rather than conciseness itself.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of annotations and output schema, the description is insufficiently complete. It fails to describe the return values (what data points accompany each keyword?), pagination behavior, or geographic constraints beyond the country parameter. For a tool with multiple siblings, additional context about the data source or ranking methodology would be necessary for robust agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for all three parameters (domain, num, country). The description adds no additional parameter semantics beyond the schema, but given the high schema coverage, this meets the baseline expectation without penalty or credit.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and clearly identifies the resource ('keywords that a domain ranks for'), which effectively communicates the SEO research purpose. However, it does not explicitly differentiate from siblings like 'get_keyword_data' or 'get_url_keywords', relying on the tool name to convey the domain-level scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as 'get_url_keywords' (page-level) or 'get_related_keywords'. There are no stated prerequisites, exclusions, or workflow recommendations to help the agent select this tool correctly from the extensive list of keyword-related siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_domain_traffic (Grade: C)

Get traffic metrics for a domain

Parameters (JSON Schema)

- domain (required): Domain to analyze (e.g., example.com)
- country (optional): Country code (empty string for Global, 'us' for United States, etc.)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Fails to disclose whether this is read-only, what specific traffic data is returned (visits, pageviews, unique visitors), or any rate limiting. Only implies a retrieval operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely brief at five words. While efficient and front-loaded, the extreme brevity leaves significant gaps given the lack of output schema or annotations. No wasted words, but arguably undersized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema or annotations, the description should explain what traffic metrics are returned or behavioral constraints. As is, it leaves the agent blind to the return format and data scope despite the simple 2-parameter input.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for both parameters ('domain' and 'country'), so the schema carries the semantic weight. Description adds no parameter-specific context beyond the schema, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States the basic action ('Get') and resource ('traffic metrics for a domain'), but lacks specificity on what distinguishes this from sibling tool 'get_url_traffic' or what metrics are included. Adequate but minimal.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus 'get_url_traffic' or other domain analysis tools like 'get_domain_keywords'. No prerequisites or conditions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
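
To make the critiques above concrete, here is a hedged sketch of how get_domain_traffic's definition could be rewritten to disclose behavior and disambiguate it from get_url_traffic. The specific metric names and the read-only claim are assumptions; the server documents neither.

```python
# Hypothetical rewrite of the get_domain_traffic tool definition. The
# metrics named (visits, pageviews) and the read-only claim are
# assumptions -- the actual server does not document them.
GET_DOMAIN_TRAFFIC = {
    "name": "get_domain_traffic",
    "description": (
        "Read-only: estimate traffic metrics (e.g. monthly visits and "
        "pageviews) aggregated across an entire domain. Use get_url_traffic "
        "instead when you need metrics for a single page."
    ),
    "inputSchema": {
        "type": "object",
        "required": ["domain"],
        "properties": {
            "domain": {
                "type": "string",
                "description": "Domain to analyze (e.g., example.com)",
            },
            "country": {
                "type": "string",
                "description": "Country code ('' for Global, 'us' for United States, etc.)",
            },
        },
    },
}
```

A description shaped like this answers the Behavior, Purpose, and Usage Guidelines critiques in one sentence each, without lengthening the schema.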

get_keyword_data (Grade: B)

Get Volume, CPC and competition for a set of keywords

Parameters (JSON Schema):
- country (optional): Country code (empty string for Global, 'us' for United States, etc.)
- currency (optional, default 'myr'): Currency code (e.g., 'myr' for Malaysian Ringgit)
- keywords (required): List of keywords to analyze

Behavior: 3/5

With no annotations provided, the description carries the full burden. It successfully discloses what data is returned (Volume, CPC, competition), but omits operational details such as data freshness, API costs/credits consumed, rate limits, or error-handling behavior.

Conciseness: 5/5

A single sentence of ten words, front-loaded with the action verb 'Get'. Zero redundancy or waste. Despite brevity that impacts completeness, the structure is exemplary for conciseness.

Completeness: 3/5

Lacking both an output schema and annotations, the description only partially compensates by listing the three metrics returned. Missing context includes: whether metrics are estimates or exact, geographic coverage implications, and the structure/format of the returned data.

Parameters: 3/5

The input schema has 100% description coverage (country, currency, keywords), establishing a baseline score of 3. The description adds no parameter-specific guidance (e.g., valid country code formats, optimal keyword list size), relying entirely on the schema documentation.

Purpose: 4/5

The description clearly states the specific metrics retrieved (Volume, CPC, competition) and the target (keywords). However, it does not explicitly distinguish this metric-retrieval tool from sibling discovery tools like 'get_related_keywords' or 'get_pasf_keywords' that may also return keyword data.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives. The description fails to indicate that this is for analyzing provided keywords versus discovering new ones (contrast with 'get_related_keywords') or analyzing domain-specific keywords (contrast with 'get_domain_keywords').

get-network-categories-for-hooks (Grade: C)

Get all available categories for a specific social media network

Parameters (JSON Schema):
- network (required): Social media network name (twitter, instagram, linkedin, tiktok, youtube)

Behavior: 2/5

No annotations are provided, so the description carries the full disclosure burden. It fails to describe what the categories look like (enums vs. free text), their approximate volume, or whether results are cached. The read-only nature is implied by 'Get' but not confirmed.

Conciseness: 4/5

A single sentence of ten words is appropriately brief and front-loaded with the verb 'Get'. No redundant filler, though brevity comes at the cost of omitting the 'hooks' context.

Completeness: 3/5

For a single-parameter tool with complete schema coverage and no output schema, the description is minimally viable. However, it misses the domain context that these are content 'hook' categories (angles/templates), which is essential for an agent to select this over other category-fetching tools.

Parameters: 3/5

The input schema has 100% coverage with specific examples (twitter, instagram, linkedin, tiktok, youtube), so the description does not need to add parameter semantics. It meets the baseline expectation by mentioning 'social media network'.

Purpose: 3/5

The description states it gets 'categories for a specific social media network' but critically omits 'hooks' (from the tool name), leaving the domain purpose ambiguous. It does not explain what these categories are used for or how they differ from the 'find-hooks' sibling tool.

Usage Guidelines: 2/5

No guidance is provided on when to use this versus 'find-hooks' or other content generation tools. No prerequisites, rate-limit warnings, or sequencing advice (e.g., whether to call this before finding hooks).

get_pasf_keywords (Grade: B)

Get 'People Also Search For' keywords based on a seed keyword

Parameters (JSON Schema):
- num (optional): Number of results to return (max 1000)
- keyword (required): Seed keyword to find PASF terms for

Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure but only implies read-only safety via the verb 'Get'. It fails to disclose rate limits, caching behavior, authentication requirements, or the structure/format of returned data.

Conciseness: 5/5

The description is a single, efficient 11-word sentence that front-loads the action ('Get') and immediately identifies the resource. There is no redundant or wasted language.

Completeness: 3/5

For a simple two-parameter tool with a flat structure, the description adequately identifies the core function. However, given the absence of an output schema and annotations, it lacks necessary context about what data structure or fields the tool returns, leaving the agent uncertain about the output format.

Parameters: 3/5

The input schema has 100% description coverage ('Seed keyword...' and 'Number of results...'), so the description is not required to compensate. It mentions 'seed keyword', which aligns with the schema, and adds no additional syntax, format examples, or semantic constraints beyond what the schema already provides.

Purpose: 4/5

The description clearly states that the tool retrieves 'People Also Search For' keywords, using a specific SEO term (PASF) that distinguishes it from the sibling tool 'get_related_keywords'. However, it doesn't explicitly clarify when to choose this over similar keyword tools like 'get_keyword_data'.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives such as 'get_related_keywords' or 'get_keyword_data'. There are no stated prerequisites, exclusions, or conditions for use.
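
One way to close this gap is to append an explicit "use X when Z" line to each sibling's description. A hedged sketch of such guidance for the keyword family, with a toy router showing how an agent could act on it (the wording is illustrative, not from the server):

```python
# Illustrative disambiguation lines for the overlapping keyword tools.
KEYWORD_TOOL_GUIDANCE = {
    "get_keyword_data": "You already have keywords and need Volume, CPC and competition for them.",
    "get_related_keywords": "You want to discover new keywords related to a seed keyword.",
    "get_pasf_keywords": "You want Google's 'People Also Search For' suggestions for a seed keyword.",
}

def pick_keyword_tool(have_keywords: bool, want_pasf: bool) -> str:
    """Toy router: choose a keyword tool per the guidance table above."""
    if have_keywords:
        return "get_keyword_data"
    return "get_pasf_keywords" if want_pasf else "get_related_keywords"
```

Even one such sentence per description would let an agent make this routing decision without trial and error.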

get-text-before-fold (Grade: C)

Truncate text to fit within the 'before fold' character limits for each social media platform for previewing purposes

Parameters (JSON Schema):
- text (required): The text content to truncate
- platform (required): Social media platform
- contentType (optional, default 'post'): Content type (only relevant for YouTube: 'title' or 'description')

Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It fails to specify the return format (string vs. object), whether truncation indicators (an ellipsis) are added, or whether the operation is idempotent.

Conciseness: 4/5

The description is a single, front-loaded sentence that efficiently conveys the core function. It is slightly wordy with 'for previewing purposes' at the end, but generally avoids waste.

Completeness: 3/5

Given the simple three-parameter schema with full coverage and no output schema, the description adequately covers the primary use case. However, it should disclose the return value format and truncation behavior to be complete.

Parameters: 3/5

The input schema has 100% description coverage with clear enums. The description provides the high-level context but does not add semantic details beyond what the schema already documents (e.g., it doesn't clarify the YouTube-specific contentType behavior).

Purpose: 4/5

The description uses a specific verb ('Truncate') and resource ('text'), clearly indicating it shortens content to fit platform-specific 'before fold' limits. However, it does not explicitly differentiate it from the similar sibling 'validate-content-before-fold'.

Usage Guidelines: 2/5

While it mentions the context ('for previewing purposes'), it provides no guidance on when to use this tool versus alternatives, particularly the sibling 'validate-content-before-fold', which likely checks limits without truncating.
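
For illustration, the truncation behavior the description leaves unstated might look like the following sketch. The per-platform limits and the ellipsis convention are assumptions; the server documents neither.

```python
# Hypothetical "before fold" limits -- the real values used by
# get-text-before-fold are not documented anywhere in the schema.
BEFORE_FOLD_LIMITS = {
    "twitter": 280,
    "instagram": 125,
    "linkedin": 210,
    "tiktok": 150,
    "youtube": {"title": 70, "description": 157},
}

def truncate_before_fold(text: str, platform: str, content_type: str = "post") -> str:
    """Cut text at the platform's before-fold limit, appending an ellipsis."""
    limit = BEFORE_FOLD_LIMITS[platform]
    if isinstance(limit, dict):
        # YouTube is the only platform where contentType matters.
        limit = limit[content_type]
    if len(text) <= limit:
        return text
    return text[: limit - 1].rstrip() + "…"
```

Stating this much in the description (limits used, ellipsis added, string returned) would resolve the Behavior and Completeness critiques above.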

get_url_keywords (Grade: C)

Get keywords that a URL ranks for

Parameters (JSON Schema):
- num (optional): Number of results to return (max 1000)
- url (required): URL to analyze
- country (optional): Country code (empty string for Global, 'us' for United States, etc.)

Behavior: 2/5

With no annotations provided, the description carries full responsibility for behavioral disclosure but provides none. It doesn't indicate whether this is a read-only operation, mention rate limits or data freshness, or say what the return structure contains. The word 'Get' implies safety but doesn't confirm it.

Conciseness: 4/5

Extremely concise at seven words. No wasted language, and the key verb appears first. However, given the absence of annotations and an output schema, this brevity may constitute under-specification rather than efficient communication.

Completeness: 2/5

Inadequate for a tool with no annotations and no output schema. With many similar SEO siblings (get_domain_keywords, get_url_traffic, get_keyword_data), the description fails to disambiguate use cases or hint at return value structure. It should explain what 'ranks for' means in this context.

Parameters: 3/5

The input schema has 100% description coverage, documenting all three parameters (url, num, country) adequately. The description mentions 'URL', which aligns with the required parameter, but adds no additional context about parameter interactions or format constraints beyond the schema.

Purpose: 4/5

The description clearly states the action (get keywords) and scope (that a URL ranks for), distinguishing it from sibling tools like get_domain_keywords by specifying URL-level analysis. However, it assumes the agent understands that 'ranks for' implies search engine rankings without specifying the data source.

Usage Guidelines: 2/5

No explicit guidance on when to use this tool versus siblings like get_domain_keywords or get_keyword_data. No mention of prerequisites (e.g., valid URL format) or when not to use it. The agent must infer applicability from the name alone.

get_url_traffic (Grade: C)

Get traffic metrics for a URL

Parameters (JSON Schema):
- url (required): URL to analyze
- country (optional): Country code (empty string for Global, 'us' for United States, etc.)

Behavior: 2/5

No annotations are provided, so the description carries the full burden. While 'Get' implies read-only, the description fails to specify what traffic metrics are returned (visits, pageviews, unique visitors), the time range covered, or data freshness. It also omits error handling for invalid URLs.

Conciseness: 4/5

Extremely concise at six words with no filler. However, the brevity crosses into under-specification given the lack of an output schema and the complex sibling ecosystem. Efficient structure but insufficient content.

Completeness: 2/5

Inadequate for the tool's context. With siblings offering domain-level, keyword-level, and backlink analysis, the description should clarify the URL-specific scope and hint at return value structure (since no output schema exists). Currently insufficient for confident tool selection.

Parameters: 3/5

The schema has 100% description coverage ('URL to analyze' and country code details), so the description does not need to compensate. It adds no parameter-specific guidance, meeting the baseline for well-documented schemas.

Purpose: 3/5

States the basic action (get traffic metrics) and target (URL), but fails to differentiate from the sibling tool `get_domain_traffic` or clarify the URL-specific scope. Slightly better than tautology, but it lacks the specificity expected in this tool ecosystem.

Usage Guidelines: 1/5

Provides no guidance on when to use this tool versus alternatives like `get_domain_traffic` or `get_url_keywords`. No mention of prerequisites, rate limits, or data availability constraints.

ideogram_v3 (Grade: C)

Ideogram v3 Quality — AI image generation with best-in-class text rendering

Parameters (JSON Schema):
- mask (optional): A black and white image. Black pixels are inpainted, white pixels are preserved. The mask will be resized to match the image size.
- seed (optional): Random seed. Set for reproducible generation.
- image (optional): An image file to use for inpainting. You must also use a mask.
- prompt (required): Text prompt for image generation
- resolution (optional, default None): Resolution. Overrides aspect ratio. Ignored if an inpainting image is given.
- style_type (optional, default None): The styles help define the specific aesthetic of the image you want to generate.
- aspect_ratio (optional, default 1:1): Aspect ratio. Ignored if a resolution or inpainting image is given.
- style_preset (optional, default None): Apply a predefined artistic style to the generated image (V3 models only).
- magic_prompt_option (optional, default Auto): Magic Prompt will interpret your prompt and optimize it to maximize variety and quality of the images generated. You can also use it to write prompts in different languages.
- style_reference_images (optional): A list of images to use as style references.

Behavior: 2/5

With no annotations provided, the description carries the full burden, yet it only mentions quality characteristics ('best-in-class text rendering'). It fails to disclose operational traits: output format, rate limits, cost implications, latency, or whether generated images are persisted vs. transient.

Conciseness: 4/5

Extremely concise single-sentence format with no wasted words. The key information ('AI image generation') is front-loaded. However, for a 10-parameter tool with complex capabilities, this brevity may be insufficient rather than optimally concise.

Completeness: 2/5

For a complex image generation tool supporting inpainting, style references, and resolution controls, the description is inadequate. It omits major capabilities visible in the schema (inpainting, style_reference_images) and provides no output format guidance despite no output schema being present.

Parameters: 3/5

Schema description coverage is 100%, so the schema adequately documents all 10 parameters (e.g., mask logic, seed purpose). The description adds no parameter-specific guidance, meeting the baseline expectation when schema coverage is complete.

Purpose: 4/5

The description clearly identifies the core function ('AI image generation') and distinguishes it from siblings (all text/data scraping tools). The 'best-in-class text rendering' phrase effectively differentiates this image tool. However, it doesn't clarify that the tool supports both the generation and inpainting modes evident in the schema.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives (though none exist among siblings), nor on when to use specific features like inpainting (mask + image) versus standard generation. No mention of prerequisites like image URLs for inpainting mode.
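
The parameter interactions buried in the schema (resolution overrides aspect_ratio; an inpainting image overrides both and requires a mask) could be surfaced as client-side validation. The sketch below is an inferred reading of the schema notes, not documented server behavior; in particular, the assumption that a mask without an image is invalid is not stated by the schema.

```python
def resolve_ideogram_args(args: dict) -> dict:
    """Normalize ideogram_v3 arguments per the precedence the schema implies:
    inpainting image > resolution > aspect_ratio."""
    if "prompt" not in args:
        raise ValueError("'prompt' is the only required parameter")
    if ("image" in args) != ("mask" in args):
        # Schema note: "You must also use a mask" when an image is given.
        raise ValueError("inpainting needs both 'image' and 'mask'")
    out = dict(args)
    if "image" in out:
        # Output size follows the inpainting image; size hints are ignored.
        out.pop("resolution", None)
        out.pop("aspect_ratio", None)
    elif "resolution" in out:
        out.pop("aspect_ratio", None)  # resolution overrides aspect ratio
    else:
        out.setdefault("aspect_ratio", "1:1")  # documented default
    return out
```

A description that stated these three precedence rules directly would spare agents from discovering them through failed calls.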

list-archetypes (Grade: A)

Get a list of all available voice archetypes with their names and descriptions

Parameters (JSON Schema): none

Behavior: 3/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully indicates what data is returned ('names and descriptions'), but omits other behavioral traits like pagination, rate limiting, or caching behavior that would help an agent understand operational constraints.

Conciseness: 5/5

The description consists of a single efficient sentence that front-loads the action ('Get a list') and immediately specifies the scope and return content. There is no redundant or wasted language; every word earns its place.

Completeness: 4/5

Given the tool's low complexity (no input parameters) and lack of an output schema, the description adequately compensates by specifying the structure of returned items ('names and descriptions'). For a simple catalog-listing tool, this provides sufficient context for invocation, though mentioning pagination would improve completeness.

Parameters: 4/5

The tool has zero parameters and 100% schema description coverage of those zero parameters. Per the baseline rules for parameterless tools, this earns a default score of 4, as there are no parameter semantics to clarify beyond what the empty schema already communicates.

Purpose: 4/5

The description uses the specific verb 'Get a list' and resource 'voice archetypes' to clearly define the tool's function. It implicitly distinguishes itself from the sibling 'get-archetype' by emphasizing 'all available' (bulk retrieval vs. singular), though it doesn't explicitly name the alternative.

Usage Guidelines: 3/5

The description implies a usage context (retrieving the complete catalog of archetypes) through the phrase 'all available', but lacks explicit when-to-use guidance or a comparison to the singular 'get-archetype' alternative. Users must infer when to prefer this over the specific getter.

list-copywriting-frameworks (Grade: B)

Get a list of available copywriting frameworks and their descriptions for a specific social media network

Parameters (JSON Schema):
- network (required): Social media network (twitter, instagram, linkedin, tiktok, youtube, facebook)

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully clarifies that the tool returns both framework names and their descriptions (not just identifiers), but lacks other behavioral details such as pagination behavior, rate limits, or caching characteristics typical for a read-only list operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence of 14 words with no redundancy. It is front-loaded with the action ('Get a list') and immediately qualifies the resource and scope, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single parameter, clear list operation) and lack of output schema, the description is reasonably complete. It clarifies that 'descriptions' are included in the return payload, which helps the agent understand the data richness. A minor gap remains regarding the return structure (array vs object) or quantity limits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the 'network' parameter is fully documented with valid values). The description references 'specific social media network' which aligns with the parameter, but adds no additional semantic context (formatting rules, case sensitivity, default behavior) beyond what the schema already provides. Baseline 3 is appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves copywriting frameworks with their descriptions for a specific network, using specific verbs ('Get a list'). However, it does not explicitly differentiate from the sibling tool 'get-copywriting-framework' (singular), which likely retrieves a specific framework versus this listing operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get-copywriting-framework' or other content generation tools. There are no stated prerequisites, exclusions, or workflow guidance (e.g., 'use this to browse frameworks before selecting one').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
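The "browse frameworks before selecting one" workflow that the guidance gap describes can be sketched as follows. This is purely illustrative: `call_tool` stands in for whatever MCP client invocation is in use, and the list's item shape and the singular tool's `framework` parameter are assumptions, not documented contracts.

```python
# Hypothetical browse-then-select flow across the two sibling tools.
# call_tool, the item shape ({"name": ...}), and the singular tool's
# "framework" parameter are all assumptions for illustration.
def pick_framework(call_tool, network, keyword):
    # Step 1: browse the catalog of frameworks for the target network.
    frameworks = call_tool("get-copywriting-frameworks", {"network": network})
    # Step 2: choose one entry by keyword match on its name.
    match = next(
        (f for f in frameworks if keyword.lower() in f["name"].lower()), None
    )
    if match is None:
        return None
    # Step 3: fetch the full framework via the singular sibling tool.
    return call_tool(
        "get-copywriting-framework",
        {"network": network, "framework": match["name"]},
    )
```

A single sentence of guidance in either description ("use the plural tool to browse, the singular to fetch one") would make this sequencing obvious to an agent.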

scrapecreators_ad_details (Grade: D)

Ad Details

Parameters (JSON Schema):
- url (required): The url of the ad
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, yet the description discloses nothing about behavioral traits: whether this performs web scraping, API calls, or database lookups; what data structure it returns; whether there are rate limits or authentication requirements; or if there are side effects. The agent has zero behavioral context beyond the tool name.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While only two words long, this represents under-specification rather than efficient conciseness. As with the calibration example 'Process', brevity here indicates missing information rather than disciplined editing. Every sentence should earn its place, but this description barely constitutes a sentence and delivers no informational value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the sibling ecosystem (multiple ad-related tools with overlapping names) and the lack of output schema, the description should explicitly clarify scope and return value. It fails completely to address how this tool differs from 'scrapecreators_ad_details_get' or what specific ad attributes are retrieved.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (the single 'url' parameter is documented as 'The url of the ad'), the description is not required to compensate. The description adds no parameter-specific context, but the schema adequately defines the input, meeting the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Ad Details' is essentially a tautology of the tool name (scrapecreators_ad_details), merely removing the prefix and underscores. While it confirms the domain involves advertisements, it fails to specify what 'details' entails, what action is performed (scraping, fetching, analyzing), or how it differs from siblings like 'scrapecreators_get_ad' or 'scrapecreators_ad_details_get'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided for when to use this tool versus the numerous alternatives (scrapecreators_get_ad, scrapecreators_ad_details_get, scrapecreators_search_ads, etc.). Given the high similarity between sibling names, the absence of selection criteria creates significant ambiguity for the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
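As a counterfactual, a description that would address the gaps scored above might look like the sketch below. The wording is entirely hypothetical: the read-only claim, the returned fields, and the relationship to scrapecreators_search_ads are assumptions chosen to illustrate the rubric, not vendor documentation.

```python
# A hypothetical rewrite of the two-word "Ad Details" description.
# Every behavioral claim in the text (read-only, returned fields, the
# search-first workflow) is an assumption made to illustrate the rubric.
improved_tool = {
    "name": "scrapecreators_ad_details",
    "description": (
        "Fetch metadata for a single public ad from its ad-library URL "
        "(read-only, no side effects). Returns creative text, media links, "
        "and run dates. Obtain URLs with scrapecreators_search_ads first; "
        "call this tool only once you hold a specific ad URL."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "url": {"type": "string", "description": "The url of the ad"}
        },
        "required": ["url"],
    },
}
```

Note how the rewrite supplies a verb, discloses behavior, characterizes the return value, and names a sibling tool for disambiguation: exactly the dimensions scored lowest here.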

scrapecreators_ad_details_get (Grade: D)

Ad Details

Parameters (JSON Schema):
- url (required): The url of the ad
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not specify supported ad platforms, what data is returned, rate limits, or authentication requirements, leaving the agent blind to operational characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, the two-word description constitutes under-specification rather than effective conciseness; it provides no actionable information beyond the tool name itself and so fails to earn its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Completely inadequate for a data retrieval tool with no output schema. The description omits what ad details are returned, how to obtain valid URLs, expected response formats, and error conditions, providing no operational context for successful invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for its single 'url' parameter, establishing baseline sufficiency. The description adds no semantic value beyond the schema, but does not need to given the complete schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Ad Details' is tautological, merely restating the tool name without adding specificity. It fails to distinguish this tool from similar siblings like 'scrapecreators_ad_details' or 'scrapecreators_get_ad', leaving the agent uncertain about which to select for specific retrieval scenarios.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided regarding when to use this tool versus alternatives, nor are prerequisites mentioned. Given the crowded namespace with multiple ad-related tools, the absence of selection criteria forces the agent to guess based on naming conventions alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_age_and_gender (Grade: D)

Age and Gender

Parameters (JSON Schema):
- url (required): URL to users social profile
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, yet the description fails to disclose behavioral traits such as data source, scraping methodology, privacy implications, rate limits, or what happens when data is unavailable. The term 'scrape' in the name implies external data fetching, but the description does not confirm this or explain failure modes.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While the two-word description is brief, it represents under-specification rather than efficient communication, offering no actionable guidance to earn its place. The extreme brevity leaves critical gaps in understanding rather than delivering high signal density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations, output schema, and the complex domain of social media scraping, the description is grossly incomplete. It omits expected return value structure, data availability limitations, and supported platform coverage necessary for safe invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for its single `url` parameter, which is adequately documented as 'URL to users social profile'. The description adds no additional semantic context, meeting the baseline expectation when the schema is self-documenting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Age and Gender' is a static noun phrase that restates the concept embedded in the tool name without specifying the action (scraping, retrieving, inferring) or target resource. It fails to differentiate from siblings like `scrapecreators_basic_profile` or `scrapecreators_users_audience_demographics` which may also return demographic data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as `scrapecreators_basic_profile` or `scrapecreators_users_audience_demographics`. There are no stated prerequisites, constraints, or conditions for invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_amazon_shop (Grade: C)

Amazon Shop

Parameters (JSON Schema):
- url (required): URL to Amazon Shop page
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description bears the full burden of behavioral disclosure, yet it reveals nothing about side effects, rate limits, return format, or authentication requirements. The agent cannot determine if this is a safe read-only operation or what data structure to expect from the scrape.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While extremely brief at only two words, the description represents under-specification rather than efficient conciseness. It fails to front-load any actionable information about the tool's functionality, expected return values, or usage constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having only one parameter, the tool description lacks any explanation of what data is extracted from Amazon Shop pages or how it differs from product-specific siblings. Without an output schema, the description should characterize the expected return values, which it completely omits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema achieves 100% description coverage for its single 'url' parameter, documenting that it expects a URL to an Amazon Shop page. The description adds no additional semantic context (such as URL format examples or validation rules), but the complete schema coverage meets the baseline requirement.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Amazon Shop' merely restates the tool name (minus the 'scrapecreators_' prefix) without specifying what action is performed (scraping, retrieving, validating) or what data is returned. It fails to distinguish this tool from siblings like 'scrapecreators_tiktok_shop' or 'scrapecreators_shop_products'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as 'scrapecreators_product_details' or 'scrapecreators_shop_products'. No prerequisites, exclusions, or selection criteria are mentioned to aid the agent in tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_basic_profile (Grade: D)

Basic Profile

Parameters (JSON Schema):
- userId (optional): Instagram user id
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full behavioral burden, yet it discloses nothing. It does not say whether the operation is read-only, whether rate limits or caching apply, or what data fields are returned (e.g., bio, follower count).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, the description is inappropriately sized—two words cannot adequately describe a tool among 80+ siblings. This is under-specification rather than efficient conciseness; no sentences exist to earn their place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich ecosystem of sibling scraping tools and lack of output schema or annotations, the description is completely inadequate. It fails to define the scope of 'basic' profile data or distinguish this tool's utility within the complex scrapecreators suite.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (userId: 'Instagram user id'), establishing a baseline of 3. The description itself adds no semantic value about the parameter, but does not need to compensate given the complete schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Basic Profile' is essentially a tautology of the tool name (scrapecreators_basic_profile) converted to title case. It fails to specify the action (retrieve/fetch/scrape), the platform (Instagram), or what 'basic' entails compared to other profile tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus sibling alternatives like scrapecreators_profile_posts, scrapecreators_profile_photos, or scrapecreators_instagram. The description does not indicate prerequisites or filtering capabilities.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_bluesky (Grade: D)

Bluesky

Parameters (JSON Schema):
- handle (required): Bluesky handle
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosing behavioral traits, yet it states nothing about whether the operation is read-only or destructive, what data structure is returned, or any rate limiting considerations. The agent cannot determine if this retrieves public profiles, requires authentication, or performs writes.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, the single-word description represents under-specification rather than efficient conciseness, as it fails to front-load any actionable information about the tool's function. Every sentence must earn its place by conveying essential operational context, which this placeholder word does not accomplish.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simple single-parameter structure but lack of output schema or annotations, the description should explain what Bluesky data is retrieved (profile info, posts, metrics), but it provides no such context. This leaves critical gaps in the agent's understanding of the tool's utility and return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema provides 100% coverage with a single required parameter 'handle' described as 'Bluesky handle', so the description does not need to compensate for schema gaps per the baseline rules. However, the description adds no semantic context beyond the schema, such as expected handle format (with or without @) or validation rules.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description consists solely of the word 'Bluesky', which merely restates the platform component of the tool name without specifying the action (scrape, retrieve, search) or the resource (profile, posts, etc.). It fails to distinguish this tool from siblings like scrapecreators_twitter or scrapecreators_instagram, leaving the agent uncertain about what data it actually retrieves from Bluesky.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to select this tool versus alternative social media scraping tools in the scrapecreators_* family. It lacks any mention of prerequisites, required permissions, or specific use cases where this tool is preferred over other platform-specific scrapers.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_board (Grade: D)

Board

Parameters (JSON Schema):
- url (required): The URL of the board to get
- trim (optional): Set to true for a trimmed down version of the response
- cursor (optional): The cursor to get the next page of results
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not indicate whether this is a read-only operation, what data is returned, rate limits, or pagination behavior beyond the bare existence of a cursor parameter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While the single word is concise, it is inappropriately sized for the tool's complexity and represents under-specification rather than efficient communication. It lacks sentence structure and fails to front-load any actionable information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters including pagination (cursor) and a sibling namespace suggesting Pinterest functionality, the description is completely inadequate. With no output schema provided, the description should explain return values but provides no context whatsoever.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (url, trim, cursor are all documented), establishing a baseline score of 3. The description 'Board' adds no additional semantic context, examples, or format guidance beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Board' is a tautology that restates the resource type without a verb or action. It fails to specify what the tool does (scrapes? retrieves? lists?) and does not distinguish this tool from the sibling 'scrapecreators_user_boards'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'scrapecreators_user_boards' or 'scrapecreators_pin'. There are no prerequisites, constraints, or exclusions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_channel_shorts (Grade: D)

Channel Shorts

Parameters (JSON Schema):
- sort (optional): Sort by newest or popular
- handle (optional): Can pass channelId or handle
- channelId (optional): Can pass channelId or handle
- continuationToken (optional): Continuation token to get more videos. Get 'continuationToken' from previous response.
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure, yet it states nothing about read-only status, rate limiting, data freshness, or the specific platform (YouTube implied by 'Shorts' but not explicit). The pagination behavior implied by continuationToken is undocumented.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (two words), this represents under-specification rather than efficient conciseness. The description is front-loaded with the resource type but lacks the necessary explanatory content to be useful, failing the 'every sentence should earn its place' standard.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 4-parameter tool with no output schema and no annotations, the description is completely inadequate. It omits the platform (YouTube), return value structure, authentication requirements, and differentiation from similar scraping tools in the extensive sibling list.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the parameters are adequately documented in the schema itself (sort options, handle/channelId interchangeability, continuation token usage). The description adds no parameter semantics, but the high schema coverage establishes a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Channel Shorts' is a tautology that restates the tool name without the 'scrapecreators_' prefix. It fails to specify the action (scrape? retrieve? list?) or distinguish from sibling tools like 'scrapecreators_channel_videos' or 'scrapecreators_trending_shorts'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives (e.g., when to use handle vs channelId, or when to prefer this over the general channel_videos tool). No mention of pagination workflow despite the continuationToken parameter.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_channel_videos (Grade: D)

Channel Videos

Parameters (JSON Schema):
- sort (optional): Sort by latest or popular
- handle (optional): YouTube channel handle
- channelId (optional): YouTube channel ID
- includeExtras (optional): This will get you the like + comment count and the description. To get the full details of the video, use the /v1/youtube/video endpoint. This will slow down the response slightly.
- continuationToken (optional): Continuation token to get more videos. Get 'continuationToken' from previous response.
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but discloses nothing about read-only status, rate limits, response time implications (mentioned only in schema), or what data structure is returned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two words is insufficient for a tool with 5 parameters and pagination logic. It lacks a front-loaded verb and fails to earn its place as a useful specification.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters (none required, implying complex combinations), pagination support, and no output schema, the description is completely inadequate. No guidance on parameter combinations or return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, documenting all 5 parameters including sort options and continuation tokens. The description adds no parameter context, meeting the baseline for well-documented schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Channel Videos' is a noun phrase that restates the resource without specifying the action (scrape, fetch, list) or platform (YouTube). It fails to distinguish from siblings like scrapecreators_channel_shorts or scrapecreators_youtube.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives, or how to choose between the handle and channelId parameters (both optional). No mention of pagination workflow despite continuationToken parameter.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
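The pagination workflow implied by continuationToken can be sketched as a simple loop. Here `call_tool` is a stand-in for the MCP client invocation, and the response keys ("videos", "continuationToken") are assumptions inferred from the schema's own wording rather than a documented contract.

```python
# Hypothetical pagination loop over scrapecreators_channel_videos.
# Response field names are assumed; only the continuationToken round-trip
# is grounded in the schema's own wording.
def fetch_all_videos(call_tool, handle, sort="latest", max_pages=5):
    videos, token = [], None
    for _ in range(max_pages):
        args = {"handle": handle, "sort": sort}
        if token:
            # Feed the token from the previous response back in, as the
            # schema instructs.
            args["continuationToken"] = token
        resp = call_tool("scrapecreators_channel_videos", args)
        videos.extend(resp.get("videos", []))
        token = resp.get("continuationToken")
        if not token:
            break  # no more pages available
    return videos
```

This is the workflow the description leaves unstated; a single sentence ("pass continuationToken from the prior response to page through results") would let an agent derive it without guessing.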

scrapecreators_clip (Grade: D)

Clip

Parameters (JSON Schema):
- url (required): Twitch clip URL
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosing behavioral traits, yet 'Clip' reveals nothing about whether this is a read-only operation, what data it returns, or any side effects. It fails to specify that it interacts with Twitch clips despite the parameter schema indicating this.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While the description is not verbose, it suffers from under-specification rather than genuine conciseness. A single word is insufficient to convey the tool's purpose, making it inappropriately sized for the information required.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Even with only one parameter and high schema coverage, the description fails to explain what the tool returns or accomplishes. Without an output schema or annotations, the description needed to compensate but provided no behavioral context whatsoever.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage (the 'url' parameter is documented as 'Twitch clip URL'), establishing a baseline of 3. The description 'Clip' adds no additional parameter context, syntax details, or usage examples beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description is simply 'Clip', which is a tautology that restates the tool name (scrapecreators_clip) without specifying what the tool actually does (e.g., retrieve metadata, download content, or extract data from Twitch clips).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the numerous sibling tools (e.g., scrapecreators_twitch, scrapecreators_video_info, scrapecreators_transcript). There is no mention of prerequisites or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
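The elements this review keeps flagging as missing (an action verb, the platform, return shape, and sibling differentiation) can be illustrated with a hypothetical rewrite of the tool definition. The return fields, credit behavior, and sibling guidance in this sketch are assumptions for illustration, not the vendor's actual contract:

```python
# Hypothetical rewrite of the scrapecreators_clip definition. The description
# states the action, platform, return shape, and read-only behavior; all
# specifics (fields returned, sibling guidance) are illustrative assumptions.
improved_clip_tool = {
    "name": "scrapecreators_clip",
    "description": (
        "Fetch metadata for a single Twitch clip (title, broadcaster, duration, "
        "view count) from its URL. Read-only; performs one live scrape per call. "
        "Use scrapecreators_transcript instead when you need spoken content."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "url": {"type": "string", "description": "Twitch clip URL"},
        },
        "required": ["url"],
    },
}
```

A description shaped like this lets an agent pick the tool from its siblings without inspecting the schema at all.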

scrapecreators_comments (D)

Comments

Parameters (JSON Schema):
  url (required): The URL of the post or reel to get comments from
  cursor (optional): The cursor to get more comments. Get 'cursor' from previous response.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of disclosure, yet it states nothing about read-only vs destructive behavior, rate limits, authentication requirements, or pagination behavior (though the cursor parameter implies pagination).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, the single-word description represents under-specification rather than effective conciseness. No useful information is conveyed in the limited space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of similarly-named sibling tools and 100% schema coverage, the description should differentiate this tool's specific function and expected return format, but provides neither.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with clear descriptions for both 'url' and 'cursor' parameters. The description adds no semantic value beyond the schema, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Comments' is a tautology that restates part of the tool name without specifying the action (retrieve, post, analyze?), target platform, or scope. It fails to distinguish from siblings like 'scrapecreators_comments_get' or 'scrapecreators_post_comments'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives (e.g., scrapecreators_comments_get, scrapecreators_post_comments), prerequisites, or platform limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_comments_get (D)

Comments

Parameters (JSON Schema):
  url (required): YouTube video URL
  order (optional): Order of comments
  continuationToken (optional): Continuation token to get more comments. Get 'continuationToken' from previous response.
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not indicate this is a read operation, fails to explain pagination behavior (despite the continuationToken parameter), and omits any mention of rate limits, data limits, or return structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While the single-word description is brief, this represents under-specification rather than effective conciseness. The word 'Comments' fails to earn its place by providing actionable information, forcing agents to rely entirely on the tool name and schema inference.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters, pagination support, and no output schema or annotations, the description is completely inadequate. It lacks essential context about the data source (YouTube), return format, and relationship to sibling comment-related tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The schema adequately documents the url and continuationToken parameters. The description adds no additional semantic value beyond what the schema already provides, nor does it clarify the acceptable values for the 'order' parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Comments' is a tautology that restates the tool's subject matter without specifying the action. It fails to indicate whether the tool retrieves, creates, or analyzes comments, and does not distinguish this getter variant from sibling tools like 'scrapecreators_comments' or 'scrapecreators_post_comments'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. Given the existence of similarly named siblings (scrapecreators_comments, scrapecreators_post_comments), the lack of differentiation or prerequisites makes selection impossible based on the description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_community (D)

Community

Parameters (JSON Schema):
  url (required): Community URL
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure, yet it states nothing about whether this is a read/write operation, what data structure it returns, rate limits, or authentication requirements. The agent has no information about side effects or resource costs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of a single word, which constitutes under-specification rather than effective conciseness. It lacks any sentences that could earn their place through information delivery, failing to meet the standard for appropriately sized descriptions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's singular purpose (fetching data about a community) and the existence of related sibling tools, the description provides insufficient context to distinguish its specific utility. The absence of an output schema and annotations further amplifies the description's failure to communicate what the tool actually retrieves.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the description adds no parameter-specific context, the input schema has 100% description coverage (the single 'url' parameter is documented as 'Community URL'). Per the scoring guidelines, high schema coverage establishes a baseline of 3 even when the description lacks parameter details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Community' is a tautology that restates the final segment of the tool name (scrapecreators_community). It fails to specify what action the tool performs (scrape? fetch? analyze?) or what 'community' refers to in this context (e.g., YouTube Community tab, Facebook Group, etc.).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus sibling alternatives like scrapecreators_community_post_details or scrapecreators_community_tweets. There are no prerequisites, exclusions, or workflow context mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_community_post_details (D)

Community Post Details

Parameters (JSON Schema):
  url (required): The URL of the YouTube community post to get
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, and the description discloses no behavioral traits. It does not indicate what data structure is returned, whether the operation is read-only, rate limits, or if the tool requires authentication.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (3 words), the description is under-specified rather than efficiently concise. It front-loads no actionable information, fails to earn its place with useful context, and reads as a placeholder rather than a distilled summary.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool without an output schema, the description is still insufficient. It fails to identify the target platform (YouTube), specify what constitutes 'details' (content, engagement metrics, media), or explain the expected return format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (the single 'url' parameter is fully documented as 'The URL of the YouTube community post to get'), the description is not required to compensate. However, the description adds no semantic value beyond what the schema already provides, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Community Post Details' is a tautology that restates the tool name without adding specificity. It omits the action verb (get/retrieve) and fails to distinguish whether this targets YouTube, Reddit, or other platforms despite the schema revealing it is for YouTube.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus siblings like 'scrapecreators_community' (likely a list operation) or 'scrapecreators_post'. No mention of prerequisites such as needing a valid YouTube community post URL versus a standard video URL.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_community_tweets (D)

Community Tweets

Parameters (JSON Schema):
  url (required): Community URL
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description carries zero behavioral context. It does not disclose whether this is read-only, what tweet data is returned (recent vs popular), pagination behavior, rate limits, or what format the 'Community URL' should take. For a data retrieval tool with no annotation coverage, this is a critical gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The two-word description suffers from under-specification rather than efficient conciseness: there is no information to front-load, and the fragment provides no actionable guidance to the agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having only one parameter and no output schema, the description is inadequate. It fails to explain what constitutes a 'Community' in this context, what data is returned, or how this differs from the 60+ sibling scraping tools. Even for a simple tool, the description lacks minimum viable context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage ('Community URL'), the baseline score applies. The description adds no parameter-specific semantics, but the schema sufficiently documents the single required parameter. No additional context about URL format or validation rules is provided in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Community Tweets' essentially restates the tool name (scrapecreators_community_tweets) without adding specificity. While it identifies the resource type (tweets) and context (community), it fails to distinguish from siblings like 'scrapecreators_community', 'scrapecreators_user_tweets', or clarify what 'Community' means (Twitter/X Communities feature vs general community content).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives. Given the extensive list of sibling tools (including scrapecreators_twitter, scrapecreators_user_tweets, scrapecreators_community, scrapecreators_tweet_details), the absence of differentiating criteria forces the agent to guess which tool retrieves community-specific tweets versus user timelines or general search.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_company_ads (D)

Company Ads

Parameters (JSON Schema):
  trim (optional): Set to true for a trimmed response
  cursor (optional): Cursor for pagination
  pageId (optional): Facebook page ID
  status (optional): Status filter
  country (optional): Country filter
  end_date (optional): End date (YYYY-MM-DD)
  language (optional): Language filter
  media_type (optional): Media type filter
  start_date (optional): Start date (YYYY-MM-DD)
  companyName (optional): Company name
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. The description fails to indicate whether this is a read-only operation, what data source is queried, rate limits, pagination behavior (despite the cursor parameter), or the response structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (two words), the description is inappropriately sized for a 10-parameter filtering tool. It represents under-specification rather than efficient conciseness, failing to front-load any actionable information about the tool's capabilities.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 10 filter parameters, no output schema, and no annotations, the description is completely inadequate. It lacks explanation of return values, pagination mechanics, or the relationship between companyName, pageId, and other filters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage with clear descriptions for all 10 parameters (e.g., 'End date (YYYY-MM-DD)', 'Facebook page ID'). The description adds no semantic value beyond the schema, but the baseline score is 3 given the comprehensive schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Company Ads' is a tautology that merely restates the tool name (scrapecreators_company_ads) without specifying the action (retrieve, list, search) or the resource scope. It fails to distinguish this tool from siblings like scrapecreators_company_ads_get or scrapecreators_search_ads.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives such as scrapecreators_company_ads_get, scrapecreators_advertiser_search, or scrapecreators_facebook_ad_library. No prerequisites or conditions for use are mentioned despite having 10 optional filter parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_company_ads_get (D)

Company Ads

Parameters (JSON Schema):
  topic (optional): The topic to search for. If you search for 'political', you will also need to pass a 'region', like 'US' or 'AU'
  cursor (optional): Cursor to paginate through results
  domain (optional): The domain of the company
  region (optional): The region to search for. Defaults to anywhere
  end_date (optional): End date to search for. Format: YYYY-MM-DD
  start_date (optional): Start date to search for. Format: YYYY-MM-DD
  advertiser_id (optional): The advertiser id of the company
  get_ad_details (optional): Set to true to get the ad details. Will cost 25 credits.
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to mention credit consumption (noted in schema parameter 'get_ad_details'), pagination behavior, rate limits, or what data is returned. The description adds zero behavioral context beyond the tool name.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief at only two words, this is under-specification rather than effective conciseness. The fragment 'Company Ads' does not earn its place because it communicates no actionable information about the tool's function or return values.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 8 parameters, no output schema, and no annotations, the description is completely inadequate. It omits expected return format, credit costs, the relationship between 'domain' and 'advertiser_id' parameters, and pagination behavior, leaving critical gaps in the agent's understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage across all 8 parameters, the schema itself documents the parameters adequately. The description adds no additional parameter semantics, but the baseline score of 3 is appropriate since the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Company Ads' is a noun phrase that restates the tool name without a clear action verb. It does not specify whether the tool retrieves, searches, or analyzes ads, nor does it differentiate from the sibling tool 'scrapecreators_company_ads' (without _get suffix).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'scrapecreators_company_ads', 'scrapecreators_ad_details', or 'scrapecreators_search_ads'. There is no mention of prerequisites, credit costs (referenced in the schema), or filtering strategies.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
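Because the description surfaces neither the credit cost nor how the identifying parameters relate, an agent has to guard those details itself. A minimal sketch, assuming (since the description does not say) that either 'domain' or 'advertiser_id' must identify the company and that 'get_ad_details' should be an explicit opt-in given its 25-credit cost from the schema:

```python
# Illustrative argument builder for scrapecreators_company_ads_get.
# The either/or requirement on domain vs advertiser_id is an assumption;
# the 25-credit cost of get_ad_details comes from the parameter schema.

def build_company_ads_args(domain=None, advertiser_id=None,
                           cursor=None, include_details=False):
    if not (domain or advertiser_id):
        # Assumed requirement: the description never says which filter is needed.
        raise ValueError("provide either 'domain' or 'advertiser_id'")
    args = {}
    if domain:
        args["domain"] = domain
    if advertiser_id:
        args["advertiser_id"] = advertiser_id
    if cursor:
        args["cursor"] = cursor
    if include_details:
        # Opt in explicitly: the schema notes this costs 25 credits per call.
        args["get_ad_details"] = True
    return args

print(build_company_ads_args(domain="example.com"))
# prints {'domain': 'example.com'}
```

Stating these constraints in the description itself would make this defensive wrapper unnecessary.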

scrapecreators_company_page (D)

Company Page

Parameters (JSON Schema):
  url (required): The URL of the LinkedIn company page to get
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but reveals nothing about whether this performs a live scrape or cached lookup, rate limits, authentication requirements, or what data structure is returned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief at only two words, the description is informationally empty and fails to front-load critical context (the LinkedIn platform) that appears only in the parameter schema. It wastes the agent's attention without delivering value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simple single-parameter input and lack of output schema, minimal description is needed, but the omission of 'LinkedIn'—which is crucial for distinguishing this from other platform company pages—leaves a significant gap in context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for its single 'url' parameter, which explicitly identifies it as a 'LinkedIn company page' URL. The description adds no additional parameter context, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Company Page' is essentially a tautology that restates the tool name without specifying the action (scrape/retrieve) or the platform (LinkedIn). It fails to distinguish this tool from siblings like 'scrapecreators_company_ads' or 'scrapecreators_linkedin'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as 'scrapecreators_search_for_companies' or 'scrapecreators_linkedin'. There are no prerequisites, exclusions, or conditional usage instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_embed_html (D)

Embed HTML

Parameters (JSON Schema):
  handle (required): Instagram handle
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden of behavioral disclosure. It fails to mention whether this performs a network request, what format the returned HTML takes, whether it includes scripts/iframes, caching behavior, or any side effects. The description adds zero behavioral context beyond the tool name.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (two words), this is severe under-specification rather than effective conciseness. The description leads with ambiguity and lacks the detail an agent needs to understand the tool's utility; it fails to earn even its two words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no annotations, no output schema, and belongs to a large family of similar scraping tools (80+ siblings), the description provides completely inadequate context. It does not explain what the HTML embedding is for, what content it retrieves, or how to interpret results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (the 'handle' parameter is documented as 'Instagram handle'), so the baseline is 3. The description 'Embed HTML' adds no additional context about the parameter syntax, validation rules, or whether the handle should include or exclude the '@' symbol.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Embed HTML' is a tautology that restates the action implied by the tool name without specifying what resource is being embedded, what platform it targets (despite the Instagram handle parameter), or what the output represents. It fails to distinguish this from sibling tools like scrapecreators_instagram or scrapecreators_profile_posts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 1/5

No guidance is provided on when to use this tool versus alternatives (e.g., scrapecreators_instagram, scrapecreators_basic_profile), nor any mention of prerequisites, rate limits, or authentication requirements despite being a data retrieval tool.

scrapecreators_facebook (Grade: D)

Description: "Facebook"

Parameters (JSON Schema):
- url (required): Facebook profile URL
- get_business_hours (optional): Set to true to get business hours

Behavior 1/5

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not indicate what data is returned (profile info, posts, business hours), whether the operation is read-only, or any rate limits or privacy considerations. The description offers zero behavioral context.

Conciseness 2/5

While extremely brief (one word), this represents under-specification rather than efficient conciseness. The word 'Facebook' is redundant given the tool name and fails to earn its place by providing actionable information about functionality.

Completeness 1/5

For a scraping tool with 2 parameters and no output schema or annotations, the description is woefully incomplete. It fails to explain what profile data is extracted, the format of results, error handling for invalid URLs, or how the optional business hours parameter affects output.

Parameters 3/5

Schema description coverage is 100%, with 'Facebook profile URL' and 'Set to true to get business hours' adequately documenting the parameters. The description 'Facebook' adds no semantic value beyond the schema, but a baseline of 3 is appropriate given the schema's completeness.

Purpose 2/5

The description 'Facebook' only identifies the target platform without stating what the tool actually does (scraping profile data). While not completely missing, it functions as a near-tautology given the tool name 'scrapecreators_facebook' and fails to specify the action or resource scope.

Usage Guidelines 1/5

No guidance is provided on when to use this tool versus the many sibling scraping tools (scrapecreators_instagram, scrapecreators_linkedin, etc.) or other Facebook-related tools like scrapecreators_facebook_group_posts. No prerequisites, limitations, or alternatives are mentioned.
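As a concrete illustration of the fixes these reviews call for, here is a hypothetical rewrite of the scrapecreators_facebook tool definition. The improved description text and its behavioral claims are invented for illustration; only the tool name and parameter names come from the listing above.

```python
# Hypothetical MCP-style tool definition sketch. The description content
# (read-only claim, sibling-tool guidance) is an assumption, not taken
# from the reviewed server.
IMPROVED_TOOL = {
    "name": "scrapecreators_facebook",
    "description": (
        "Scrape public profile data from a Facebook profile URL. "
        "Read-only; no side effects. Use scrapecreators_facebook_group_posts "
        "for group content instead. Set get_business_hours=true to also "
        "return business hours for pages."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "url": {"type": "string", "description": "Facebook profile URL"},
            "get_business_hours": {
                "type": "boolean",
                "description": "Set to true to get business hours",
            },
        },
        "required": ["url"],
    },
}
```

Note how the rewrite leads with a verb and resource, discloses read-only behavior, and points to a sibling tool, addressing the Purpose, Behavior, and Usage Guidelines gaps scored above.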

scrapecreators_facebook_ad_library (Grade: D)

Description: "Facebook Ad Library"

Parameters (JSON Schema):
- id (optional): Ad ID
- url (optional): Ad URL
- trim (optional): Set to true for a trimmed response
- get_transcript (optional): Set to true to get transcript

Behavior 1/5

No annotations are provided, and the description discloses no behavioral traits. It fails to state what data is returned (ad creative, metadata, spend data), whether the operation is read-only, rate limits, or authentication requirements. The full burden of behavioral disclosure is ignored.

Conciseness 2/5

While the three-word description is not verbose, it represents under-specification rather than efficient conciseness. No information is front-loaded because no actionable information is present. The single 'sentence' provides no utility and fails to earn its place.

Completeness 1/5

For a tool with 4 parameters targeting a specific external API (Facebook Ad Library), the description is inadequate. Without an output schema, the description should explain return values, yet it provides no context about what data structure or fields to expect from the scraped ad library content.

Parameters 3/5

With 100% schema description coverage across all 4 parameters, the schema itself adequately documents inputs. The tool description adds no additional semantic context, syntax examples, or explanations of what constitutes a 'trimmed response' beyond the parameter descriptions. The baseline score applies since the schema carries the full load.

Purpose 2/5

The description 'Facebook Ad Library' is a tautology that restates the tool name without specifying the action (scrape, fetch, or search). It identifies the resource but lacks a verb to clarify what operation is performed. It fails to distinguish itself from siblings like 'scrapecreators_company_ads' or 'scrapecreators_advertiser_search'.

Usage Guidelines 1/5

No guidance is provided on when to use this tool versus alternatives. With numerous sibling scraping tools available (including 'scrapecreators_company_ads' and 'scrapecreators_get_ad'), the description offers no criteria for selecting this specific endpoint or prerequisites like requiring an Ad ID versus a company name.

scrapecreators_facebook_group_posts (Grade: D)

Description: "Facebook Group Posts"

Parameters (JSON Schema):
- url (optional): The URL of the group
- cursor (optional): The cursor to paginate to the next page
- sort_by (optional): How to sort the posts
- group_id (optional): The ID of the group

Behavior 1/5

No annotations are provided, so the description carries the full burden of disclosure, yet it mentions nothing about authentication requirements, rate limits, privacy implications of scraping, or whether the operation is read-only. It fails to describe the return format or what happens when accessing private groups.

Conciseness 2/5

While the description is not verbose, it is inappropriately sized at only three words for a complex tool with four parameters and pagination functionality. The extreme brevity renders it insufficient rather than efficiently concise, failing to front-load any actionable guidance.

Completeness 1/5

Given the tool's complexity (Facebook group scraping with pagination and four input parameters), the description is fundamentally inadequate. With no output schema provided, the description should explain the return format, yet it offers no information about what post data is retrieved or how to handle errors.

Parameters 3/5

The input schema has 100% description coverage for its four parameters, establishing a baseline where the description need not repeat these definitions. However, the description adds no additional semantic context, such as whether 'url' and 'group_id' are mutually exclusive options or required alternatives.

Purpose 2/5

The description 'Facebook Group Posts' is a tautology that restates the tool name without the 'scrapecreators_' prefix, failing to specify the action performed (e.g., scrape, retrieve, list). It does not differentiate this tool from siblings like 'scrapecreators_facebook' or 'scrapecreators_posts'.

Usage Guidelines 2/5

The description provides no guidance on when to use this tool versus alternatives such as 'scrapecreators_facebook'. It does not clarify the relationship between the 'url' and 'group_id' parameters, nor does it explain pagination workflows using the 'cursor' parameter despite having zero required parameters.
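The 'url' versus 'group_id' ambiguity noted above is exactly what a description sentence or a schema-level constraint could resolve. A minimal sketch of how a server might enforce the two as mutually exclusive alternatives (the exactly-one rule and the error message are assumptions for illustration; the reviewed tool documents no such constraint):

```python
# Sketch of server-side validation for mutually exclusive identifier
# parameters. The "exactly one of url/group_id" rule is an assumption.
def validate_group_args(url=None, group_id=None):
    """Require exactly one of `url` or `group_id` and normalize the input."""
    if (url is None) == (group_id is None):
        raise ValueError("Provide exactly one of 'url' or 'group_id'.")
    return {"url": url} if url is not None else {"group_id": group_id}
```

An equivalent declarative form would be a JSON Schema `oneOf` over the two properties, which an agent can read before calling instead of discovering the rule through failed attempts.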

scrapecreators_followers (Grade: D)

Description: "Followers"

Parameters (JSON Schema):
- trim (optional): Set to true to get a trimmed response
- handle (optional): TikTok handle
- user_id (optional): User id. Use this for faster response times.
- min_time (optional): Used to paginate. Get 'min_time' from previous response.

Behavior 1/5

No annotations are provided, so the description bears full responsibility for disclosing behavioral traits. It fails to indicate this is a read-only operation, omits rate limit warnings, does not explain the trimmed vs. full response difference, and gives no hints about pagination or return structure.

Conciseness 2/5

While brief at a single word, this is under-specification rather than effective conciseness. The solitary word 'Followers' carries no informational value; it wastes the opportunity to front-load critical context about the tool's function.

Completeness 1/5

For a 4-parameter tool with no output schema and no annotations, the description is grossly incomplete. It omits the TikTok platform context, pagination behavior, authentication requirements, and the mutual exclusivity logic between 'handle' and 'user_id' parameters.

Parameters 3/5

With 100% schema description coverage, the parameter definitions in the schema are complete (e.g., 'TikTok handle', 'Used to paginate'). The description adds no semantic value beyond the schema, but it meets the baseline since the schema adequately documents all 4 parameters.

Purpose 1/5

The description is simply 'Followers': a tautology that restates part of the tool name without specifying the action (retrieve? list? scrape?), the target platform (TikTok, inferred only from the schema), or scope. It fails to distinguish from the sibling tool 'scrapecreators_following' (followers vs. following).

Usage Guidelines 1/5

The description provides no guidance on when to use this tool versus alternatives like 'scrapecreators_following' or 'scrapecreators_basic_profile'. It does not clarify when to use 'handle' versus 'user_id' for lookup, nor explain the pagination mechanism implied by 'min_time'.

scrapecreators_following (Grade: D)

Description: "Following"

Parameters (JSON Schema):
- trim (optional): Set to true to get a trimmed response
- handle (required): TikTok handle
- min_time (optional): Used to paginate. Get 'min_time' from previous response.

Behavior 1/5

No annotations are present, so the description carries the full disclosure burden. It fails to mention read-only status, pagination behavior (despite the min_time parameter implying it), rate limits, or what data structure is returned.

Conciseness 2/5

A single-word description represents under-specification rather than effective conciseness. No information is front-loaded because no operational details are provided.

Completeness 1/5

Given the presence of numerous sibling tools with similar names (followers, profile, search), the lack of an output schema, and the scraping action implied by the name, the description provides insufficient context to distinguish this tool's function or output format.

Parameters 3/5

Schema description coverage is 100%, providing documentation for handle, trim, and min_time. The description adds no parameter context, but the baseline score of 3 applies since the schema adequately documents all three parameters, including the pagination semantics of min_time.

Purpose 2/5

The description 'Following' is a tautology that merely restates the tool name without the prefix. It fails to specify whether this retrieves who a user follows (vs. scrapecreators_followers which retrieves their followers), what platform (TikTok per the schema), or what the output contains.

Usage Guidelines 1/5

No guidance is provided on when to use this tool versus siblings like scrapecreators_followers, scrapecreators_basic_profile, or scrapecreators_search_users. No mention of prerequisites like needing a valid TikTok handle format.
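The 'min_time' pagination implied by the followers/following schemas ("Get 'min_time' from previous response") can be sketched as a cursor loop. The fetch function below is a stub standing in for the real API call, and the response shape ('users', 'min_time') is an assumption inferred from the parameter descriptions above, not documented by the server:

```python
# Cursor-style pagination sketch for the min_time parameter.
# fetch_following is a stub; field names and page size are assumptions.
def fetch_following(handle, min_time=None):
    """Stub: returns one page (2 items) of fake timestamped results."""
    data = {"alice": [9, 8, 7, 6, 5]}  # fake timestamps standing in for users
    items = [u for u in data[handle] if min_time is None or u < min_time]
    page = items[:2]
    next_min = page[-1] if len(page) == 2 else None  # None signals last page
    return {"users": page, "min_time": next_min}

def fetch_all_following(handle):
    """Follow min_time cursors until the response stops returning one."""
    users, min_time = [], None
    while True:
        resp = fetch_following(handle, min_time=min_time)
        users.extend(resp["users"])
        min_time = resp["min_time"]
        if min_time is None:
            break
    return users
```

This is the workflow an agent would have to guess at today; a description sentence like "pass the min_time from the previous response to fetch the next page; absent min_time means the last page" would make it explicit.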

scrapecreators_get_ad (Grade: D)

Description: "Get Ad"

Parameters (JSON Schema):
- id (required): Ad id

Behavior 1/5

No annotations are provided, and the description carries the full burden of behavioral disclosure. It fails to indicate whether this is a safe read operation, if there are rate limits, what format the data returns in, or what 'Ad' specifically refers to in this context. No auth requirements or side effects are mentioned.

Conciseness 2/5

While brief at only two words, this represents under-specification rather than effective conciseness. The description fails to earn its place by providing actionable information beyond the function name, leaving the agent without necessary context to select this tool correctly.

Completeness 2/5

Despite having only one parameter and no output schema (which lowers the complexity bar), the description is inadequate given the rich ecosystem of sibling tools. It fails to clarify the tool's specific role in the scrapecreators suite or what distinguishes it from the multiple other ad-retrieval functions available.

Parameters 3/5

The input schema has 100% description coverage (the 'id' parameter is described as 'Ad id'), establishing a baseline score of 3. The description adds no additional context about the ID format, whether it is a platform-specific identifier, or how to obtain it.

Purpose 2/5

The description 'Get Ad' is essentially a tautology of the tool name 'scrapecreators_get_ad', merely removing the prefix and adding a space. While it confirms the tool retrieves an ad, it fails to specify the platform (likely social media given the sibling tools), the data source, or distinguish from siblings like 'scrapecreators_ad_details' or 'scrapecreators_company_ads'.

Usage Guidelines 1/5

No guidance is provided for when to use this tool versus the numerous sibling ad-related functions (scrapecreators_ad_details, scrapecreators_company_ads, scrapecreators_search_ads, etc.). The agent cannot determine if this is for getting a single ad by ID versus searching or listing company ads.

scrapecreators_get_song_details (Grade: C)

Description: "Get Song Details"

Parameters (JSON Schema):
- clipId (required): This is a little confusing because this isn't songId like you'd think. It is the clipId. I guess because you can clip different portions of a song 🤷‍♂️

Behavior 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure but provides almost nothing. It does not clarify if this is a read-only operation (though 'Get' implies it), what happens if the clipId is invalid, rate limits, or the response format. The schema's parameter description adds some transparency about the clipId/songId distinction, but the main description field itself adds no behavioral context.

Conciseness 2/5

While the three-word description is technically concise, it suffers from under-specification rather than efficient information delivery. Its single sentence provides no actionable context, so its brevity is under-informative rather than appropriately front-loaded.

Completeness 2/5

Given the lack of an output schema and annotations, the description should explain what song details are returned (metadata, audio URL, usage statistics?), specify the platform context, and clarify the clipId concept. It provides none of these, leaving significant gaps in contextual completeness.

Parameters 3/5

Schema description coverage is 100%, with the clipId parameter documented in the schema explaining the confusing naming convention. The description 'Get Song Details' adds no additional parameter semantics, but with high schema coverage, the baseline score of 3 is appropriate as the schema carries the load.

Purpose 2/5

The description 'Get Song Details' is essentially a tautology that restates the tool name (scrapecreators_get_song_details) in title case. It fails to specify what song details are returned, what platform this targets (TikTok, Instagram, etc.), or how it differs from siblings like scrapecreators_get_popular_songs or scrapecreators_tiktoks_using_song.

Usage Guidelines 2/5

No guidance is provided on when to use this tool versus alternatives such as scrapecreators_reels_using_song or scrapecreators_get_popular_songs. There is no mention of prerequisites (e.g., how to obtain a clipId) or conditions where this tool should not be used.

scrapecreators_google (Grade: D)

Description: "Google"

Parameters (JSON Schema):
- query (required): Search query
- region (optional): 2 letter country code, ie US, UK, CA, etc This will show results from that country

Behavior 1/5

No annotations are provided, and the description carries the full burden of disclosing behavioral traits (read-only vs destructive, authentication requirements, return format, rate limits). It provides zero behavioral context beyond the platform name.

Conciseness 2/5

While brief, 'Google' is inappropriately sized; this is under-specification rather than efficient conciseness. It fails to front-load any actionable information, consisting of a single noun that assumes the agent already understands the tool's function.

Completeness 1/5

Given the complexity of scraping Google data (potential for various result types, SERP features, knowledge panels) and the absence of an output schema or annotations, the description is completely inadequate. It fails to explain what data structure is returned or what specific Google content is retrieved.

Parameters 3/5

The input schema has 100% description coverage (query and region are well-documented in the schema itself). The description adds no parameter details, but with complete schema coverage, the baseline score of 3 is appropriate.

Purpose 2/5

The description 'Google' is essentially a tautology that restates the platform name from the tool name (scrapecreators_google) without explaining the action (scrape/search), the specific resource being accessed, or how it differs from sibling search tools like tavily_search or web_search_exa.

Usage Guidelines 1/5

No guidance is provided on when to use this tool versus alternatives (tavily_search, web_search_exa, scrapecreators_search), nor any prerequisites, rate limits, or best practices for the region parameter.

scrapecreators_highlights_details (Grade: D)

Description: "Highlights Details"

Parameters (JSON Schema):
- id (optional): The ID of the highlight to get details for

Behavior 1/5

No annotations are provided, and the description discloses no behavioral traits. It does not indicate whether this is a read-only operation, what data source is queried, rate limits, authentication requirements, or what format the returned highlight details take.

Conciseness 2/5

While brief at two words, this represents under-specification rather than efficient conciseness. The description fails to provide the essential context expected for a tool in a complex social-media scraping ecosystem.

Completeness 1/5

Given the tool exists within a large suite of social media scraping functions (100+ siblings), the description is inadequate. It lacks platform specification, output structure details, and relationship context to similar tools like 'scrapecreators_story_highlights', leaving critical gaps for agent decision-making.

Parameters 3/5

With 100% schema description coverage (the 'id' parameter is documented as 'The ID of the highlight to get details for'), the schema carries the full semantic load. The description adds no additional parameter context (such as ID format or source), warranting the baseline score for high-coverage schemas.

Purpose 2/5

The description 'Highlights Details' is a tautology that restates the tool name ('scrapecreators_highlights_details') without adding specificity. It fails to specify what platform's highlights (Instagram, TikTok, etc.) or what 'details' encompasses, and does not distinguish from sibling tools like 'scrapecreators_story_highlights'.

Usage Guidelines 1/5

The description provides no guidance on when to use this tool versus alternatives. Given the presence of siblings like 'scrapecreators_story_highlights' and 'scrapecreators_profile_posts', the absence of differentiation criteria leaves agents unable to select the correct tool for retrieving highlight information.

scrapecreators_instagramDInspect

Instagram

ParametersJSON Schema
NameRequiredDescriptionDefault
trimNoSet to true for a trimmed response
handleYesInstagram handle
Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fails to disclose any behavioral traits such as whether the operation is read-only or destructive, rate limiting concerns, or the structure of returned data. The term 'scrape' in the tool name hints at data retrieval, but the description itself provides no confirmation of safety profiles or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single word that is severely under-specified rather than appropriately concise; it fails to meet the minimum viable length for a tool with multiple parameters and no output schema. While it contains no wasted words, the extreme brevity renders it functionally useless for decision-making.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of two parameters, lack of output schema, absence of annotations, and the tool's position among dozens of similar scraping utilities, the single-word description is completely inadequate. The agent lacks necessary context regarding what specific Instagram data is retrieved (profile, posts, metadata) and how this tool differs from specialized siblings.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the input schema has 100% description coverage for its two parameters ('Instagram handle' and trim option), the description adds no semantic value beyond what the schema already provides. The baseline score applies since the schema adequately documents parameters without requiring additional descriptive context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description consists solely of the word 'Instagram,' which fails to specify what action the tool performs or what resource it retrieves. It provides no verb indicating functionality (e.g., retrieve, scrape, search) and does not distinguish this tool from the numerous sibling scraper tools like `scrapecreators_basic_profile` or platform-specific variants.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus the many available alternatives, including other Instagram-specific tools like `scrapecreators_profile_posts` or cross-platform options. There are no stated prerequisites, conditions, or exclusion criteria to inform agent selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
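By contrast, a description that would score well on these dimensions takes only a few sentences. The following Python sketch is a hypothetical rewrite of the tool definition; every added field value (the data returned, the handle format, the sibling-tool recommendation) is an illustrative assumption, not documented behavior:

```python
# Hypothetical rewrite of the scrapecreators_instagram definition. The claims
# in the description (what data is returned, read-only behavior, the sibling
# recommendation) are assumptions made for illustration.
improved_instagram_tool = {
    "name": "scrapecreators_instagram",
    "description": (
        "Fetch public Instagram profile data (bio, follower counts, recent posts) "
        "for a given handle. Read-only; performs one external scrape request. "
        "Prefer scrapecreators_profile_posts when only the post list is needed."
    ),
    "inputSchema": {
        "type": "object",
        "required": ["handle"],
        "properties": {
            "handle": {
                "type": "string",
                "description": "Instagram handle, e.g. 'nasa' (no leading '@')",
            },
            "trim": {
                "type": "boolean",
                "description": "Set to true for a trimmed response",
            },
        },
    },
}
print(improved_instagram_tool["description"])
```

A description of this shape gives an agent a verb, a resource, a behavioral disclosure, and a differentiation rule in under three sentences.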

scrapecreators_kick (Grade: D)

Kick

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes | Kick clip URL |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosing behavioral traits but fails completely. It does not indicate what data is returned (metadata, transcripts, media files), whether the operation is read-only or destructive, or if there are rate limiting concerns.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While extremely brief (one word), this represents under-specification rather than effective conciseness. The single word fails to earn its place by providing actionable information about tool functionality, resulting in an inappropriately sized description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description is completely inadequate even for a single-parameter tool. It does not explain what information is extracted from the Kick clip URL, the structure of returned data, or any error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage for its single 'url' parameter ('Kick clip URL'), establishing a baseline score of 3. The description 'Kick' adds no additional semantic context, examples, or formatting guidance beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Kick' merely identifies the target platform, functioning as a tautology that restates the platform suffix from the tool name 'scrapecreators_kick'. It fails to specify what action the tool performs (e.g., scraping clip metadata, downloading content) or distinguish its purpose from sibling tools like 'scrapecreators_twitch' or 'scrapecreators_youtube'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided regarding when to use this tool versus alternatives, what prerequisites are needed (e.g., valid Kick clip URL format), or specific use cases. The description offers zero contextual clues for proper tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_komi (Grade: D)

Komi

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes | URL to Komi page |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but completely fails to indicate whether this is a read-only operation, what data is scraped, rate limits, or authentication requirements. The single word reveals nothing about the tool's behavior beyond what the name implies.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While the description consists of only a single word, this represents under-specification rather than effective conciseness. No information is front-loaded because no actionable information is present; the description wastes the opportunity to orient the user.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a web scraping tool with no output schema and no annotations, the description is completely inadequate. It fails to explain what data is returned, what 'Komi' refers to (a specific creator platform), or how the scraped data is structured, leaving critical gaps in the agent's understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (the 'url' parameter is documented as 'URL to Komi page'), establishing a baseline score of 3. The description 'Komi' adds no additional semantic value regarding parameter usage, formats, or examples beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Komi' is a tautology that merely restates the platform name from the tool identifier without explaining what the tool actually does (presumably scraping data from Komi creator pages). It fails to provide a specific verb or resource description to distinguish its function from the 80+ sibling scraping tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the numerous alternative platform-specific scrapers (scrapecreators_instagram, scrapecreators_tiktok, etc.). There are no prerequisites, conditions, or exclusion criteria mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_linkbio (Grade: D)

Linkbio

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes | URL to Linkbio (lnk.bio) page |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure, yet it provides zero information about side effects, return data, rate limits, or authentication requirements. The agent cannot determine if this is a read operation or what data structure it returns.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (single word), this represents under-specification rather than efficient conciseness. The description contains no actionable information for an agent, failing the standard that 'every sentence should earn its place.'

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no output schema and no annotations, the description provides no context about what data is retrieved from Linkbio pages or how the results are structured. It is completely inadequate for an agent to understand the tool's utility.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (the 'url' parameter is documented as 'URL to Linkbio (lnk.bio) page'). Since the schema fully documents the single parameter, the description does not need to compensate, warranting the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Linkbio' is a tautology that restates the tool name (scrapecreators_linkbio) without explaining what the tool actually does. It lacks a specific verb (e.g., 'scrape,' 'extract,' 'retrieve') and fails to distinguish this tool from similar siblings like scrapecreators_linktree or scrapecreators_linkme.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. Given the presence of similar tools (scrapecreators_linktree, scrapecreators_linkme), the description should specify that this targets lnk.bio specifically, but it provides no such differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_linkedin (Grade: D)

LinkedIn

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes | LinkedIn profile URL |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure, yet it fails to mention that this performs web scraping, what data fields are returned, rate limits, authentication requirements, or whether the operation is read-only vs. potentially restricted by LinkedIn's robots.txt.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While the single word is not verbose, it represents under-specification rather than earned brevity. As with the calibration example 'Process', minimal length without information density warrants a low score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having only one parameter with complete schema coverage, the tool performs an external scraping operation where behavior matters significantly. The absence of any mention of output format, data extracted (e.g., name, title, company), or error conditions (private profiles, invalid URLs) leaves critical gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (the 'url' parameter is documented as 'LinkedIn profile URL'), establishing a baseline score of 3. The description adds no additional parameter semantics, but does not detract from the schema's clarity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'LinkedIn' is tautological, merely restating the platform suffix from the tool name 'scrapecreators_linkedin'. It fails to specify the action (scrape/fetch), the target resource (profiles, posts, company pages), or the scope, leaving the agent to guess the tool's actual function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus siblings like 'scrapecreators_basic_profile' or 'scrapecreators_company_page', or prerequisites such as requiring a public profile URL. The description is completely silent on usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_linkedin_ad_library (Grade: C)

LinkedIn Ad Library

Parameters (JSON Schema)
Name | Required | Description | Default
company | No | Company name |
endDate | No | End date |
keyword | No | Keyword to search |
countries | No | Countries filter |
startDate | No | Start date |
paginationToken | No | Pagination token |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to indicate whether this is a read-only operation, if it requires authentication, rate limits, or what data is returned. The description adds no behavioral context beyond the tool name.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three words long, representing under-specification rather than effective conciseness. There are no sentences to evaluate for structure or information density; the brevity reflects incompleteness rather than efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 6 parameters, no output schema, no annotations, and a complex ecosystem of sibling tools, the description is woefully inadequate. It provides no information about return values, pagination behavior, date formats, or how to interpret results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage across all 6 parameters (company, keyword, date ranges, etc.), the schema adequately documents inputs. The description adds no additional semantic context, but baseline 3 is appropriate since the schema carries the full load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'LinkedIn Ad Library' is a tautology that restates the tool name without adding a verb or action. It fails to specify whether the tool searches, retrieves, or lists ads, and does not distinguish from siblings like 'scrapecreators_facebook_ad_library' or 'scrapecreators_company_ads'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives such as 'scrapecreators_advertiser_search' or 'scrapecreators_company_ads'. There is no mention of prerequisites, required parameter combinations (despite 0 required fields in schema), or usage patterns.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
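The missing guidance could be surfaced as a pre-flight validator that encodes the unstated expectations. Everything in this sketch is an assumption: ISO 8601 date strings, the "at least one of company or keyword" rule, and the ordering check are guesses at what the undocumented API wants, not its actual contract:

```python
from datetime import date

def validate_ad_library_query(company=None, keyword=None, startDate=None,
                              endDate=None, countries=None, paginationToken=None):
    """Illustrative pre-flight checks for scrapecreators_linkedin_ad_library.

    ASSUMPTIONS (not documented by the tool): dates are ISO 8601 strings,
    and at least one of company/keyword must be supplied.
    """
    if not (company or keyword):
        raise ValueError("provide at least one of 'company' or 'keyword'")
    for label, d in (("startDate", startDate), ("endDate", endDate)):
        if d is not None:
            date.fromisoformat(d)  # raises ValueError on a malformed date
    if startDate and endDate and startDate > endDate:
        raise ValueError("startDate must not be after endDate")
    return True

print(validate_ad_library_query(keyword="cloud security", startDate="2024-01-01"))
```

A description that stated even this much (which parameters combine, and in what format) would lift the Parameters and Usage Guidelines scores substantially.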

scrapecreators_linkme (Grade: D)

Linkme

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes | Linkme profile URL |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosure but provides zero information about side effects, rate limits, authentication requirements, or return data structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While not verbose, the single-word description represents under-specification rather than efficient conciseness. No sentences exist to earn their place because content is entirely absent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having only one well-documented parameter, the tool lacks any explanation of what data it returns or its behavioral characteristics. Given the lack of output schema and annotations, the description fails to provide necessary context for invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with the 'url' parameter described as 'Linkme profile URL'. The description 'Linkme' adds no additional semantic meaning, but the high schema coverage establishes a baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Linkme' is a tautology that restates the tool name fragment without explaining what the tool actually does (e.g., scrape, validate, or create). It fails to specify the verb or resource action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus siblings like 'scrapecreators_linkbio' or 'scrapecreators_linktree', or prerequisites for the URL parameter.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_linktree (Grade: D)

Linktree

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes | URL to Linktree page |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. It fails to indicate what data is returned (links, profile metadata, analytics), whether the operation is read-only, or any rate limiting considerations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While the single word is brief, this represents under-specification rather than effective conciseness. The description fails to front-load critical information about the tool's function, offering only a label where a sentence is needed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and the tool's likely purpose (scraping Linktree profile data), the description is incomplete. It does not indicate what information is extracted from the provided URL, leaving the agent uncertain about the tool's utility.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (the 'url' parameter is documented as 'URL to Linktree page'), the schema itself adequately defines the input. The description adds no additional parameter context, meeting the baseline score for well-documented schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Linktree' is tautological, merely extracting the platform name from the tool name 'scrapecreators_linktree' without specifying what action is performed (e.g., scrape, fetch, parse). It fails to distinguish from sibling tools like 'scrapecreators_linkbio' or 'scrapecreators_instagram' beyond the platform name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description provides no guidance on when to use this tool versus alternatives (e.g., 'scrapecreators_linkbio' which also handles bio link pages), nor any prerequisites or conditions for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_pillar (Grade: D)

Pillar

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes | URL to Pillar page |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden of behavioral disclosure. It fails to indicate whether this is a read-only operation, what data is returned from the Pillar page, or any authentication/rate limiting requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While only one word, this represents under-specification rather than effective conciseness. The single word fails to earn its place by providing actionable context to the agent, similar to the 'Process' example in the calibration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having only one parameter, the description is inadequate. With no output schema, no annotations, and no explanation of what 'Pillar' content is or what the tool returns, the agent lacks sufficient context to use this tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the single 'url' parameter is fully documented as 'URL to Pillar page' within the schema itself. The description adds no semantic meaning beyond the schema, but the baseline score of 3 applies given the complete schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Pillar' merely restates the resource name from the tool name (scrapecreators_pillar), constituting a tautology. It fails to specify what action the tool performs (e.g., scrape, analyze, fetch) or what a 'Pillar' represents in this context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the many sibling scraping tools (e.g., scrapecreators_profile_posts, scrapecreators_channel_videos). No prerequisites, exclusions, or alternative suggestions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_pin (Grade: D)

Pin

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes | Pinterest pin URL |
trim | No | Set to true for a trimmed down version of the response |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure, yet it states nothing about side effects, rate limits, authentication requirements, or what data structure is returned. The agent cannot determine if this is a safe read operation or what the 'trim' parameter affects beyond the schema hint.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single-word description 'Pin' reflects under-specification rather than efficient conciseness. While brief, it fails the 'every sentence should earn its place' standard because it provides no actionable information beyond the tool name itself.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having 100% schema coverage for inputs, the tool lacks an output schema and any description of return values or error conditions. For a 2-parameter scraping tool with no annotations, the description should explain what data fields are returned or link to expected output structure, which it does not.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage ('Pinterest pin URL' for url, and explicit boolean behavior for trim). Since the schema fully documents both parameters, the baseline score is 3. The description adds no additional parameter context, but the schema compensates adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Pin' is a tautology that merely restates the tool name (scrapecreators_pin) without explaining what the tool actually does (e.g., scrape Pinterest pin data). While it hints at the resource type, it fails to specify the action or distinguish this from sibling tools like scrapecreators_board.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides absolutely no guidance on when to use this tool versus alternatives. Given the extensive list of sibling tools (scrapecreators_board, scrapecreators_user_boards, scrapecreators_search, etc.), there is no indication of when this specific Pinterest pin scraper should be preferred.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_playlist (grade: D)

Playlist

Parameters (JSON Schema)
Name | Required | Description | Default
playlist_id | Yes | The ID of the YouTube playlist. In the YouTube URL it will be the 'list' parameter. |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not indicate whether this is read-only, what data is returned (playlist metadata, video list, statistics), or any rate limiting.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of a single word that conveys no actionable information beyond the tool name. This represents under-specification rather than efficient conciseness, as every sentence (or word) fails to earn its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no output schema and no annotations, the description provides no indication of the return data structure, scope of the operation, or what specific playlist information is retrieved (videos, titles, thumbnails, view counts).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, clearly documenting that 'playlist_id' refers to the YouTube 'list' URL parameter. Since the schema fully documents the single parameter, the description meets the baseline requirement despite adding no additional semantic context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description is simply 'Playlist', which restates the noun from the tool name without specifying the action performed (e.g., retrieve, fetch, list videos). It fails to explain what the tool actually does with a playlist resource.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus siblings like 'scrapecreators_channel_videos', 'scrapecreators_youtube', or 'scrapecreators_search'. There are no prerequisites, filters, or contextual triggers mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_post (grade: D)

Post

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes | The URL of the LinkedIn post to get |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to indicate whether this is a read-only operation, what data structure is returned, or any side effects. The schema parameter description uses the word 'get', but the main description does not confirm this behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While extremely brief (one word), this constitutes under-specification rather than efficient conciseness. The single word fails the test that 'every sentence should earn its place' by conveying essentially zero actionable information beyond the tool name itself.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Completely inadequate for a tool with no output schema and no annotations. With over 100 sibling tools including multiple variants for posts (scrapecreators_posts, scrapecreators_post_get, scrapecreators_linkedin), the description provides no context to clarify scope, prerequisites, or return value.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the single 'url' parameter is documented as 'The URL of the LinkedIn post to get'). With high schema coverage, the baseline score is 3. The description adds no additional parameter semantics, but none are needed given the schema's completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description is the single word 'Post', which is a tautology restating the tool name fragment. It lacks a specific verb (e.g., 'retrieve', 'scrape'), fails to mention the LinkedIn context (only found in the schema), and does not distinguish from siblings like 'scrapecreators_posts' or 'scrapecreators_post_get'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives. Given the existence of siblings like 'scrapecreators_linkedin', 'scrapecreators_posts', and 'scrapecreators_post_get', the description offers no criteria for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_post_comments (grade: D)

Post Comments

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes | Reddit post URL |
trim | No | Set to true for a trimmed down version of the response |
cursor | No | Cursor to get more comments, or replies. |
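The cursor parameter implies the usual drain-the-pages loop. A hedged sketch: `call_tool` stands in for however a client invokes scrapecreators_post_comments, and the 'comments' and 'cursor' response keys are assumptions, since no output schema is published:

```python
def fetch_all_comments(call_tool, post_url: str, max_pages: int = 10) -> list:
    """Drain a cursor-paginated comments endpoint.

    `call_tool` is a hypothetical callable that invokes the tool with
    an argument dict and returns a parsed response assumed to contain
    'comments' and, when more pages exist, a 'cursor' token.
    """
    comments, cursor = [], None
    for _ in range(max_pages):
        args = {"url": post_url, "trim": True}
        if cursor:
            args["cursor"] = cursor
        page = call_tool(args)
        comments.extend(page.get("comments", []))
        cursor = page.get("cursor")
        if not cursor:  # no further pages advertised
            break
    return comments
```

The `max_pages` cap matters on a pay-per-call server: an unbounded loop against a popular post could rack up charges.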
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description carries no behavioral disclosure. It fails to mention pagination behavior (despite the cursor parameter), rate limits, read-only nature, or what the response contains.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (two words), it is under-specified rather than efficiently concise. It fails to front-load critical context like the platform (Reddit) or action (retrieval), providing minimal value per word.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with pagination (cursor), optional response trimming, and likely complex return data (comments), the description is inadequate. It lacks return value documentation, error handling details, or scraping-specific behavioral notes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds no additional parameter context (e.g., URL format expectations, cursor string construction), but the schema adequately documents the three parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Post Comments' is a tautology that restates the tool name without the 'scrapecreators_' prefix. It is ambiguous whether this means 'retrieve comments from a post' or 'submit comments to a post', failing to clearly specify the tool's retrieval purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus siblings like 'scrapecreators_comments' or 'scrapecreators_reddit'. No prerequisites, exclusions, or alternatives are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_post_get (grade: D)

Post

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes | The URL of the post to get |
get_comments | No | Whether you want to get the first several comments of the post |
get_transcript | No | Whether you want to get the transcript of the post |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description carries the full burden of disclosure. It fails to indicate this is a read-only retrieval operation, mentions no rate limits, auth requirements, or what data structure is returned despite having no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, the single word 'Post' represents under-specification rather than effective conciseness. It fails to front-load critical context and wastes the description field with content that adds no value beyond the tool name.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Severely inadequate for a 3-parameter tool with many similarly-named siblings. Lacks explanation of what platform 'scrapecreators' refers to, what constitutes a 'post' in this context, and how the returned data is structured.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, documenting the URL target and boolean flags for comments/transcripts. The description adds no additional parameter context, but with high schema coverage, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Post' is a tautology that merely restates part of the tool name (scrapecreators_post_get) without specifying the action (retrieval) or the resource type (social media post). It fails to distinguish from siblings like 'scrapecreators_post' or 'scrapecreators_posts_get'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus numerous siblings (e.g., scrapecreators_post, scrapecreators_post_comments, scrapecreators_transcript_get). No mention of prerequisites like requiring a specific post URL format.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_postreel_info (grade: D)

Post/Reel Info

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes | Instagram post or reel URL |
trim | No | Set to true for a trimmed response |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosure but provides zero behavioral context. It does not indicate what data is returned (metadata, engagement stats, media URLs), whether scraping requires authentication, rate limits, or if the operation is read-only.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While extremely brief at three words, this represents under-specification rather than efficient conciseness. No sentences earn their place because no substantive information is conveyed; the text functions as a label rather than a functional description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (scraping Instagram content), the absence of annotations, output schema, and behavioral details makes the description completely inadequate. It fails to explain return values, error conditions, or how it handles different Instagram URL formats.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (both 'url' and 'trim' are documented), establishing a baseline of 3. However, the description adds no additional semantic context about parameter formats, validation rules, or the implications of the 'trim' option beyond what the schema already states.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Post/Reel Info' is a tautology that restates the tool name without specifying what information is retrieved or the action performed. It fails to distinguish this tool from siblings like 'scrapecreators_post', 'scrapecreators_reels', or 'scrapecreators_video_info', leaving the agent uncertain about its specific scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus the numerous similar siblings (e.g., scrapecreators_post_get, scrapecreators_reels). The description lacks prerequisites, exclusion criteria, or alternative recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_posts (grade: D)

Posts

Parameters (JSON Schema)
Name | Required | Description | Default
trim | No | Set to true for a trimmed down version of the response |
handle | Yes | Threads username |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not indicate whether this is a read-only operation, what data is returned, pagination behavior, or potential rate limiting.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (single word), this represents under-specification rather than effective conciseness. The lone word 'Posts' wastes the agent's context window by failing to front-load critical distinctions from sibling tools or platform specificity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool requires a specific platform handle (Threads) and competes with 10+ sibling tools with similar names, the description is woefully incomplete. No output schema exists to compensate, leaving the agent without necessary context to select this tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% ('Threads username' for handle, and clear description for trim boolean). The description adds no semantic value beyond the schema, but baseline 3 is appropriate when schema documentation is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Posts' restates the tool name but provides almost no actionable information. It fails to specify the platform (Threads, inferred only from the schema's 'Threads username' parameter), the scope of posts retrieved, or how it differs from siblings like scrapecreators_threads, scrapecreators_profile_posts, or scrapecreators_posts_get.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the numerous sibling alternatives (e.g., scrapecreators_posts_get, scrapecreators_user_posts, scrapecreators_threads). No prerequisites, rate limits, or filtering capabilities are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_posts_get (grade: D)

Posts

Parameters (JSON Schema)
Name | Required | Description | Default
handle | No | Bluesky handle |
user_id | No | Bluesky 'did' (Bluesky's term for a user ID) |
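With both handle and user_id optional, a caller has to decide which parameter to send. One plausible routing rule, based on the fact that W3C DIDs always begin with the 'did:' prefix (the helper itself is hypothetical, not part of the tool):

```python
def bluesky_posts_args(account: str) -> dict:
    """Route a Bluesky account identifier to the right parameter.

    DIDs follow the W3C DID syntax and start with 'did:'
    (e.g. 'did:plc:...'); anything else is treated as a handle.
    The parameter names match the tool's schema; the routing
    rule is an assumption based on the DID format.
    """
    if account.startswith("did:"):
        return {"user_id": account}
    return {"handle": account.lstrip("@")}
```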
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure, yet it states nothing about read-only status, rate limits, pagination, or the Bluesky-specific context. The agent cannot determine if this is a safe read operation or what data volume to expect.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (one word), this represents under-specification rather than efficient conciseness. The single word fails to front-load critical context (platform, action, scope) that would help an agent select this tool correctly from the large sibling set.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, and the dense ecosystem of similar scraping tools (90+ siblings), the description must actively disambiguate platform and function. It provides none of this necessary context, making it inadequate for correct agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the parameter semantics are adequately handled by the schema itself ('Bluesky handle', 'Bluesky did'). The description adds no additional parameter context, but baseline 3 is appropriate when the schema documentation is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Posts' is essentially a tautology that restates the resource noun from the tool name without specifying the action (get/retrieve) or the platform. It fails to distinguish this Bluesky-specific tool from siblings like 'scrapecreators_posts' or 'scrapecreators_profile_posts', leaving the agent to infer the actual function from parameter names alone.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus similar siblings (e.g., 'scrapecreators_posts', 'scrapecreators_post_get', or 'scrapecreators_bluesky'). No explanation for the mutual exclusivity or precedence of 'handle' vs 'user_id' parameters, both of which are optional (required: 0).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_product_details (grade: D)

Product Details

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes | The URL of the product to get details for. |
region | No | Region the proxy will be set to so you can access products from that country. Use 2 letter country codes like US, GB, FR, etc. For England, don't use UK, use GB. |
get_related_videos | No | Whether to get related videos for the product. These are affiliate videos promoting the product. |
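The region note (GB, not UK) is exactly the kind of constraint worth validating client-side before a paid call. A small sketch; the GB rule comes straight from the parameter description, while the alias table beyond it is an assumption:

```python
# Common aliases that are not valid ISO 3166-1 alpha-2 codes.
# The UK -> GB mapping is documented by the tool; EN -> GB is a guess
# at another likely mistake for "England".
REGION_ALIASES = {"UK": "GB", "EN": "GB"}

def normalize_region(code: str) -> str:
    """Uppercase a region code and map known aliases to ISO 3166-1 alpha-2."""
    code = code.strip().upper()
    code = REGION_ALIASES.get(code, code)
    if len(code) != 2 or not code.isalpha():
        raise ValueError(f"expected a 2-letter country code, got {code!r}")
    return code
```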
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description fails to disclose any behavioral traits such as data source, rate limits, caching behavior, or what format the product details are returned in. The description carries the full burden of transparency and provides nothing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, this represents under-specification rather than effective conciseness. The two-word description wastes the opportunity to convey operational context and fails to earn its place as useful documentation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter tool with no annotations and no output schema, the description is completely inadequate. It does not explain what product details are returned, what domains/URLs are supported, or how the proxy region parameter affects results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the parameter meanings (URL, region proxy, related videos flag) are already fully documented in the schema. The description adds no additional parameter context, earning the baseline score for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Product Details' is a tautology that restates the tool name (minus the 'scrapecreators_' prefix) without clarifying what specific product details are retrieved or how this differs from siblings like 'scrapecreators_product_reviews' or 'scrapecreators_shop_products'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives such as 'scrapecreators_product_reviews' for review data or 'scrapecreators_shop_products' for product listings. No prerequisites or constraints are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_product_reviews (grade: D)

Product Reviews

Parameters (JSON Schema)
Name | Required | Description | Default
url | No | The URL of the product (required if product_id is not provided) |
page | No | The page number of the reviews |
product_id | No | The ID of the product (required if url is not provided) |
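The conditional requirement between url and product_id is easy to trip over, since neither parameter is marked required on its own. A sketch of building the argument dict so the constraint fails fast on the client (the helper is illustrative, not part of the server):

```python
def product_reviews_args(url=None, product_id=None, page=None) -> dict:
    """Build an argument dict for scrapecreators_product_reviews.

    The schema requires url when product_id is absent and vice versa,
    so at least one identifier must be supplied. The parameter names
    match the tool's schema; the helper itself is an assumption.
    """
    if url is None and product_id is None:
        raise ValueError("provide url or product_id")
    args = {}
    if url is not None:
        args["url"] = url
    if product_id is not None:
        args["product_id"] = product_id
    if page is not None:
        args["page"] = page
    return args
```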
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not indicate whether this is a read-only operation, what platform(s) it scrapes, pagination limits, rate limiting, or whether authentication is required.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, the two-word description represents severe under-specification rather than efficient conciseness. It fails to front-load critical context or earn its place as a useful signal for tool selection.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter scraping tool with no output schema and no annotations, the description is completely inadequate. It omits the data source/platform, return format, pagination behavior, and how it handles missing data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents all three parameters (url, page, product_id) including conditional requirements. The description adds no parameter-specific semantics, meeting the baseline score of 3 for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Product Reviews' is tautological, merely restating the tool name without a specific verb (scrape, fetch, analyze) or clarifying the scope. It fails to distinguish this tool from siblings like scrapecreators_product_details or scrapecreators_shop_products.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidance is provided. There is no indication of when to use this versus alternatives (e.g., scrapecreators_product_details), no prerequisites, and no mention of the mutual exclusivity between url and product_id parameters beyond what the schema itself states.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_profile_photos (grade: D)

Profile Photos

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes | Facebook page URL |
cursor | No | To paginate through to the next page |
next_page_id | No | To paginate through to the next page |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not indicate whether this is read-only (likely, given the 'scrape' prefix), what data is returned, rate limits, or that it specifically targets Facebook pages (only the schema parameter description reveals the Facebook context).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While only two words, this represents under-specification rather than efficient conciseness. The single noun phrase earns no place as it provides zero actionable information beyond the tool name itself.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, no annotations, and 3 parameters including pagination controls, the description is completely inadequate. It fails to explain the return format (photo URLs? binary data?), the Facebook-specific context, or how pagination behaves.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage ('Facebook page URL', pagination descriptions), so the baseline is 3. The description adds no additional semantic context beyond the schema, but does not need to compensate for coverage gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Profile Photos' is a tautology that restates the tool name (scrapecreators_profile_photos) without specifying the action (fetch/scrape), target platform (implied as Facebook only via the schema parameter description), or scope. It fails to distinguish this tool from siblings like scrapecreators_basic_profile or scrapecreators_posts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like scrapecreators_basic_profile. No mention of pagination strategy despite having cursor and next_page_id parameters, nor any prerequisites like valid Facebook page URLs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_profile_posts (grade D)

Profile Posts

Parameters (JSON Schema)
Name    Required  Description
url     No        Facebook profile URL
cursor  No        To paginate through the posts
pageId  No        Facebook profile page id
Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure yet meets none of it. It fails to disclose rate limits, authentication requirements, data retention policies, pagination behavior beyond the cursor parameter itself, or whether the operation is read-only. The agent receives no warning about potential scraping restrictions or data completeness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two words and constitutes under-specification rather than efficient conciseness. No information is front-loaded; the fragment restates the obvious and wastes the opportunity to convey critical context about platform specificity or sibling differentiation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Completely inadequate for a 3-parameter tool with no output schema and numerous functional siblings. The description omits the target platform (Facebook), return value structure, pagination methodology, and how it differs from the 5+ other post-retrieval tools in the same suite.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (url, cursor, pageId all documented), establishing a baseline score of 3. The description 'Profile Posts' adds zero semantic value regarding parameter usage, optional vs required status, or input format expectations beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Profile Posts' is essentially a tautology of the tool name with spaces added. While it implies the tool retrieves posts from user profiles, it fails to specify the platform (Facebook, inferred only from schema parameter descriptions) and does not distinguish from siblings like 'scrapecreators_posts', 'scrapecreators_user_posts', or 'scrapecreators_facebook'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidance provided. The description lacks any indication of when to use this tool versus alternatives such as 'scrapecreators_facebook_group_posts' or the general 'scrapecreators_facebook' tool. No prerequisites, filtering capabilities, or scope limitations are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
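As an illustration of what these critiques ask for, a rewritten tool definition might look like the following. The wording is a hypothetical suggestion, not the server's actual metadata:

```python
# Hypothetical improved metadata for scrapecreators_profile_posts.
# The description leads with verb + platform + resource, then discloses
# behavior (read-only, paginated) and differentiates sibling tools.
improved_tool = {
    "name": "scrapecreators_profile_posts",
    "description": (
        "Fetch recent posts from a public Facebook profile (read-only). "
        "Paginate by passing the cursor returned in the previous response. "
        "For group posts use scrapecreators_facebook_group_posts; for "
        "profile metadata use scrapecreators_basic_profile."
    ),
}
```

A description shaped this way gives an agent the action, the platform, the behavioral disclosure, and the "use X instead of Y" routing in four short sentences.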

scrapecreators_profile_reels (grade D)

Profile Reels

Parameters (JSON Schema)
Name          Required  Description
url           Yes       Facebook page URL
cursor        No        To paginate through to the next page
next_page_id  No        To paginate through to the next page
Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, yet the description discloses no behavioral traits. It omits critical information such as rate limiting, authentication requirements, data freshness, what fields are returned, or the implications of pagination (cursor vs next_page_id).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, the two-word fragment 'Profile Reels' represents under-specification rather than effective conciseness. It lacks sentence structure and fails to front-load critical context, forcing users to infer purpose from parameter names alone.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (social media scraping with pagination support) and absence of output schema or annotations, the two-word description is completely inadequate. It provides no information about return data structure, error handling, or platform-specific constraints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the structured data adequately documents all three parameters including the Facebook URL specification and pagination tokens. The description adds no semantic value beyond the schema, but meets the baseline since no compensation for schema gaps is required.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Profile Reels' is tautological, merely restating the tool name without the underscores. It fails to specify the platform (implied as Facebook only via the URL parameter description), distinguish from sibling tools like scrapecreators_reels or scrapecreators_profile_posts, or clarify what the tool actually does (scrape, list, fetch?).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidelines are provided. There is no indication of when to use this tool versus alternatives such as scrapecreators_profile_posts, scrapecreators_profile_videos, or scrapecreators_reels, nor any prerequisites or pagination workflow guidance despite the presence of cursor parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
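The pagination workflow these critiques find missing can be sketched concretely. A minimal sketch, assuming a hypothetical call_tool dispatcher and a response that echoes back a next_page_id token — neither shape is documented by the server:

```python
def fetch_all_reels(call_tool, page_url, max_pages=5):
    """Drain paginated results, as the cursor parameters hint but the
    description never explains. `call_tool` and the response keys
    ('reels', 'next_page_id') are assumptions, not documented behavior."""
    reels, cursor = [], None
    for _ in range(max_pages):
        args = {"url": page_url}
        if cursor:
            args["next_page_id"] = cursor  # token from the previous response
        resp = call_tool("scrapecreators_profile_reels", args)
        reels.extend(resp.get("reels", []))
        cursor = resp.get("next_page_id")
        if not cursor:  # no token means the last page was reached
            break
    return reels
```

If the description spelled out this loop (or even named which of cursor/next_page_id to use), the agent would not need to guess at the drain-until-empty convention assumed here.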

scrapecreators_profile_videos (grade D)

Profile Videos

Parameters (JSON Schema)
Name        Required  Description
trim        No        Set to true for a trimmed down version of the response
handle      Yes       TikTok handle
region      No        Region (country) you want the proxy in. Defaults to US.
sort_by     No        What to sort by
user_id     No        TikTok user id. Use this for faster responses.
max_cursor  No        Cursor to get more videos. Get 'max_cursor' from the previous response.
Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description adds zero behavioral context. It does not disclose whether the operation is read-only, rate limits, pagination behavior (despite the presence of max_cursor), what data is returned, or the implications of the 'trim' parameter. The full burden of behavioral disclosure falls on the description, which provides none.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (two words), it is inappropriately sized for a tool with 6 parameters and no output schema. It is not 'front-loaded' with useful information; rather, it is under-specified to the point of uselessness. Conciseness requires appropriate sizing, not just brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 6 parameters targeting a specific platform (TikTok, as revealed only in the schema), the description is severely incomplete. It omits the platform identity, explains nothing about the pagination system indicated by max_cursor, and provides no hint about the response structure or volume of data returned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline score is 3. The description 'Profile Videos' adds no additional semantic value regarding the parameters (e.g., it doesn't explain that 'max_cursor' enables pagination, or what 'sort_by' options are available), but the schema adequately documents each field.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Profile Videos' is tautological, essentially restating the tool name without clarifying the action (scraping/retrieving) or the specific platform (TikTok). While it identifies the resource type, it fails to specify what the tool actually does or how it differs from sibling tools like scrapecreators_profile_posts or scrapecreators_profile_reels.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus alternatives. It does not mention when to use 'handle' versus 'user_id' for lookup, does not reference the pagination mechanism ('max_cursor'), and fails to distinguish this from the many similar profile-scraping siblings (e.g., scrapecreators_profile_photos, scrapecreators_basic_profile).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
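The missing 'handle versus user_id' guidance could be captured in a small client-side helper. This is a sketch under the single assumption the schema states — that user_id yields faster responses when known:

```python
def build_video_args(handle, user_id=None, max_cursor=None, trim=True):
    """Prefer the numeric TikTok user_id when available (the schema notes
    it is faster); fall back to the handle otherwise. These semantics are
    assumptions -- the server's description documents none of this."""
    args = {"trim": trim}
    if user_id:
        args["user_id"] = user_id
    else:
        args["handle"] = handle
    if max_cursor:
        args["max_cursor"] = max_cursor  # pagination token from prior call
    return args
```

A one-line sentence in the description ("pass user_id instead of handle when you already have it") would make this helper unnecessary.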

scrapecreators_reddit (grade D)

Reddit

Parameters (JSON Schema)
Name       Required  Description
url        No        Subreddit URL
subreddit  No        Subreddit name. MUST be case sensitive, so 'AskReddit', not 'askreddit'.
Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description discloses no behavioral traits such as read/write status, rate limits, authentication requirements, or return value structure. The agent has no information about side effects or output format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While the single-word description is not bloated, it is insufficiently sized for the tool's complexity. The content does not earn its place as it provides no actionable information beyond the tool name itself.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of numerous Reddit-related siblings and no output schema, the description is inadequate. It fails to explain what data is retrieved, what distinguishes this from other Reddit tools, or what the user can expect from invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for both parameters ('url' and 'subreddit'), including helpful case-sensitivity guidance. The description adds no parameter semantics, but with full schema coverage, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Reddit' is tautological, merely restating the platform from the tool name without specifying what action is performed (e.g., scraping user profiles, posts, or metadata). It fails to distinguish this tool from siblings like scrapecreators_subreddit_posts or scrapecreators_subreddit_search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives, nor any mention of prerequisites (e.g., whether to provide 'url' or 'subreddit' parameter, or if both are optional). The description offers zero usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_reels (grade D)

Reels

Parameters (JSON Schema)
Name     Required  Description
trim     No        Set to true for a trimmed response
handle   No        Instagram handle
max_id   No        Cursor for pagination
user_id  No        Instagram user id
Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to indicate whether this is a read-only operation, what data structure is returned, rate limits, or the distinction between 'trimmed' and full responses.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While extremely brief (one word), this represents under-specification rather than efficient conciseness: the brevity signals missing information, not optimal information density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 4 parameters, no output schema, and numerous similarly-named siblings, the description is completely inadequate. It fails to explain the Instagram platform context, the relationship between handle and user_id parameters, or what constitutes a 'trimmed' response.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no information about the parameters, but the input schema has 100% description coverage (handle, user_id, trim, max_id all documented). With high schema coverage, the baseline score is 3 even though the description itself is silent on parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description is a single noun 'Reels' which restates the resource implied by the tool name (scrapecreators_reels). It lacks a specific verb explaining the action (fetch, retrieve, list) and fails to distinguish from siblings like scrapecreators_profile_reels, scrapecreators_search_reels, or scrapecreators_reels_using_song.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the 100+ sibling tools, particularly the other reel-related functions. No mention of prerequisites (e.g., requiring either handle or user_id) or pagination behavior despite the max_id parameter.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_reels_using_song (grade D)

Reels using Song

Parameters (JSON Schema)
Name      Required  Description
max_id    No        How you paginate the results. Pass the max_id from the previous response to get the next set of reels.
audio_id  Yes       Sometimes called 'audio_cluster_id'; it can also be just 'audio_id'.
Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosure but provides no behavioral context. It does not indicate whether the operation is read-only, what data structure is returned, rate limits, or whether the tool performs real-time scraping versus cached lookups.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief at only three words, the description is inappropriately sized for the tool's functionality. It is front-loaded with nouns but lacks the necessary predicate or context to be useful, resulting in under-specification rather than efficient conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having only two simple parameters and no output schema, the description is inadequate. It fails to specify that it retrieves Instagram Reels (not TikToks), does not mention the pagination behavior (only implied by the max_id parameter), and provides no indication of the data volume or format returned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for both parameters (audio_id and max_id), including helpful clarification that audio_id may be called 'audio_cluster_id'. Since the schema fully documents the parameters, the description baseline is 3, though it adds no additional semantic context about the parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Reels using Song' is essentially a tautology of the tool name 'scrapecreators_reels_using_song', removing only the prefix. It lacks a specific verb (e.g., 'Retrieve', 'Scrape') and fails to identify the platform (Instagram), which is critical given the sibling tool 'scrapecreators_tiktoks_using_song' performs the same function for a different platform.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus siblings like 'scrapecreators_reels', 'scrapecreators_search_reels', or 'scrapecreators_tiktoks_using_song'. The agent cannot determine from the description alone whether to use this for general reel searches or specifically for audio-based discovery.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_search_ads (grade D)

Search Ads

Parameters (JSON Schema)
Name        Required  Description
query       Yes       Search query
budgets     No        Budgets to filter by
formats     No        Formats to filter by
industries  No        Industries to filter by
objectives  No        Objectives to filter by
placements  No        Placements to filter by
Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to specify which platforms or ad networks are searched, whether the operation is read-only (implied by 'search' but not stated), rate limits, data freshness, or what structure the results return.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two words, which constitutes under-specification rather than effective conciseness. There are no sentences to evaluate; the description functions merely as a label, with no structure and no front-loaded critical information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 6 filter parameters and numerous sibling ad-searching tools with potentially overlapping functionality, the description is grossly incomplete. It does not explain the search scope, return format, pagination, or how it differs from specialized alternatives like scrapecreators_facebook_ad_library.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, all 6 parameters (query, budgets, formats, industries, objectives, placements) are documented in the input schema itself. The description adds no parameter-specific context, but the baseline score of 3 is appropriate since the schema adequately documents the filtering options.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Search Ads' is a tautology that restates the tool name (scrapecreators_search_ads). While it identifies the resource (ads) and action (search), it fails to distinguish from siblings like scrapecreators_advertiser_search, scrapecreators_company_ads, or platform-specific ad libraries (facebook_ad_library, linkedin_ad_library).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidance provided. The description does not indicate when to use this tool versus the numerous sibling ad-related tools, nor does it specify prerequisites, expected query formats, or filtering strategies for the 6 available parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
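With six filter parameters and no guidance, an agent is left to assemble calls blind. A defensive sketch that forwards only the filters actually set — the field names come from the input schema, but the accepted values for each remain undocumented, which is exactly the completeness gap scored above:

```python
def build_search_ads_args(query, **filters):
    """Forward only the filters the caller set. The allowed filter names
    come from the input schema; what *values* each accepts is undocumented,
    so illustrative values below are assumptions."""
    allowed = {"budgets", "formats", "industries", "objectives", "placements"}
    unknown = set(filters) - allowed
    if unknown:
        raise ValueError(f"unknown filters: {sorted(unknown)}")
    args = {"query": query}
    args.update({k: v for k, v in filters.items() if v is not None})
    return args
```

For example, build_search_ads_args("coffee", formats=["video"]) would send only query and formats, leaving the other four filters unset.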

scrapecreators_search_by_hashtag (grade D)

Search by Hashtag

Parameters (JSON Schema)
Name               Required  Description
type               No        Search for all types of content or only shorts
hashtag            Yes       Hashtag to search for
continuationToken  No        Continuation token to get more videos. Get 'continuationToken' from the previous response.
Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not mention rate limits, caching behavior, data freshness, required authentication, what fields are returned, or whether the operation is read-only vs. destructive. The schema suggests 'shorts' support but the description doesn't clarify this.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While extremely brief (3 words), this is under-specification rather than effective conciseness. The fragment conveys no actionable information beyond the tool name itself and fails to front-load critical context about the platform or content scope.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complex domain (social media scraping), 30+ sibling tools including multiple search variants, absence of annotations, and lack of output schema, the description is grossly incomplete. It provides no context about return values, platform specificity, pagination flow, or differentiation from similar tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (all three parameters: type, hashtag, continuationToken are documented). The description adds no additional semantic information about parameters, but the baseline score of 3 is appropriate since the schema already documents the 'shorts' filtering capability and pagination token usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Search by Hashtag' is essentially a tautology that restates the tool name (scrapecreators_search_by_hashtag). It fails to specify what platform or content type is being searched (e.g., YouTube, TikTok, Instagram, or universal), what the search returns, or how it differs from sibling tools like 'scrapecreators_search' or 'scrapecreators_search_by_hashtag_get'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the numerous sibling search alternatives (scrapecreators_search, scrapecreators_search_reels, scrapecreators_search_by_keyword, etc.). There is no mention of pagination workflow (when to use the continuationToken parameter) or prerequisites for the search.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_search_by_hashtag_get (grade D)

Search by Hashtag

Parameters (JSON Schema)
Name     Required  Description
trim     No        Set to true for a trimmed response
cursor   No        Cursor to get more results
region   No        Region the proxy will be set to
hashtag  Yes       Hashtag to search for (without #)
Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but reveals nothing about read-only status, pagination behavior (despite the cursor parameter), proxy/region handling, or what data structure is returned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (3 words), this is under-specification masquerading as conciseness. The fragment earns no place because it provides zero information beyond the tool name itself.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 4 parameters including pagination (cursor) and region proxy settings, and no output schema or annotations, the description is completely inadequate. It should explain the search scope, result format, and pagination behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (hashtag, trim, cursor, region all documented). The description adds no parameter semantics, but baseline 3 is appropriate when schema documentation is comprehensive.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Search by Hashtag' is a tautology that restates the tool name without specifying the platform (TikTok, Instagram, etc.), the type of content returned, or how it differs from the sibling tool 'scrapecreators_search_by_hashtag' (without _get suffix). It fails to identify the specific resource being searched.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
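
To make the contrast concrete, here is one hedged rewrite of the kind of purpose statement the rubric asks for. The TikTok platform and the sibling reference are assumptions inferred from the parameter names and the review's own notes.

```python
# Before/after sketch of a purpose statement. The "after" text assumes a
# TikTok target; the sibling tool it points to is named in the review.
before = "Search by Hashtag"
after = (
    "Search recent TikTok posts matching a hashtag (read-only). "
    "Returns post metadata and a pagination cursor. For username "
    "lookup use scrapecreators_search_users instead."
)

print(after)
```

The rewrite adds a platform, a resource, a return shape, and a selection criterion at the cost of two extra sentences.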

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the nearly identical sibling 'scrapecreators_search_by_hashtag' or other search tools. No prerequisites, rate limit warnings, or authentication requirements are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_search_by_keyword (C)

Search by Keyword

Parameters (JSON Schema)
Name | Required | Description | Default
trim | No | Set to true for a trimmed response |
query | Yes | Keyword to search for |
cursor | No | Cursor to get more results |
region | No | Region for proxy |
sort_by | No | Sort by |
date_posted | No | Time frame filter |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not confirm the read-only nature implied by 'search', explain pagination behavior beyond the cursor parameter, or disclose rate limits, costs, or return format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

At three words, the description is severely underspecified rather than efficiently concise. It provides none of the context needed for a tool with 6 parameters and a crowded field of similar siblings.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of 6 parameters, no output schema, no annotations, and a crowded namespace of similar search tools, the description is incomplete. It does not differentiate this tool from siblings or explain what the search returns.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for all 6 parameters (query, trim, cursor, region, sort_by, date_posted). While the description adds no parameter-specific guidance, the baseline score of 3 is appropriate given the schema's completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Search by Keyword' is a tautology that restates the tool name (scrapecreators_search_by_keyword) without specifying the resource domain (creators, content, ads) or distinguishing from numerous sibling search tools like scrapecreators_search or scrapecreators_search_by_hashtag.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives. The description fails to clarify how keyword search differs from the generic scrapecreators_search or hashtag-based search, leaving agents without selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_search_for_companies (D)

Search for Companies

Parameters (JSON Schema)
Name | Required | Description | Default
query | Yes | Keyword to search for |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not indicate what data source is queried, what fields are returned, rate limits, or whether results include company profiles, ads, or contact information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, the three-word description represents under-specification rather than effective conciseness. It fails to front-load critical domain context or distinguish from siblings, wasting the limited space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the tool suite (100+ sibling tools for social media scraping), the description is inadequate. It does not clarify what constitutes a 'company' in this context (e.g., advertisers on Meta/TikTok vs. generic businesses) or explain return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the single 'query' parameter is documented as 'Keyword to search for'). The description adds no additional semantic context beyond the schema, but baseline 3 is appropriate given the schema's completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Search for Companies' is essentially a tautology of the tool name (scrapecreators_search_for_companies). While it identifies the verb and resource, it fails to distinguish this tool from siblings like 'scrapecreators_advertiser_search' or 'scrapecreators_search', leaving the specific scope (social media companies? advertisers? brands?) undefined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives. Given the presence of similar sibling tools (scrapecreators_search, scrapecreators_advertiser_search, scrapecreators_search_users), the description offers no criteria for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_search_get (D)

Search

Parameters (JSON Schema)
Name | Required | Description | Default
trim | No | Set to true for a trimmed down version of the response |
query | Yes | Search query |
cursor | No | Cursor |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description fails to disclose any behavioral traits: whether this is read-only, what the response structure looks like, pagination behavior (despite having a cursor parameter), or rate limiting. The description carries the full disclosure burden but provides nothing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While only one word, this represents under-specification rather than effective conciseness. The single word 'Search' fails to front-load any meaningful context about scope, domain, or differentiators in a tool ecosystem with 80+ alternatives.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complex domain (social media scraping), absence of output schema, and presence of numerous similar search variants, the description is inadequate. It does not explain return values, pagination, or what distinguishes this GET endpoint from the POST 'scrapecreators_search' variant.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (all 3 parameters have descriptions), establishing a baseline of 3. The description adds no additional semantic context for 'query', 'trim', or 'cursor' beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Search' is a tautology that restates the tool name 'scrapecreators_search_get' without specifying what resource is being searched (creators, ads, content?) or how it differs from sibling tools like 'scrapecreators_search_users_get' or 'scrapecreators_search_ads'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the numerous sibling search alternatives (scrapecreators_search, scrapecreators_search_users_get, scrapecreators_top_search, etc.). Given the crowded namespace, differentiation is essential but absent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_search_reels (D)

Search Reels

Parameters (JSON Schema)
Name | Required | Description | Default
page | No | The page number to return. |
query | Yes | The keyword to search for |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description carries the full burden of behavioral disclosure. It fails to mention rate limits, authentication requirements, data freshness, pagination behavior beyond the parameter name, or what the search returns (metadata, videos, engagement stats).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief at two words, this represents under-specification rather than effective conciseness. The single sentence fails to earn its place by providing actionable context, leaving critical questions (platform, return format) unanswered.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of social media search tools and the absence of an output schema, the description is inadequate. It omits the target platform, result format, and distinguishing features from 100+ sibling tools, leaving agents without sufficient context for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for both 'query' and 'page'. The description adds no additional semantic context (e.g., expected query format, pagination limits), warranting the baseline score of 3 for well-documented schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Search Reels' is essentially a tautology that restates the tool name without clarifying the platform (Instagram, TikTok, or cross-platform) or distinguishing from siblings like 'scrapecreators_reels' or 'scrapecreators_search'. It identifies the resource but lacks specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives such as 'scrapecreators_search' (general search), 'scrapecreators_profile_reels' (user-specific reels), or 'scrapecreators_search_by_hashtag'. No prerequisites or filtering capabilities mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_search_users (D)

Search Users

Parameters (JSON Schema)
Name | Required | Description | Default
trim | No | Set to true for a trimmed response |
query | Yes | Username to search for |
cursor | No | Cursor to get more results |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to mention pagination behavior (despite the cursor parameter), what constitutes a 'trimmed' response, rate limits, or what data structure is returned. No safety or mutation characteristics are described.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (two words), this represents under-specification rather than efficient conciseness. The phrase wastes no words, but it gives an agent too little information to justify selecting this tool over its siblings.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's apparent complexity (searching across creator platforms) and lack of output schema or annotations, the description is inadequate. It omits expected return values, platform scope, and differentiating factors from the numerous sibling search tools available.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (all three parameters have descriptions in the schema). The description adds no additional parameter context, but baseline 3 is appropriate when the schema already fully documents inputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Search Users' is tautological—it merely restates the tool name (scrapecreators_search_users) without adding specificity about which platform or user types are searchable. It fails to distinguish from sibling tools like scrapecreators_search or scrapecreators_search_users_get.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives. The description does not clarify the difference between searching users (this tool) versus searching ads, hashtags, or general content (siblings like scrapecreators_search, scrapecreators_search_ads), nor when to use the GET variant (scrapecreators_search_users_get).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_search_users_get (C)

Search Users

Parameters (JSON Schema)
Name | Required | Description | Default
query | Yes | Username to search for |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. It fails to indicate what data is returned (user IDs, profiles, metadata?), whether the operation is read-only, or any rate limiting concerns. It only implies a search operation by its name.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief at only two words, this represents under-specification rather than effective conciseness. The single sentence fails to earn its place by providing actionable information beyond the tool name itself.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having only one parameter, the description is incomplete due to the lack of output schema and the existence of ambiguous siblings. It does not clarify the return format or differentiate from similar tools, leaving agents uncertain about selection and invocation context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already fully documents the 'query' parameter as 'Username to search for'. The description adds no additional semantic context beyond the schema, warranting the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Search Users' is a tautology that restates the tool name fragment 'search_users' without adding specificity. It fails to distinguish this tool from the sibling 'scrapecreators_search_users' (without _get suffix) or clarify what platform/user type is being searched.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the numerous sibling search tools (e.g., 'scrapecreators_search_users', 'scrapecreators_search', 'scrapecreators_search_get'). No prerequisites, exclusions, or selection criteria are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_shop_products (D)

Shop Products

Parameters (JSON Schema)
Name | Required | Description | Default
url | Yes | The TikTok Shop store URL. |
cursor | No | Cursor parameter from the previous response to retrieve the next page of products. Omit for the first page. |
region | No | Region to get shop products from. Defaults to US if not provided. |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not indicate whether this performs live scraping, what data fields are returned, error handling behavior, or authentication requirements. It also fails to disclose if the operation is read-only or has side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief at only two words, this represents under-specification rather than efficient conciseness. The description is too minimal to front-load any meaningful context, failing the standard that every sentence (or phrase) must earn its place by conveying actionable information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter scraping tool with no output schema, the description is woefully incomplete. It fails to explain the TikTok Shop-specific context (evident only in the schema), expected return structure, or how to handle the paginated results implied by the cursor parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
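
The cursor handling implied by the schema can be sketched as follows. Here `call_tool` is a placeholder for an MCP client call, and the response fields (`products`, `cursor`) are assumptions standing in for the tool's undocumented result shape.

```python
# Sketch of cursor pagination for scrapecreators_shop_products.
# `call_tool` is a placeholder for an MCP client; the response keys
# "products" and "cursor" are assumed, not documented by the server.
def fetch_all_products(call_tool, url, region="US"):
    products, cursor = [], None
    while True:
        args = {"url": url, "region": region}
        if cursor is not None:
            args["cursor"] = cursor  # omit on the first page, per the schema
        page = call_tool("scrapecreators_shop_products", args)
        products.extend(page.get("products", []))
        cursor = page.get("cursor")
        if not cursor:               # no cursor returned: last page
            return products
```

A description stating this loop explicitly (what field carries the cursor, what signals the last page) would let an agent paginate correctly on the first attempt.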

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description 'Shop Products' adds no semantic value regarding the parameters (e.g., expected URL format for 'url', pagination logic for 'cursor', or valid values for 'region'), relying entirely on the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Shop Products' is a near-tautology that restates the tool name without clarifying the action (scraping vs. searching) or scope. It fails to distinguish from siblings like 'scrapecreators_tiktok_shop' or 'scrapecreators_product_details', though the schema reveals this specifically targets TikTok Shop store URLs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like 'scrapecreators_tiktok_shop' or 'scrapecreators_product_details'. No prerequisites, rate limit warnings, or pagination strategy mentioned despite the cursor parameter implying multi-page results.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_snapchat (D)

Snapchat

Parameters (JSON Schema)
Name | Required | Description | Default
handle | Yes | Snapchat username |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosure but provides zero behavioral context. It does not indicate whether the operation is read-only, what data structure is returned, rate limits, or data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (one word), this is under-specification rather than effective conciseness. The single word fails to front-load any actionable information about the tool's functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of numerous sibling platform-scraping tools, the description should clarify this retrieves Snapchat creator data. As written, it provides insufficient context to distinguish this tool's specific value proposition within the tool suite.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (handle is described as 'Snapchat username'), establishing the baseline. The description adds no additional parameter context (format examples, validation rules), but the schema is self-sufficient for a single-parameter tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Snapchat' is a tautology that restates the platform implied by the tool name (scrapecreators_snapchat) but fails to specify what action is performed (e.g., scraping creator profiles). It does not distinguish from sibling tools like scrapecreators_instagram or scrapecreators_tiktok beyond naming the platform.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the 20+ other scrapecreators_* platform tools. No mention of prerequisites (e.g., public vs private Snapchat accounts) or when not to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_story_highlights (D)

Story Highlights

Parameters (JSON Schema)
Name | Required | Description | Default
handle | No | Instagram handle. Use user_id for faster response times. |
user_id | No | Instagram user id. Use for faster response times. |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It fails to indicate whether the operation is read-only, rate-limited, or what data format is returned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, the description suffers from under-specification rather than efficient conciseness. Two words fail to front-load any actionable value for an agent selecting between multiple Instagram-related tools.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple 2-parameter schema with 100% coverage, less description is required than for complex tools. However, with no output schema provided, the description should explain return values or data structure, which it does not.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with parameter descriptions clearly indicating Instagram handle/user_id usage and performance preferences. The description adds no additional parameter context, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Story Highlights' is a tautology that restates the tool name without adding specificity. While it identifies the resource type, it fails to specify the Instagram platform context or distinguish from sibling tools like 'scrapecreators_highlights_details'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives, nor any mention of prerequisites (e.g., public vs. private profiles). The description lacks any 'when to use' or 'when not to use' indicators.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_subreddit_posts (D)

Subreddit Posts

Parameters (JSON Schema)
Name | Required | Description | Default
sort | No | Sort order |
trim | No | Set to true for a trimmed response |
after | No | Cursor for pagination |
subreddit | Yes | Subreddit name (case sensitive) |
timeframe | No | Timeframe filter |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not indicate whether this is read-only, what data structure is returned, how pagination behaves (despite the 'after' parameter), or any error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief at only two words, this represents under-specification rather than effective conciseness. No information is front-loaded because no actionable information is present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Completely inadequate for a 5-parameter tool with no annotations and no output schema. The description fails to explain return values, authentication requirements, or the relationship between the 'sort', 'timeframe', and 'trim' parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (all 5 parameters have descriptions), establishing a baseline of 3. The description adds no parameter-specific context, but the schema adequately documents required vs optional fields and basic types.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Subreddit Posts' is a tautology that restates the tool name without specifying the action (fetch, list, scrape). It lacks a verb and fails to differentiate from siblings like 'scrapecreators_subreddit_search' or 'scrapecreators_reddit'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the 90+ sibling tools available, including similar Reddit-related functions. No prerequisites, rate limit warnings, or selection criteria are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_threads (C)

Threads

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| handle | Yes | Threads username | |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description fails to disclose behavioral traits such as whether this is read-only or destructive, rate limits, pagination behavior, or what data structure is returned. The description carries the full burden and provides minimal information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, this represents under-specification rather than effective conciseness. A single noun is insufficient to convey tool purpose, especially with no output schema to provide additional context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having only one simple parameter, the description is inadequate. It fails to explain what the tool returns, what 'Threads' content is retrieved, or how it relates to the broader scrapecreators ecosystem.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with the 'handle' parameter described as 'Threads username'. The description adds no semantic information beyond the schema, but with complete schema coverage, this meets the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Threads' identifies the target platform but provides no action verb or resource specification. It does not clarify what data is retrieved (profile, posts, metadata) or distinguish this tool from sibling platform tools like scrapecreators_instagram or scrapecreators_twitter.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives. Given numerous sibling tools for different platforms (scrapecreators_tiktok, scrapecreators_facebook) and content types (scrapecreators_posts, scrapecreators_basic_profile), the absence of selection criteria is a critical gap.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_tiktok (D)

TikTok

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| handle | Yes | TikTok handle | |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, yet the description discloses nothing about behavioral traits: no mention of rate limits, authentication requirements, return format, or whether the operation is read-only or destructive. The description carries the full burden and fails completely.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, this is under-specification rather than effective conciseness. A single word fails to front-load any meaningful information about the tool's function, leaving the agent with no actionable context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and zero annotations, a 1-parameter tool still requires description of what data is returned and how it differs from related TikTok scrapers. The description provides insufficient context for invocation decisions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (the 'handle' parameter is documented as 'TikTok handle'), the baseline score is 3. The description adds no additional semantic value regarding parameter format or validation rules, but the schema carries the load adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description is simply 'TikTok', which is a tautology restating the platform name without any verb or action. It fails to specify what data is retrieved (profile, posts, videos?) and does not distinguish from sibling tools like scrapecreators_tiktok_live or scrapecreators_tiktok_shop.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the numerous alternatives (scrapecreators_tiktok_shop, scrapecreators_tiktok_live, etc.) or prerequisites needed. The description offers zero selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_tiktok_live (D)

TikTok Live

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| handle | Yes | TikTok handle | |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, yet the description discloses no behavioral traits: it doesn't indicate if this retrieves real-time vs historical live data, what fields are returned, rate limits, or whether the handle must be active. The description carries the full burden and provides nothing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief at only two words, it is inappropriately terse for a scraping tool with complex output. It is not 'concise' in the productive sense—every sentence should earn its place, but here there is virtually no information content to evaluate.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this appears to be a scraping tool (implied by 'scrapecreators' prefix) with no output schema, the description fails to compensate by explaining what Live data is retrieved (chat, viewers, stream status, replays). Completely inadequate for the apparent complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage ('TikTok handle'), the schema adequately documents the single parameter. The description adds no additional context (e.g., format with/without '@', examples), but baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'TikTok Live' is essentially a noun phrase that restates the tool name without specifying the action (e.g., 'retrieve', 'scrape', 'monitor') or what specific Live data is returned. It fails to distinguish from sibling tool 'scrapecreators_tiktok' which likely handles non-Live content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the general 'scrapecreators_tiktok' tool or other alternatives. No prerequisites (e.g., whether the user must currently be live) or conditions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_tiktok_shop (D)

TikTok Shop

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| page | No | Page number to retrieve | |
| query | Yes | Term you want to search for | |
| region | No | Region to search shop products in. | |
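For reference, a minimal argument set consistent with the table above might look like this. It is a hypothetical sketch: `query` is the only required field per the schema, and the `page` and `region` values are illustrative assumptions (the schema does not document the region format).

```python
# Hypothetical arguments for scrapecreators_tiktok_shop. 'query' is required;
# 'page' and 'region' are optional, and these values are assumptions since
# the schema gives no region format or page-numbering details.
args = {
    "query": "phone case",  # search term
    "page": 2,              # page number to retrieve
    "region": "US",         # region to search shop products in
}
```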
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description fails to disclose any behavioral traits such as rate limits, data freshness, return format, or whether this is a read operation. The description carries the full burden and provides zero behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (two words), this represents under-specification rather than effective conciseness. The extreme brevity fails to communicate necessary context, making it impossible for an agent to understand the tool's function from the description alone.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 3 parameters, no output schema, no annotations, and numerous sibling tools with similar naming patterns, the description is completely inadequate. It fails to explain what data is returned or how it differs from related shop/tiktok tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all three parameters (query, page, region). The description adds no additional semantic information about parameters, but the baseline score of 3 is appropriate given the comprehensive schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'TikTok Shop' is a tautology that restates the tool name without specifying what action is performed (search, scrape, retrieve). It fails to distinguish from siblings like scrapecreators_tiktok (general content) or scrapecreators_amazon_shop (different platform).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives such as scrapecreators_shop_products or scrapecreators_tiktok. No mention of prerequisites, required authentication, or specific use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_tiktoks_using_song (D)

TikToks using Song

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| clipId | No | This is clipId. Can be found on a url like so: https://www.tiktok.com/music/That%27s-Who-I-Praise-7370375686554782506, where 7370375686554782506 is the clipId | |
| cursor | No | The cursor to get the next page of results. | |
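The clipId description above does contain one concrete, actionable detail: the id is the trailing number of a TikTok music URL. A small helper written against that documented URL pattern can extract it (the function name is hypothetical, not part of the tool):

```python
from urllib.parse import urlparse

def extract_clip_id(music_url: str) -> str:
    """Return the trailing numeric clipId from a TikTok music URL.

    Hypothetical helper based on the pattern cited in the schema:
    https://www.tiktok.com/music/<slug>-<clipId>
    """
    # Take the last path segment, then the text after the final hyphen.
    last_segment = urlparse(music_url).path.rstrip("/").rsplit("/", 1)[-1]
    return last_segment.rsplit("-", 1)[-1]

clip_id = extract_clip_id(
    "https://www.tiktok.com/music/That%27s-Who-I-Praise-7370375686554782506"
)
# clip_id == "7370375686554782506"
```

This is exactly the kind of guidance the tool description itself should surface instead of leaving it buried in a parameter string.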
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not indicate whether this is read-only, what data structure is returned (video metadata, URLs, engagement stats), rate limits, or whether the operation is idempotent. The agent has no insight into side effects or output format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (four words), this is under-specification masquerading as conciseness. The description fails to earn its place by providing actionable context. It lacks a verb and any structural organization that would help an agent understand its function at a glance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description should explain what the tool returns (e.g., a list of TikTok videos using the specified song). It should also clarify the relationship to similar tools. Currently, it provides insufficient context for an agent to predict the tool's utility or output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no parameter-specific information, but the input schema has 100% description coverage. The schema clearly documents that clipId is extracted from TikTok music URLs and that cursor handles pagination, so the description meets the baseline expectation when the schema is self-documenting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'TikToks using Song' is essentially a tautology of the tool name (scrapecreators_tiktoks_using_song). It identifies the subject matter but lacks any verb indicating what the tool does (retrieve, list, scrape) and fails to distinguish from siblings like scrapecreators_reels_using_song or scrapecreators_get_song_details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives. There is no mention of prerequisites (e.g., needing a clipId from a TikTok music URL), when to use pagination (cursor), or how this differs from searching TikToks by hashtag or keyword using sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_transcript (D)

Transcript

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| url | Yes | Instagram post or reel URL | |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to indicate whether the operation is read-only, what happens if the URL is invalid or private, rate limits, or whether the returned transcript includes audio transcription, captions, or comments.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief at a single word, this represents under-specification rather than efficient conciseness. No information is front-loaded; the single word fails to earn its place by conveying actionable meaning to the agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single parameter is well-documented in the schema but no output schema exists, the description should explain what the transcript contains (audio transcription vs. captions) and distinguish usage from 'scrapecreators_transcript_get'. It provides none of this necessary context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents the 'url' parameter as 'Instagram post or reel URL'. The description adds no additional semantic value regarding URL formats or examples, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Transcript' is tautological—simply restating the tool name (scrapecreators_transcript) without explaining the action performed (e.g., 'extract', 'fetch', or 'generate'). It lacks a specific verb and fails to distinguish from the sibling tool 'scrapecreators_transcript_get'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives, particularly the sibling 'scrapecreators_transcript_get'. No mention of prerequisites, Instagram URL requirements, or when transcription is available versus unavailable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_transcript_get (D)

Transcript

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| url | Yes | TikTok video URL | |
| language | No | Language of the transcript. 2 letter language code, ie 'en', 'es', 'fr', 'de', 'it', 'ja', 'ko', 'zh' | |
| use_ai_as_fallback | No | Set to 'true' to use AI as a fallback to get the transcript if the transcript is not found. Costs 10 credits to use this feature. | |
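Assembled from the rows above, a call might pass arguments like these. This is a hypothetical example: the video URL is a placeholder, and note that the schema specifies the string 'true' (not a boolean) to enable the fallback.

```python
# Hypothetical arguments for scrapecreators_transcript_get. 'url' is required
# and the value below is a placeholder. Per the schema, 'use_ai_as_fallback'
# takes the string 'true' (not a boolean) and costs 10 credits when it fires.
args = {
    "url": "https://www.tiktok.com/@example/video/1234567890",  # placeholder
    "language": "en",              # optional two-letter language code
    "use_ai_as_fallback": "true",  # optional; 10-credit AI fallback
}
```

The credit cost and the string-typed flag are exactly the behavioral details the description should state up front rather than leave to the schema.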
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fails to disclose critical behavioral traits: it mentions nothing about the 10-credit cost for AI fallback (only in schema), rate limits, what happens when transcripts are unavailable, or the return data structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, the single-word description is under-specification rather than effective conciseness. It fails to front-load any actionable context about the tool's function or constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter tool with cost implications (credit usage) and fallback logic, the description is grossly incomplete. It relies entirely on the schema and provides no high-level context about the retrieval process or output format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (all 3 parameters have descriptions), establishing a baseline score of 3. The description adds no semantic value beyond the schema, but none is needed given the comprehensive parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Transcript' is a tautology that restates the tool name ('scrapecreators_transcript_get') without clarifying what the tool actually does (retrieve TikTok video transcripts) or how it operates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the sibling 'scrapecreators_transcript' (without '_get'), when to enable the AI fallback parameter, or prerequisites like valid TikTok URL formats.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_truth_social (D)

Truth Social

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| handle | Yes | Truth Social username | |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosing behavioral traits, yet it states nothing about whether this is a read-only operation, what data is retrieved, rate limits, or potential errors. The agent has no indication of the tool's side effects or output structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, the description is inappropriately sized—underspecified rather than efficiently concise. It consists of a single label with no sentences to evaluate for information density or front-loading.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a social media scraping tool with no output schema and no annotations, the description is grossly incomplete. It fails to explain what data is returned (profile metadata, posts, followers), pagination behavior, or data format, leaving critical gaps in the agent's understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (defining 'handle' as 'Truth Social username'), so the baseline score applies. The description itself adds no parameter information, but the schema is sufficient to understand the single required input.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Truth Social' merely labels the target platform, functioning as a tautology that restates part of the tool name. It fails to specify what action the tool performs (e.g., scrape profile, fetch posts) or what resource it returns, leaving the agent to guess based solely on naming conventions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the numerous sibling scraping tools (e.g., scrapecreators_twitter, scrapecreators_instagram). There are no prerequisites, exclusions, or alternative suggestions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_truth_social_webhook (D)

Truth Social Webhook

Parameters (JSON Schema)

No parameters

Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure, yet it reveals nothing about side effects (does it mutate webhook configurations?), authentication requirements, rate limits, or the nature of webhook events handled. It is completely opaque regarding behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While the description is only three words, this represents under-specification rather than efficient conciseness. The single 'sentence' does not earn its place because it adds no actionable information beyond what is already evident in the tool name.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of webhook management tools and the lack of annotations, output schema, or parameters, the description is completely inadequate. It fails to explain the webhook lifecycle, expected interactions, or how this tool fits into the broader scrapecreators ecosystem.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. Per evaluation rules, with 0 parameters, the baseline score is 4. No parameter semantics are required or provided in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Truth Social Webhook' essentially restates the tool name (tautology) and fails to specify what the webhook actually does (register, receive, list, or delete webhooks?). While it identifies the platform (Truth Social), it does not distinguish from sibling tool 'scrapecreators_truth_social' or explain the webhook mechanism.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidance is provided. There is no indication of when to use this tool versus the sibling 'scrapecreators_truth_social' tool, no prerequisites for webhook registration, and no mention of when this tool should not be used.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scrapecreators_tweet_details (D)

Tweet Details

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| url | Yes | Tweet URL | |
| trim | No | Set to true for a trimmed response | |
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden yet reveals nothing about side effects, rate limits, authentication requirements, or what constitutes a 'trimmed' response versus full data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While not verbose, the two-word description is inappropriately sized for the complexity of the tool ecosystem. It suffers from under-specification rather than efficient conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the crowded sibling namespace (80+ scrapecreators tools) and lack of output schema, the description fails to specify return data structure or differentiate this single-tweet lookup from bulk retrieval tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (both 'url' and 'trim' have descriptions), establishing baseline 3. The description adds no additional semantic context (e.g., expected URL format, what fields are excluded when trimmed).
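
By contrast, a definition that would score higher on this dimension spells out the URL format and trim semantics in the schema itself. A minimal sketch, with all wording and field values hypothetical:

```python
# Hypothetical rewrite of the scrapecreators_tweet_details definition,
# showing the level of detail the review asks for. The description text and
# schema wording are illustrative assumptions, not the server's actual API.
tweet_details_tool = {
    "name": "scrapecreators_tweet_details",
    "description": (
        "Retrieve metadata and engagement stats for a single tweet by URL. "
        "Read-only. For a user's full timeline, use scrapecreators_user_tweets."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "url": {
                "type": "string",
                "description": "Full tweet URL, e.g. https://x.com/<user>/status/<id>",
            },
            "trim": {
                "type": "boolean",
                "default": False,
                "description": "If true, omit raw entity data and return only core fields",
            },
        },
        "required": ["url"],
    },
}
```

A description written this way carries the verb, the scope (single tweet), and the sibling disambiguation the review finds missing.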

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Tweet Details' is tautological, restating the tool name without adding specificity. It fails to indicate the action (retrieve/scrape) or distinguish from siblings like 'scrapecreators_user_tweets' or 'scrapecreators_twitter'.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the many sibling alternatives (e.g., scrapecreators_user_tweets for user timelines, scrapecreators_comments for replies). No prerequisites or exclusions mentioned.

scrapecreators_twitch (D)

Twitch

Parameters (JSON Schema)
- handle (required): Twitch handle
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description discloses zero behavioral traits. It does not indicate whether this is read-only, what data is returned, pagination behavior, or authentication requirements. The agent has no information about what 'scraping' entails or what Twitch data structure to expect.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (single word), this represents under-specification rather than effective conciseness. The description fails the 'every sentence earns its place' standard because it provides no actionable information beyond what is already encoded in the tool name itself.

Completeness 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, no annotations, and a single undocumented string parameter in the description, the definition is inadequate for a data scraping tool. The description does not leverage the sibling tool context to clarify what specific Twitch creator data is retrieved (profile, streams, clips, etc.).

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents the single 'handle' parameter as 'Twitch handle'. The description adds no additional parameter context, but per calibration rules, high schema coverage establishes a baseline of 3 even without description assistance.

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Twitch' only identifies the target platform/resource but fails to specify what action the tool performs (e.g., scrape profile, get videos, fetch analytics). Given the tool name 'scrapecreators_twitch', the description merely restates the obvious platform suffix without adding the verb or scope.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the numerous sibling scraping tools (scrapecreators_youtube, scrapecreators_instagram, etc.). No prerequisites, rate limit warnings, or platform-specific constraints are mentioned.

scrapecreators_twitter (D)

Twitter

Parameters (JSON Schema)
- handle (required): Twitter handle
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not indicate what data structure is returned, whether the operation is read-only, rate limits, or authentication requirements.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, the single-word description represents under-specification rather than effective conciseness. It fails to earn its place by providing actionable information beyond what is already implied by the tool name.

Completeness 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, annotations, and the presence of numerous specific sibling tools, the description is completely inadequate. It fails to explain what specific Twitter data is scraped or returned.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no parameter information, but the input schema has 100% coverage describing the 'handle' parameter as 'Twitter handle'. With complete schema coverage, the baseline score applies.

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Twitter' identifies the platform but provides no specific verb or resource indication. It fails to distinguish from siblings like scrapecreators_user_tweets or scrapecreators_tweet_details, leaving the agent unsure what specific Twitter data this tool retrieves.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus the numerous alternatives (e.g., scrapecreators_user_tweets, scrapecreators_tweet_details, scrapecreators_basic_profile). No prerequisites, filtering guidance, or selection criteria are mentioned.

scrapecreators_user_boards (D)

User Boards

Parameters (JSON Schema)
- trim (optional): Set to true for a trimmed down version of the response
- handle (required): The username of the user to get boards for (e.g. broadstbullycom from https://www.pinterest.com/broadstbullycom/)
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to indicate whether this is a read-only operation, what data structure is returned, whether it requires authentication, or any side effects. The description adds zero behavioral context beyond the tool name.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While the two-word description is brief, it represents under-specification rather than effective conciseness. The text wastes no words but also fails to earn its place by providing actionable information to the agent.

Completeness 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and the existence of ambiguous sibling tools (scrapecreators_board), the description is inadequate. It does not explain what constitutes a 'board' (Pinterest context implied only in the schema example), what fields are returned, or how this tool differs from related tools in the scrapecreators suite.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage with clear explanations for both 'handle' (including a helpful Pinterest URL example) and 'trim'. Since the schema fully documents the parameters, the description is not penalized for omitting parameter details, meeting the baseline expectation.

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'User Boards' is essentially a tautology that restates the tool name (scrapecreators_user_boards) without adding a specific action verb (retrieve, list, scrape) or clarifying the resource type (Pinterest boards). It fails to distinguish from the sibling tool 'scrapecreators_board' which likely retrieves a specific board rather than all boards for a user.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'scrapecreators_board', nor does it mention prerequisites such as needing a valid Pinterest username or rate limiting considerations. There is no 'when-not-to-use' guidance.

scrapecreators_user_posts (D)

User Posts

Parameters (JSON Schema)
- trim (optional): Set to true for a trimmed down version of the response
- handle (optional): Truth Social username
- user_id (optional): Truth Social user id. Use this for faster response times. Trump's is 107780257626128497. It is the 'id' field in the profile endpoint.
- next_max_id (optional): Used to paginate to the next page
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to explain what the 'trim' parameter removes from responses, pagination limits, rate limiting, or what data structure is returned. The schema mentions Truth Social-specific details, but the description itself is silent on platform-specific behaviors.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (two words), this represents under-specification rather than efficient conciseness. The description fails to front-load critical context (platform, action verb) that would help an agent select this tool from the large sibling set.

Completeness 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 4-parameter scraping tool with no output schema and no annotations, the description is completely inadequate. It omits the platform (Truth Social), the action (scraping/fetching), output format, and how it relates to the broader scrapecreators tool suite.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline score is 3. The schema adequately documents parameters (including the helpful example of Trump's user ID and the pagination mechanism), so the description's lack of parameter commentary is acceptable, though it adds no supplemental context about the 'trim' functionality or pagination depth limits.

Purpose 1/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'User Posts' is tautological—it merely restates the tool name (scrapecreators_user_posts) without adding specificity. It fails to identify the target platform (Truth Social) or distinguish from siblings like scrapecreators_user_tweets or scrapecreators_profile_posts.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives. The description does not mention prerequisites (e.g., requiring either handle or user_id), performance considerations (user_id being faster), or when to paginate using next_max_id.
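
The cursor mechanism the schema hints at (feed next_max_id from one response into the next request) could be driven like this; `call_tool` and the response field names are illustrative assumptions, not the server's documented contract:

```python
def fetch_all_posts(call_tool, user_id, max_pages=5):
    """Cursor-paginate scrapecreators_user_posts until the cursor runs out.

    `call_tool` is a hypothetical MCP client callable. Each response is
    assumed to carry a list of posts plus a `next_max_id` cursor that is
    empty or None on the last page.
    """
    posts, cursor = [], None
    for _ in range(max_pages):
        args = {"user_id": user_id, "trim": True}
        if cursor:
            args["next_max_id"] = cursor  # resume from the previous page
        page = call_tool("scrapecreators_user_posts", args)
        posts.extend(page.get("posts", []))
        cursor = page.get("next_max_id")
        if not cursor:  # no cursor returned: final page reached
            break
    return posts
```

Guidance of this shape (which parameter to prefer, when to paginate) is exactly what the description omits.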

scrapecreators_users_audience_demographics (D)

User's Audience Demographics

Parameters (JSON Schema)
- handle (required): TikTok handle
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not indicate whether the operation is read-only, if there are rate limits, what specific demographic fields are returned, or caching behavior.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (3 words), the description is under-specified rather than efficiently concise. It lacks sentences that earn their place—no information is provided about scope, return values, or specific use cases.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of annotations and output schema, the description should specify what demographic data is returned (age, gender, location, etc.) and confirm the platform context. Currently, it provides insufficient context for effective tool selection.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with the 'handle' parameter clearly described as 'TikTok handle'. Since the schema fully documents the parameter, the description baseline is 3, though the description itself adds no additional semantic context about the parameter.

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'User's Audience Demographics' is essentially a tautology that restates the tool name without adding specificity. It lacks a verb indicating the action (e.g., retrieve/fetch), omits the platform (TikTok), and fails to distinguish from similar sibling tools like 'scrapecreators_age_and_gender'.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided regarding when to use this tool versus alternatives, prerequisites for use, or what specific insights it provides compared to other audience analysis tools in the scrapecreators suite.

scrapecreators_user_tweets (C)

User Tweets

Parameters (JSON Schema)
- trim (optional): Set to true for a trimmed response
- handle (required): Twitter handle
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides almost none. It does not indicate whether the operation is read-only (though implied), what volume of tweets is returned, rate limits, authentication requirements, or what happens if the handle is invalid or private.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely brief (two words), which avoids verbosity, but it is under-specified rather than efficiently informative. It lacks a clear verb or sentence structure that would front-load the action performed by the tool.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and the dense ecosystem of similar sibling tools (scrapecreators_*), the description is incomplete. It fails to describe the return format, data fields included, or how this tool differs from platform-generic alternatives like 'scrapecreators_user_posts'.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (handle as 'Twitter handle', trim as 'Set to true for a trimmed response'), establishing a baseline score. The tool description adds no additional semantic context beyond what the schema already provides, but does not contradict it.

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'User Tweets' is a tautology that restates the tool name without adding specificity. It fails to distinguish from similar sibling tools like 'scrapecreators_user_posts', 'scrapecreators_twitter', or 'scrapecreators_tweet_details', leaving the agent uncertain about which to select for Twitter data retrieval.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., scrapecreators_tweet_details for specific tweets) or prerequisites (e.g., whether the handle must be public). The description offers no 'when-to-use' or 'when-not-to-use' signals.

scrapecreators_video_info (D)

Video Info

Parameters (JSON Schema)
- url (required): TikTok video URL
- trim (optional): Set to true to get a trimmed response
- region (optional): Region of the proxy. Sometimes you'll need to specify the region if you're not getting a response, commonly for videos from the Philippines, in which case you'd use 'PH'. Use 2-letter country codes like US, GB, FR, etc.
- get_transcript (optional): Get transcript of the video
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure, yet it states nothing about read-only safety, rate limits, authentication needs, or what data structure is returned. The agent cannot determine if this is a safe read operation or what 'trim' affects.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief at only two words, this constitutes under-specification rather than efficient conciseness. No sentences exist to earn their place; the description is content-free.

Completeness 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 4-parameter tool with no annotations and no output schema, the description is woefully incomplete. It omits the platform (TikTok), return value expectations, and behavioral constraints that the agent needs for correct invocation.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the parameters are fully documented in the schema (url, trim, region, get_transcript). The description adds no parameter context, but baseline 3 is appropriate since the schema carries the semantic weight.

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Video Info' is a tautology that restates the tool name without specifying the action (retrieve/fetch) or scope. It fails to distinguish this from siblings like 'scrapecreators_tiktok' or 'scrapecreators_transcript', though the schema reveals it targets TikTok specifically.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like 'scrapecreators_transcript' (which overlaps with the get_transcript parameter) or 'scrapecreators_tiktok'. No mention of when to specify the region parameter or prerequisites.

scrapecreators_videoshort_details (C)

Video/Short Details

Parameters (JSON Schema)
- url (required): YouTube video or short URL
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but provides none. It does not indicate what data structure is returned, whether the operation is read-only, rate limits, or if authentication is required for the YouTube scraping.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief at only two words, the description suffers from under-specification rather than efficient conciseness. No information is front-loaded; the text merely labels the tool without explaining its function, wasting the agent's attention.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, combined with a complex sibling ecosystem of overlapping functionality, the description is incomplete. It fails to clarify scope (single video vs. channel videos) or return value structure, leaving agents uncertain about tool selection.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage (the 'url' parameter is documented as 'YouTube video or short URL'), so the schema carries the semantic weight. The description adds nothing beyond the schema, meeting the baseline expectation for high-coverage schemas.

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Video/Short Details' is essentially a tautology that restates the tool name without specifying what action it performs (scrape, fetch, retrieve) or what specific details are returned. It fails to distinguish this tool from siblings like 'scrapecreators_video_info' or 'scrapecreators_channel_shorts'.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the numerous alternatives (e.g., scrapecreators_video_info, scrapecreators_channel_videos, scrapecreators_reels). Given the crowded namespace of similar scraping tools, explicit differentiation is critical but absent.

scrapecreators_youtube (D)

YouTube

Parameters (JSON Schema)
- url (optional): YouTube channel URL. Can pass a channelId, handle, or url
- handle (optional): YouTube channel handle. Can pass a channelId, handle, or url
- channelId (optional): YouTube channel ID. Can pass a channelId, handle, or url
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description discloses nothing about return values, rate limits, authentication requirements, or whether the operation is read-only. The single word provides zero behavioral context.

Conciseness 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, this is under-specification rather than effective conciseness. The single word fails to front-load critical information about the tool's function, leaving the description structurally incomplete.

Completeness 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool likely retrieves YouTube channel data through three interchangeable parameters with no output schema, the description is completely inadequate. It should explain what channel data is returned and how to use the flexible parameter set.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema itself documents that each parameter accepts channelId, handle, or url. The description adds no parameter semantics, but the high schema coverage meets the baseline requirement.

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'YouTube' is a tautology that restates the platform name without specifying the action (scrape/fetch) or resource (creator/channel data). While the tool name 'scrapecreators_youtube' hints at the purpose, the description itself fails to specify what the tool actually retrieves.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus siblings like 'scrapecreators_channel_videos' or 'scrapecreators_transcript', nor which of the three interchangeable parameters (url, handle, channelId) should be preferred.
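
Since the three parameters are documented as interchangeable, a client could normalize whatever identifier it holds before calling the tool. The classification rules below are heuristic assumptions, not documented behavior:

```python
def youtube_args(identifier: str) -> dict:
    """Map a raw identifier to the matching scrapecreators_youtube argument.

    Heuristic only: anything containing '://' is treated as a URL, real
    YouTube channel IDs conventionally start with 'UC', and anything else
    is treated as a handle (with a leading '@' stripped).
    """
    if "://" in identifier:
        return {"url": identifier}
    if identifier.startswith("UC"):
        return {"channelId": identifier}
    return {"handle": identifier.lstrip("@")}
```

A description that stated even this much (which parameter to prefer and why) would resolve the ambiguity the review flags.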

tavily_research (B)

Perform comprehensive research on a given topic or question. Use this tool when you need to gather information from multiple sources, including web pages, documents, and other resources, to answer a question or complete a task. Returns a detailed response based on the research findings. Rate limit: 20 requests per minute.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| input | Yes | A comprehensive description of the research task | |
| model | No | Defines the degree of depth of the research. 'mini' is good for narrow tasks with few subtopics; 'pro' is good for broad tasks with many subtopics. | auto |
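The two parameters above are enough to sketch a call. A minimal example of an MCP tools/call request body for tavily_research, assuming the standard MCP JSON-RPC shape; the research task string is illustrative only:

```python
# Sketch of an MCP tools/call request for tavily_research.
# The envelope follows the MCP JSON-RPC convention; parameter names
# ('input', 'model') come from the schema above.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "tavily_research",
        "arguments": {
            "input": "Compare positioning of the top three CRM vendors",
            "model": "mini",  # 'mini' for narrow tasks, 'pro' for broad ones; default is auto
        },
    },
}
payload = json.dumps(request)
```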
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It does disclose the rate limit (20 requests per minute), and it mentions 'multiple sources' and a 'detailed response,' but it fails to disclose other critical behavioral traits: whether this performs iterative searches, approximate latency and cost relative to search, or synthesis methodology. For a complex research tool, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences: purpose declaration, usage trigger, and return value. Every sentence earns its place with zero redundancy or filler. Information is front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for basic invocation but incomplete given the tool complexity and sibling landscape. It mentions the return type ('detailed response') compensating somewhat for missing output schema, but omits critical context about trade-offs versus 'tavily_search' (speed, depth, cost) that would guide agent selection.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents both parameters (the 'input' task description and 'model' enum with mini/pro definitions). The description adds no parameter-specific guidance, but the high schema coverage establishes a baseline of 3.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Perform comprehensive research') and identifies the resource (topic/question). It implicitly distinguishes from sibling 'tavily_search' through the emphasis on 'comprehensive' and 'multiple sources,' though it doesn't explicitly name the sibling or clarify the boundary between research and search.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides positive guidance ('Use this tool when you need to gather information from multiple sources...') indicating scenarios requiring depth. However, it lacks negative guidance or explicit comparison to 'tavily_search,' leaving the agent to infer when to prefer search over research.

validate-content-before-fold (Grade: B)

Check if content meets the before-fold character and line limits for each social media platform

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| text | Yes | The text content to validate | |
| platform | Yes | Social media platform | |
| contentType | No | Content type (only relevant for YouTube: 'title' or 'description') | post |
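To make the "before-fold" concept concrete, here is a purely illustrative local sketch of what such a validation might compute. The per-platform character and line thresholds below are hypothetical placeholders, not the tool's actual limits, which are undocumented here:

```python
# Illustrative only: these thresholds are hypothetical placeholders,
# not the real tool's documented limits.
BEFORE_FOLD_LIMITS = {
    "linkedin": {"chars": 210, "lines": 3},  # hypothetical
    "x":        {"chars": 280, "lines": 4},  # hypothetical
}

def validate_before_fold(text, platform):
    """Return pass/fail plus the measured metrics for one platform."""
    limits = BEFORE_FOLD_LIMITS[platform]
    lines = text.splitlines() or [""]
    return {
        "valid": len(text) <= limits["chars"] and len(lines) <= limits["lines"],
        "chars": len(text),
        "lines": len(lines),
    }
```

A result shape like this (boolean plus metrics) is precisely what the evaluation above says the description should have disclosed.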
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. While it introduces the 'before-fold' concept, it fails to disclose what the validation returns (boolean pass/fail, specific metrics, error details), whether the operation is read-only, or any rate limiting considerations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence. It is front-loaded with the action verb 'Check' and contains no redundant or wasteful text.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the core validation purpose but lacks completeness given the absence of an output schema and annotations. It does not describe the validation result format or clarify the relationship to sibling tools, leaving gaps in the agent's understanding of the full operation.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents all parameters. The description adds minimal semantic value beyond the schema, though it does contextualize the 'platform' parameter by mentioning platform-specific limits.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Check') and clearly identifies the resource (content) and scope (before-fold character and line limits for social media platforms). However, it does not explicitly distinguish from the sibling tool 'get-text-before-fold', which likely retrieves rather than validates content.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites or when not to use it. Given the existence of the sibling 'get-text-before-fold', explicit differentiation would be valuable.

web_search_exa (Grade: A)

Search the web for any topic and get clean, ready-to-use content.

  Best for: Finding current information, news, facts, people, companies, or answering questions about any topic.
  Returns: Clean text content from top search results.

  Query tips:
  - Describe the ideal page, not keywords: "blog post comparing React and Vue performance", not "React vs Vue".
  - Use category:people / category:company to search through LinkedIn profiles / companies respectively.
  - If highlights are insufficient, follow up with web_fetch_exa on the best URLs.
Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | Natural language search query. Should be a semantically rich description of the ideal page, not just keywords. Optionally include category:<type> (company, people) to focus results — e.g. 'category:people John Doe software engineer'. | |
| numResults | No | Number of search results to return (must be a number). | 10 |
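The query tips above can be sketched as a small helper that composes the arguments object: describe the ideal page, optionally scope with a category prefix. The helper name is this sketch's own invention; the query strings come from the tool's documented examples:

```python
# Sketch: composing web_search_exa arguments per the query tips above.
def build_search_args(description, category=None, num_results=10):
    """Build {'query', 'numResults'}; prepend category:<type> when scoping."""
    query = f"category:{category} {description}" if category else description
    return {"query": query, "numResults": num_results}

# Describe the ideal page, not keywords.
args = build_search_args("blog post comparing React and Vue performance")

# Scope to LinkedIn-style people results.
people = build_search_args("John Doe software engineer", category="people", num_results=5)
```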
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses output characteristics ('clean, ready-to-use content'), but omits operational details like rate limits, authentication requirements, caching behavior (only hinted at via the 'livecrawl' parameter), or error handling scenarios.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured with three distinct, high-value sentences: purpose declaration, usage guidance ('Best for'), and return value specification ('Returns'). Zero waste, front-loaded, and appropriately sized for the tool's complexity.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description adequately compensates by describing return values ('Clean text content from top search results'). However, it misses the opportunity to clarify relationship boundaries with similar search tools (tavily_search) and lacks operational warnings given the 5-parameter complexity.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds no parameter-specific guidance beyond the schema (e.g., no advice on when to use 'fast' vs 'auto' type, or 'preferred' vs 'fallback' livecrawl), but the schema itself is sufficiently descriptive.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the core action ('Search the web') and output ('clean, ready-to-use content'). However, it fails to distinguish this tool from sibling search tools like 'tavily_search' or 'tavily_research', leaving ambiguity about which web search tool to select.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'Best for:' line provides implied usage context (current information, news, facts), but lacks explicit when-not-to-use guidance or named alternatives. Given the presence of specialized search siblings (tavily_search, various social scrapers), explicit differentiation would improve selection confidence.
