Peec AI MCP Server
Server Details
Connect your AI assistant to your Peec AI account to monitor and analyze your brand's visibility across AI search engines like ChatGPT, Perplexity, and Gemini. Ask questions about brand visibility, competitor comparisons, source citations, and trends: all in plain language, directly from your AI tools.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 8 of 8 tools scored.
Each tool has a clearly distinct scope: three report types cover different granularity levels (brand, domain, URL) while five list tools cover separate entity types (projects, brands, prompts, tags, topics). No overlapping functionality between tools.
Perfectly consistent snake_case naming throughout. Follows clear verb_noun patterns: 'get_*_report' for analytics generation and 'list_*' for entity discovery. No deviations in style or structure.
Eight tools is well-suited for this focused analytics domain. Covers three reporting levels plus necessary metadata lookup tools for filtering, without bloat. Each tool earns its place in the AI search visibility analysis workflow.
Strong coverage for read-only analytics with brand/domain/URL reports and full filtering dimension support. Minor gaps: no single-item retrieval (e.g., get_brand vs list_brands) and no chat-level detail tool, though chat_id filtering in reports provides partial coverage.
Available Tools
38 tools

create_brand: Create Brand
Create a new brand (competitor or own) tracked in a project. Returns the created brand id. Confirm with the user before calling — this mutates project data.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Brand name | |
| color | No | Hex color like #1A2B3C | |
| regex | No | Optional regex matching brand mentions in chat text | |
| aliases | No | Alternate names for the brand | |
| domains | No | Domains associated with the brand | |
| project_id | Yes | The project ID |
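To make the call shape concrete, here is a minimal sketch of an MCP `tools/call` request for this tool. The request envelope follows the MCP JSON-RPC convention; every argument value (the project ID, brand name, color, aliases, and domains) is a made-up placeholder, not data from a real Peec project.

```python
import json

# Sketch of a "tools/call" request for create_brand. Parameter names come
# from the table above; all values are illustrative placeholders.
create_brand_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_brand",
        "arguments": {
            "project_id": "proj_123",        # required (placeholder ID)
            "name": "Acme Analytics",        # required brand name
            "color": "#1A2B3C",              # optional hex color
            "aliases": ["Acme", "Acme AI"],  # optional alternate names
            "domains": ["acme.example"],     # optional associated domains
        },
    },
}

print(json.dumps(create_brand_request, indent=2))
```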
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a non-readOnly, non-destructive mutation (readOnlyHint: false, destructiveHint: false). The description adds valuable context: it 'mutates project data' and requires user confirmation, which goes beyond annotations. However, it doesn't mention potential side effects like duplicate handling or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence states purpose and return value. Second sentence provides critical usage guidance. Every word earns its place, and key information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with no output schema, the description covers the essential: purpose, return value, and mutation warning. It doesn't explain error cases or response format, but given the annotations cover safety and the schema documents parameters well, this is reasonably complete. The user confirmation guidance is particularly valuable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters are documented in the schema. The description doesn't add any parameter-specific details beyond what's in the schema (e.g., it doesn't explain brand name constraints or regex usage). Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a new brand'), specifies the resource ('brand'), and distinguishes it from siblings by mentioning it tracks brands in a project (unlike generic create tools like create_prompt or create_tag). It also specifies the return value ('Returns the created brand id').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Confirm with the user before calling — this mutates project data.' This provides clear guidance on user confirmation and distinguishes it from read-only tools. It also implicitly contrasts with siblings like list_brands (read-only) and delete_brand (destructive).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_brands: Create Brands (Bulk)
Create up to 50 brands (competitors or own) in a project in one call. Returns per-item results (created / skipped). Duplicates are matched case-insensitively on name. Confirm with the user before calling — this mutates project data.
| Name | Required | Description | Default |
|---|---|---|---|
| brands | Yes | Up to 50 brands to create | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses important behavioral traits beyond annotations: it returns per-item results (created/skipped) and handles duplicates case-insensitively by name. It also states that it mutates project data, which complements the annotations (destructiveHint=false, but creation is still a mutation). The description adds value by explaining the duplicate handling and result format, which are not captured in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at three sentences, front-loading the core action and limitations. Every sentence adds value: bulk limit, return format, duplicate handling, and usage caution. There is no redundant or unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is fairly complete given the tool's complexity and schema richness. It covers the purpose, limits, duplicate handling, and mutation warning. No output schema exists, but the return format is briefly described (per-item results). The only minor gap is that it does not explicitly state whether the tool is idempotent, though the duplicate handling (duplicates are skipped rather than re-created) suggests repeated calls are effectively safe. Overall, it is sufficient for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds minimal parameter-specific meaning beyond what the schema provides. It mentions 'up to 50 brands' and 'per-item results' but does not elaborate on each parameter's semantics beyond the schema's own descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states what the tool does: 'Create up to 50 brands (competitors or own) in a project in one call.' It distinguishes from the sibling 'create_brand' by indicating it is for bulk creation (up to 50), thus clarifying the difference between the two similar-sounding tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: for bulk creation of brands, and it instructs to 'Confirm with the user before calling' due to mutation of project data. It does not explicitly mention when not to use it, but the instruction to confirm with the user implies caution, and the bulk nature distinguishes it from the single-item sibling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_prompt: Create Prompt
Create a new prompt in a project. Returns the created prompt id. Confirm with the user before calling — this mutates project data and may consume plan credits.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | The prompt text | |
| tag_ids | No | Tag IDs to attach to the prompt | |
| topic_id | No | Topic ID to attach the prompt to | |
| project_id | Yes | The project ID | |
| country_code | Yes | Two-letter country code the prompt targets |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it explicitly states this 'mutates project data' (reinforcing annotations' readOnlyHint=false) and warns about 'may consume plan credits' (a cost implication not covered by annotations). While annotations cover basic safety (non-destructive, non-idempotent), the description provides practical usage warnings that enhance transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences: the first states the core functionality and return value, the second provides critical usage warnings. Every word earns its place, and the most important information (the mutation warning) is appropriately front-loaded in the second sentence.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with no output schema, the description provides good context: it specifies the return value ('Returns the created prompt id'), mentions data mutation and credit consumption, and gives usage guidance. The main gap is not explaining what happens with the created prompt or how it integrates with the system, but given the annotations and clear purpose, this is reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, all parameters are documented in the input schema. The description doesn't add any parameter-specific information beyond what's in the schema (like explaining relationships between parameters or usage patterns). This meets the baseline for high schema coverage but doesn't provide extra semantic value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a new prompt') and resource ('in a project'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'update_prompt' or 'delete_prompt' beyond the basic verb, missing explicit comparison that would warrant a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Confirm with the user before calling — this mutates project data and may consume plan credits.' This clearly indicates prerequisites (user confirmation) and consequences (data mutation, credit consumption), offering strong usage context without needing to reference alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_prompts: Create Prompts (Bulk)
Create up to 50 prompts in a project in one call. Returns per-item results (created / skipped / rejected). Accepts existing topic_id and tag_ids only — this tool does not auto-create topics or tags. Confirm with the user before calling — this mutates project data and may consume plan credits.
| Name | Required | Description | Default |
|---|---|---|---|
| prompts | Yes | Up to 50 prompts. Each must use existing topic_id and tag_ids; unknown IDs are returned in `rejected`. | |
| project_id | Yes | The project ID |
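As a sketch under stated assumptions: the arguments below follow the parameter table, with each prompt item assumed to mirror the single create_prompt fields (text, country_code, optional topic_id / tag_ids), and the result helper assumes the per-item payload is keyed by the created / skipped / rejected categories named in the description. None of these field names are confirmed wire-format details.

```python
# Hypothetical create_prompts arguments and result handling. Each prompt item
# is assumed to carry the same fields as the single create_prompt tool; the
# result keys ("created" / "skipped" / "rejected") follow the categories in
# the description but are assumptions about the exact payload shape.
arguments = {
    "project_id": "proj_123",  # placeholder project ID
    "prompts": [
        {"text": "best ai search monitoring tools", "country_code": "US"},
        {"text": "peec ai vs competitors", "country_code": "DE", "topic_id": "topic_42"},
    ],
}

def summarize_bulk_result(result: dict) -> str:
    created = result.get("created", [])
    skipped = result.get("skipped", [])
    rejected = result.get("rejected", [])  # e.g. items with unknown topic_id / tag_ids
    return f"{len(created)} created, {len(skipped)} skipped, {len(rejected)} rejected"

print(summarize_bulk_result({"created": [{"id": "p_1"}], "skipped": [], "rejected": []}))
```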
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses mutation (the annotations do not mark the tool read-only), credit consumption, and the requirement to use existing topic/tag IDs. This adds behavioral context beyond annotations, which only show readOnlyHint=false and destructiveHint=false.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, each adding value: first sentence states primary behavior, second adds constraints, third warns about side effects and credit usage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers key aspects: bulk limit, per-item results, ID constraints, and user confirmation. Lacks output schema or return format details, but given the description of per-item results, it's reasonably complete for an AI agent to decide to invoke.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, so baseline is 3. The description adds context about per-item results and rejection of unknown IDs, but doesn't detail parameter syntax beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the action (create), resource (prompts), and key constraint (bulk, up to 50 per call). Differentiates from sibling 'create_prompt' by specifying bulk capability and per-item results.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (creating multiple prompts), what it does NOT do (auto-create topics/tags), and requires user confirmation before calling due to mutation and credit cost.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_tag: Create Tag
Create a new tag in a project. Tags are cross-cutting labels you can attach to prompts. Returns the created tag id. Confirm with the user before calling.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Tag name | |
| color | No | Tag color | gray |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-readOnly, non-destructive, non-idempotent, non-openWorld operation. The description adds value by specifying that it 'Returns the created tag id,' which is behavioral output info not covered by annotations. It doesn't contradict annotations, as 'Create' aligns with readOnlyHint=false.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by additional context and a usage guideline in just two sentences. Every sentence adds value without redundancy, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with no output schema, the description provides the return value ('created tag id') and a key usage guideline. Annotations cover safety aspects, but it could mention potential errors or prerequisites (e.g., valid project_id). Overall, it's mostly complete given the context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for all parameters (name, color, project_id). The description doesn't add extra semantic details beyond the schema, such as format constraints or examples, so it meets the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a new tag'), the resource ('tag'), and the context ('in a project'), with additional context about tags being 'cross-cutting labels you can attach to prompts.' It distinguishes from siblings like 'create_prompt' or 'create_topic' by focusing specifically on tags.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit guidance to 'Confirm with the user before calling,' which is a clear when-to-use directive. This is crucial for a creation tool that might have side effects, though it doesn't specify alternatives like 'update_tag' or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_tags: Create Tags (Bulk)
Create up to 50 tags in a project in one call. Returns per-item results (created / skipped). Duplicates are matched case-insensitively on name. Confirm with the user before calling — this mutates project data.
| Name | Required | Description | Default |
|---|---|---|---|
| tags | Yes | Up to 50 tags to create | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description goes beyond annotations by disclosing key behaviors: 'Returns per-item results (created / skipped)' informs about partial success handling; 'Duplicates are matched case-insensitively on name' clarifies duplicate resolution; 'this mutates project data' reinforces that it is not read-only. The annotations are all false; since readOnlyHint=false simply means the tool is not read-only, this aligns with the stated mutation and there is no contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, each adding critical information: bulk limit, return behavior, duplicate handling, and user confirmation requirement. No wasted words; front-loaded with purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (two parameters, no output schema), the description fully covers what an agent needs: constraints (max 50), duplicate behavior, and the mutation warning. Although no output schema is present, return behavior is described at a high level, and no further details are needed for safe usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds meaning beyond the input schema by specifying that up to 50 tags can be created, and that duplicates are handled case-insensitively. It does not detail the exact return format, but the description mentions per-item results, adding value. One point deducted for not elaborating on the 'color' parameter's purpose, though it's self-explanatory.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states what the tool does: 'Create up to 50 tags in a project in one call.' The verb 'Create' is specific, the resource 'tags' is explicit, and the scope (bulk, up to 50) distinguishes it from sibling tools like create_tag (single).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says when to use this tool: to create tags in bulk. It also provides a clear when-not instruction: 'Confirm with the user before calling — this mutates project data,' which sets expectations for responsible use. No sibling alternatives are mentioned, but the bulk nature differentiates it from create_tag.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_topic: Create Topic
Create a new topic in a project. Topics group related prompts. Returns the created topic id. Confirm with the user before calling.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Topic name | |
| project_id | Yes | The project ID | |
| country_code | No | Optional two-letter country code for the topic |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies the return value ('Returns the created topic id') and includes the confirmation requirement. Annotations already indicate this is a non-readonly, non-destructive, non-idempotent operation, but the description provides practical implementation guidance that complements rather than contradicts the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with three focused sentences: the core functionality, the return value, and the usage requirement. Each sentence earns its place by providing essential information without redundancy or unnecessary elaboration, making it highly efficient and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with no output schema, the description provides good completeness: it explains what the tool does, what it returns, and includes a critical usage requirement. While it could potentially mention error conditions or validation rules, it covers the essential context needed for proper tool invocation given the available structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation but doesn't provide additional semantic context about the parameters' roles or relationships.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Create a new topic'), the target resource ('in a project'), and the purpose ('Topics group related prompts'), which distinguishes it from sibling tools like create_prompt or create_tag that create different resource types. It provides a complete understanding of what the tool does beyond just the name/title.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides usage guidance with 'Confirm with the user before calling,' which is crucial for a creation tool. While it doesn't name specific alternatives, the context of sibling tools (create_prompt, create_tag) implies this is for topic creation specifically, and the confirmation requirement provides clear operational guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_topics: Create Topics (Bulk)
Create up to 50 topics in a project in one call. Topics group related prompts. Returns per-item results (created / skipped / rejected). Duplicates are matched case-insensitively on name. Items beyond the project's topic limit land in rejected. Confirm with the user before calling — this mutates project data.
| Name | Required | Description | Default |
|---|---|---|---|
| topics | Yes | Up to 50 topics to create | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are all false (not read-only, not idempotent, destructiveHint=false), but the description explicitly warns of mutation and describes per-item results, duplicate handling, and limit enforcement. No contradictions. Slight deduction for missing details on whether partial failures roll back.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, each adding distinct value. Clearly front-loaded with main purpose. Could be slightly more structured (e.g., bullet points) but overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a bulk creation tool with 2 well-documented parameters and no output schema, the description covers limits, return shape, error cases, and pre-call confirmation. Minor gap: no mention of required permissions or what happens on partial success (e.g., does it roll back?).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters (project_id and topics). Description adds context like max 50, duplicate handling, but baseline 3 is appropriate since schema already explains the parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Create up to 50 topics in a project in one call'. It specifies the verb (create), resource (topics), scope (bulk up to 50), and distinguishes from sibling create_topic (presumably single) by noting bulk capability.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Confirm with the user before calling — this mutates project data', provides when-not-to-use guidance. Also explains behavior on duplicates and limits, helping agent decide when to call this vs other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_brand: Delete Brand [destructive]
Soft-delete a brand within a project. This is destructive — always confirm with the user before calling.
| Name | Required | Description | Default |
|---|---|---|---|
| brand_id | Yes | The brand ID to delete | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare destructiveHint=true, readOnlyHint=false, etc., covering safety and mutability. The description adds valuable context by specifying 'soft-delete' (not permanent deletion) and emphasizing user confirmation, which goes beyond annotations. However, it doesn't detail error conditions or recovery options.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with zero waste: the first states the action and scope, the second provides a critical warning. It's front-loaded with the core purpose and efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema, the description is mostly complete: it covers purpose, usage guidelines, and behavioral context (soft-delete, confirmation). However, it lacks details on return values or error handling, which would be helpful given the destructive nature.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters ('project_id', 'brand_id') fully documented in the schema. The description doesn't add any parameter-specific details beyond what the schema provides, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('soft-delete'), the resource ('a brand'), and the scope ('within a project'). It distinguishes from siblings like 'list_brands' (read) and 'update_brand' (modify), making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('soft-delete a brand') and provides a critical exclusion guideline ('always confirm with the user before calling'). It distinguishes from alternatives like 'update_brand' (modify) and 'list_brands' (read), offering clear usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_brands: Delete Brands (Bulk) [destructive]
Soft-delete up to 50 brands in a project. Returns per-item results (deleted / skipped). This is destructive — always confirm with the user before calling.
| Name | Required | Description | Default |
|---|---|---|---|
| brand_ids | Yes | Up to 50 brand IDs to delete | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare destructiveHint: true, so the description's mention of 'destructive' aligns but adds value by specifying 'soft-delete' and 'returns per-item results (deleted / skipped).' It also sets a batch limit of 50. No contradiction with annotations. The description adds behavioral details beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, both front-loaded with essential information: action, constraint, return type, and warning. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that annotations cover the destructive nature and the schema covers parameters, the description adds context about return values and user confirmation requirement. Output schema is absent, but the description mentions returns partially. Could optionally explain soft-delete behavior (reversible?), but overall sufficient for a bulk deletion tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, meaning both parameters are already described in the schema. The description adds no additional parameter semantics beyond confirming the batch limit and the nature of the operation. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Soft-delete up to 50 brands in a project' which clearly specifies the action (soft-delete), resource (brands), and constraints (up to 50, in a project). It effectively distinguishes from sibling tools like delete_brand (single) and create_brands (non-destructive).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly warns 'This is destructive — always confirm with the user before calling.' This provides clear when-to-use guidance and a prerequisite (user confirmation). However, it does not explicitly mention when NOT to use it or alternatives, though the destructive hint and sibling context imply it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_prompt: Delete Prompt [destructive]
Soft-delete a prompt and cascade the deletion to its chats. This is destructive — always confirm with the user before calling.
| Name | Required | Description | Default |
|---|---|---|---|
| prompt_id | Yes | The prompt ID to delete | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies 'soft-delete' (indicating data might be recoverable) and 'cascade the deletion to its chats' (explaining side effects). Annotations already indicate destructiveHint=true, so the description reinforces this without contradiction, but adds operational details not covered by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and highly concise: two sentences that efficiently convey purpose, behavior, and usage guidelines without any wasted words. Every sentence earns its place by adding critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema, the description is mostly complete: it covers purpose, behavioral traits (soft-delete with cascade), and usage caution. However, it lacks details on error conditions or what happens if the prompt doesn't exist, leaving minor gaps given the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents both parameters (prompt_id and project_id). The description does not add any parameter-specific details beyond what the schema provides, so it meets the baseline of 3 without compensating for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('soft-delete a prompt') and the resource affected ('prompt and its chats'), distinguishing it from sibling tools like delete_brand or delete_tag by specifying the cascade effect to chats. It goes beyond just restating the name/title.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'always confirm with the user before calling.' This directly addresses when to use this tool (with user confirmation) and implies caution due to its destructive nature, setting it apart from non-destructive sibling tools like get_actions or list_prompts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_prompts: Delete Prompts (Bulk) [destructive]
Soft-delete up to 50 prompts in a project. Deletions run asynchronously — response reports which IDs were queued, skipped (not found / already deleted), or rejected. This is destructive — always confirm with the user before calling.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID | |
| prompt_ids | Yes | Up to 50 prompt IDs to delete |
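A short sketch of how an agent might report the asynchronous outcome. The queued / skipped / rejected buckets come from the description above; the exact key names in the response are assumptions made for this example.

```python
# Hypothetical handling of a delete_prompts response. The tool reports which
# prompt IDs were queued, skipped (not found / already deleted), or rejected;
# the bucket key names below are assumed for illustration.
def report_bulk_delete(result: dict) -> None:
    for bucket in ("queued", "skipped", "rejected"):
        ids = result.get(bucket, [])
        if ids:
            print(f"{bucket}: {', '.join(ids)}")

report_bulk_delete({"queued": ["p_1", "p_2"], "skipped": ["p_9"], "rejected": []})
```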
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses soft-delete, async execution (IDs queued/skipped/rejected), and destructive nature. While annotations include destructiveHint=true, the description adds rich context about the async behavior and confirmation requirement, going well beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first defines action and constraints, second explains behavior and required confirmation. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description explains the return structure (IDs queued/skipped/rejected) and async batch behavior. Lacks details on response format but sufficient for a delete operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both parameters. Description does not add additional parameter-level meaning beyond what schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Explicit verb ('delete'), resource ('prompts'), scope ('up to 50 in a project'), and bulk nature are clear. Distinguishes from sibling 'delete_prompt' which deletes a single prompt.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states confirmation required ('always confirm with the user before calling'), implying this is for user-triggered actions and not autonomous use. Also clarifies async behavior but does not explicitly mention when not to use or compare to alternative delete methods.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_tag: Delete Tag [destructive]
Soft-delete a tag within a project and detach it from all prompts. This is destructive — always confirm with the user before calling.
| Name | Required | Description | Default |
|---|---|---|---|
| tag_id | Yes | The tag ID to delete | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations: it specifies 'soft-delete' (indicating reversible or archival behavior) and 'detach it from all prompts' (a side effect). Annotations already declare destructiveHint=true, so the warning in the description reinforces but doesn't contradict them. It doesn't detail error conditions or recovery options.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core action and side effect in the first sentence, followed by a critical warning. Both sentences earn their place by providing essential information without redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema, the description is reasonably complete: it explains the action, side effects, and includes a safety warning. However, it doesn't specify what 'soft-delete' entails (e.g., recovery options) or potential error cases, leaving some gaps in behavioral context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters clearly documented in the schema. The description doesn't add any parameter-specific details beyond what the schema provides, such as format examples or constraints, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('soft-delete a tag'), the resource ('within a project'), and the side effect ('detach it from all prompts'). It distinguishes from sibling tools like 'delete_brand' or 'delete_topic' by specifying it's for tags.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides usage guidance with 'always confirm with the user before calling,' which is crucial for a destructive operation. It implies this tool should be used for tag deletion specifically, not alternatives like 'update_tag' or general deletion tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_tags: Delete Tags (Bulk) [destructive]
Soft-delete up to 50 tags in a project. Removes tag associations from prompts. Returns per-item results (deleted / skipped). This is destructive — always confirm with the user before calling.
| Name | Required | Description | Default |
|---|---|---|---|
| tag_ids | Yes | Up to 50 tag IDs to delete | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds important behavioral details beyond annotations: it specifies soft-delete (not hard), removes tag associations, returns per-item results, and has a limit of 50 tags. These align with and expand upon the destructiveHint=true annotation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, each with a distinct purpose: what it does, effects, warning. No wasted words. Front-loaded with the verb and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema (2 params, no output schema), the description covers all needed context: action, effects, constraints, and usage guidance. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description does not add additional parameter-specific details beyond what the schema already provides for tag_ids and project_id. It implies the limit of 50, which is also in the schema's maxItems constraint.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (soft-delete), resource (tags), scope (up to 50 tags in a project), and behavior (removes associations, returns per-item results). It distinguishes itself from sibling tools like delete_tag (singular) and delete_brands (different resource).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly warns that the tool is destructive and instructs the agent to confirm with the user before calling. This provides clear when-to-use and when-not-to-use guidance, fulfilling the usage guideline requirement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_topic: Delete Topic [destructive]
Soft-delete a topic within a project. Associated prompts are detached (not deleted); prompt suggestions on the topic are deleted. This is destructive — always confirm with the user before calling.
| Name | Required | Description | Default |
|---|---|---|---|
| topic_id | Yes | The topic ID to delete | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare destructiveHint=true, but the description adds valuable context: it's a 'soft-delete' (not permanent), specifies that associated prompts are detached (not deleted), and prompt suggestions are deleted. This clarifies the actual behavior beyond the generic destructive annotation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: first states the action and scope, second details effects on related data, third provides critical usage warning. Each sentence adds essential information, and it's front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema, the description is quite complete: it explains the soft-delete nature, effects on associated data, and includes a safety warning. It could slightly improve by mentioning if the deletion is reversible or the response format, but it covers most critical aspects given the annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters (project_id and topic_id). The description doesn't add any parameter-specific details beyond what's in the schema, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('soft-delete a topic'), the resource ('within a project'), and distinguishes from siblings by specifying what happens to associated prompts (detached, not deleted) and prompt suggestions (deleted). This goes beyond just restating the name/title.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'always confirm with the user before calling,' providing clear when-to-use guidance. While it doesn't name specific alternatives, the context of sibling tools (like update_topic) implies this is for removal rather than modification.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_topics: Delete Topics (Bulk) [destructive]
Soft-delete up to 50 topics in a project. Detaches associated prompts (prompts are kept) and soft-deletes any prompt suggestions linked to the topics. Returns per-item results (deleted / skipped). This is destructive — always confirm with the user before calling.
| Name | Required | Description | Default |
|---|---|---|---|
| topic_ids | Yes | Up to 50 topic IDs to delete | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructiveHint=true. The description adds that it soft-deletes, detaches prompts, soft-deletes suggestions, and returns per-item results. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each necessary: what it does, side effects, and usage caution. Front-loaded with action and constraints.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description covers return value (per-item results). Could add more about error handling or when it's safe to use, but overall sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description does not need to add parameter details. A baseline of 3 is appropriate since the description doesn't add extra meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific action (soft-delete), resource (topics), constraints (up to 50, in a project), and distinguishes from siblings by mentioning bulk operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use (confirm with user), says it's destructive, and does not need alternative guidance as siblings are other delete operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_actions · Get Actions · A · Read-only · Idempotent
Get Peec's opportunity-scored action recommendations for improving brand visibility in AI search engines. Always call with scope=overview first to see which slices have the biggest opportunity, then drill down into owned, editorial, reference, or ugc with the surfaced url_classification or domain.
Required parameters (read before calling)
Every call must include:
- project_id — the project to analyze.
- scope — one of overview | owned | editorial | reference | ugc. Start with scope=overview.
Recommended:
- start_date and end_date (ISO YYYY-MM-DD). Optional — if omitted, defaults to the last 30 days (today − 30d to today). Prefer a 30-day window unless the user asks for a different one.
Per-scope extras (the call will fail without them):
- scope=owned → url_classification is required (e.g. "LISTICLE").
- scope=editorial → url_classification is required (e.g. "LISTICLE").
- scope=reference → domain is required (e.g. "wikipedia.org").
- scope=ugc → domain is required (e.g. "reddit.com", "youtube.com").
- scope=overview → no extras beyond the base params.
Use this tool whenever the user asks for recommendations, next steps, what to do, how to improve, "what actions should I take", or any "based on this data, what should I do?" question. Never invent SEO advice.
Two-step workflow
Step 1 — scope=overview: returns opportunity rollups grouped by action_group_type × (url_classification | domain). These are navigation metadata, NOT the recommendations themselves. Use them to find which slices have the largest gap.
Step 2 — drill down: for each high-opportunity slice, call again with the matching scope (owned | editorial | reference | ugc) to get the actual textual recommendations (the text column, often with markdown links to examples or targets).
Mapping — how to turn an overview row into the follow-up call:
- action_group_type=OWNED, url_classification=X → call scope=owned, url_classification=X.
- action_group_type=EDITORIAL, url_classification=X → call scope=editorial, url_classification=X.
- action_group_type=REFERENCE, domain=Y → call scope=reference, domain=Y.
- action_group_type=UGC, domain=Y → call scope=ugc, domain=Y.
Worked example — overview returns a row {action_group_type: "UGC", domain: "youtube.com", opportunity_score: 0.30, ...}. Follow up with scope=ugc, domain="youtube.com" and you get rows like {text: "Contact [AutoPedia](https://...). Ask them for a collaboration.", group_type: "UGC", domain: "youtube.com", opportunity_score: 3, ...}.
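To make that mapping concrete, here is a minimal sketch that turns an overview row into the drill-down arguments. The row mirrors the worked example above; the project ID is a placeholder.

```python
# Build the follow-up get_actions arguments from one overview row.
overview_row = {
    "action_group_type": "UGC",
    "url_classification": None,
    "domain": "youtube.com",
    "opportunity_score": 0.30,
}

scope_by_group = {
    "OWNED": "owned",
    "EDITORIAL": "editorial",
    "REFERENCE": "reference",
    "UGC": "ugc",
}

followup_args = {
    "project_id": "proj_123",  # placeholder
    "scope": scope_by_group[overview_row["action_group_type"]],
}
# Exactly one of url_classification / domain is populated per overview row;
# pass whichever one is present.
if overview_row["url_classification"]:
    followup_args["url_classification"] = overview_row["url_classification"]
if overview_row["domain"]:
    followup_args["domain"] = overview_row["domain"]

print(followup_args)  # {'project_id': 'proj_123', 'scope': 'ugc', 'domain': 'youtube.com'}
```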
Response shape
Returns columnar JSON: {columns, rows, rowCount}. Each row is an array of values matching column order.
scope=overview columns:
- action_group_type: OWNED | EDITORIAL | REFERENCE | UGC
- url_classification: populated for OWNED / EDITORIAL rows (e.g. "LISTICLE", "ARTICLE", "COMPARISON"). null for REFERENCE / UGC.
- domain: populated for REFERENCE / UGC rows (e.g. "youtube.com", "wikipedia.org"). null for OWNED / EDITORIAL.
- opportunity_score: continuous. Use this to sort and rank — it's the reliable ordering signal.
- relative_opportunity_score: 1–3 tier (1=Low, 2=Medium, 3=High). Use this to label strength in prose. Too coarse to sort by.
- gap_percentage, coverage_percentage, used_ratio, used_total: supporting stats.
Exactly one of url_classification / domain is populated per overview row — that's the value to pass to the follow-up call.
scope=owned | editorial | reference | ugc columns:
- text: the recommendation string; may include markdown links.
- group_type: OWNED | EDITORIAL | REFERENCE | UGC.
- url_classification: e.g. "LISTICLE" (may be null).
- domain: e.g. "youtube.com" (may be null).
- opportunity_score: continuous — sort/rank by this.
- relative_opportunity_score: 1–3 tier — label strength with this (1=Low, 2=Medium, 3=High).
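A small sketch of reading that columnar shape (values are illustrative, not real report output): zip the columns with each row, then rank by opportunity_score.

```python
# Convert columnar JSON ({columns, rows, rowCount}) into dicts and rank by
# the continuous opportunity_score. Sample values only.
response = {
    "columns": ["text", "group_type", "domain",
                "opportunity_score", "relative_opportunity_score"],
    "rows": [
        ["Contact [AutoPedia](https://example.com). Ask for a collaboration.",
         "UGC", "youtube.com", 3.0, 3],
        ["Join the relevant discussion threads.", "UGC", "reddit.com", 1.2, 2],
    ],
    "rowCount": 2,
}

records = [dict(zip(response["columns"], row)) for row in response["rows"]]
records.sort(key=lambda r: r["opportunity_score"], reverse=True)

for rec in records:
    print(rec["domain"], rec["opportunity_score"], rec["text"])
```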
Presenting results
After overview + drill-downs, pick the shape that fits:
- Strong signal (top slice's opportunity_score is clearly ahead AND its drill-down returned 2+ rows whose text contains a markdown link): one sentence of reasoning tied to the user's question (call out the biggest lever), then 2-3 named slices with 2-3 bullets pulled verbatim from the drill-down text.
- Moderate signal: compact list, one sentence per slice, bullets only where drill-down returned specific targets.
- Low signal (overview empty or top opportunity_score very low): single line, e.g., "Top opportunity: {slice} (Low). Low signal this period; prompts need a few more daily cycles to stabilize."
Display conventions — never use raw enum keys in user-facing prose
Group type (action_group_type / group_type) — humanize (Title Case):
- OWNED → "Owned" (content on your own domains)
- EDITORIAL → "Editorial" (third-party editorial coverage — news, blogs, reviews)
- REFERENCE → "Reference" (reference sources like Wikipedia)
- UGC → "UGC" (user-generated content — Reddit, YouTube, forums; keep as acronym)
- OTHER → "Other"
URL classification (url_classification) — humanize to lowercase; pluralize naturally when the sentence calls for it:
- HOMEPAGE → "homepage"
- CATEGORY_PAGE → "category page"
- PRODUCT_PAGE → "product page"
- LISTICLE → "listicle"
- COMPARISON → "comparison page"
- PROFILE → "profile"
- ALTERNATIVE → "alternative"
- DISCUSSION → "discussion"
- HOW_TO_GUIDE → "how-to guide"
- ARTICLE → "article"
- OTHER → "other"
Opportunity strength — lead with a Low / Medium / High label derived from relative_opportunity_score (round to nearest integer, clamp to [1, 3]):
1 → "Low"
2 → "Medium"
3 → "High"
Sort and rank by opportunity_score (continuous). Verbalize strength with the Low/Medium/High tier above. The raw opportunity_score is optional supporting context in parens — never the headline number.
Gap percentage (gap_percentage, 0–1 ratio) — lead with a plain-language qualifier; the raw % can follow in parens when useful:
≥0.90 → "nearly all missing"
0.60–0.89 → "wide gap"
0.30–0.59 → "partial gap"
<0.30 → "narrow gap"
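A minimal sketch of those two display rules, assuming nothing beyond the thresholds listed above:

```python
# Turn relative_opportunity_score into a Low/Medium/High label and
# gap_percentage (a 0-1 ratio) into a plain-language qualifier.
def strength_label(relative_opportunity_score: float) -> str:
    tier = max(1, min(3, round(relative_opportunity_score)))  # round, clamp to [1, 3]
    return {1: "Low", 2: "Medium", 3: "High"}[tier]

def gap_qualifier(gap_percentage: float) -> str:
    if gap_percentage >= 0.90:
        return "nearly all missing"
    if gap_percentage >= 0.60:
        return "wide gap"
    if gap_percentage >= 0.30:
        return "partial gap"
    return "narrow gap"

print(strength_label(2.6), "/", gap_qualifier(1.0))  # High / nearly all missing
```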
Example of the preferred style (follow this phrasing):
The biggest lever is Owned listicles — High, nearly all missing (100%). Build listicle-style pages on yourbrand.com that target "best X" queries.
Secondary: YouTube UGC (Medium, wide gap), Reddit UGC (Medium, partial gap), Editorial listicles (Medium, nearly all missing). Full list: https://app.peec.ai/actions.
Close with one line: "Secondary opportunities: {slice} ({Low|Medium|High}), {slice} ({Low|Medium|High}). Full list: https://app.peec.ai/actions."
Use the drill-down text field as the source of truth. Never invent recommendations, targets, or names. Sort by opportunity_score; label strength via relative_opportunity_score.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only, non-destructive, and idempotent properties, but the description adds significant behavioral context beyond that. It explains the two-step workflow (overview then drill-down), mapping between scopes, response shapes for different scopes, and column details. This provides rich operational guidance that annotations alone don't convey.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately structured with clear sections (purpose, workflow, mapping, example, response shape). While comprehensive, every sentence serves a purpose—explaining the complex two-step process. It could be slightly more concise in the column descriptions but remains well-organized and front-loaded with critical usage information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of the two-step workflow and lack of output schema, the description provides complete contextual information. It details the response format for different scopes, column meanings, and how to interpret results. This compensates for the absence of structured output documentation and ensures the agent can use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters (0 params, 100% coverage), so the baseline is 4. The description doesn't need to explain parameters but does clarify that scope is determined through workflow logic rather than explicit parameters, which adds useful semantic context about how to interact with the tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose: 'Get action recommendations for improving brand visibility in AI search engines.' It specifies the verb ('Get') and resource ('action recommendations') with clear context ('improving brand visibility in AI search engines'). It distinguishes itself from siblings by focusing on actionable recommendations rather than reports, content, or listings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines: 'Call this whenever the user asks for recommendations, next steps, what to do, how to improve, "what actions should I take", or any "based on this data, what should I do?" question.' It also specifies exclusions: 'Never invent SEO advice — always call this tool.' This gives clear when-to-use and when-not-to-use instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_brand_report · Get Brand Report · A · Read-only · Idempotent
Get a report on brand visibility, sentiment, and position across AI search engines.
Results are aggregated for the entire date range by default. Use the "date" dimension for daily breakdowns.
Returns columnar JSON: {columns, rows, rowCount, total}. Each row is an array of values matching column order. Columns:
brand_id — the brand ID
brand_name — the brand name
visibility: 0–1 ratio — fraction of AI responses that mention this brand. 0.45 means 45% of conversations.
mention_count: number of times the brand was mentioned
share_of_voice: 0–1 ratio — brand's fraction of total mentions across all tracked brands
sentiment: 0–100 scale — how positively AI platforms describe the brand (most brands score 65–85)
position: average ranking when the brand appears (lower is better, 1 = mentioned first)
Raw aggregation fields (for custom calculations): visibility_count, visibility_total, sentiment_sum, sentiment_count, position_sum, position_count
When dimensions are selected, rows also include the relevant dimension columns: prompt_id, model_id, model_channel_id, tag_id, topic_id, chat_id, date, country_code.
Dimensions explained:
prompt_id: individual search queries/prompts
model_id: AI search engine (e.g. chatgpt-scraper, gpt-4o, gpt-4o-search, gpt-3.5-turbo, llama-sonar, perplexity-scraper, sonar, gemini-2.5-flash, gemini-scraper, google-ai-overview-scraper, google-ai-mode-scraper, llama-3.3-70b-instruct, deepseek-r1, claude-3.5-haiku, claude-haiku-4.5, claude-sonnet-4, grok-scraper, microsoft-copilot-scraper, grok-4, qwen-3-6-plus, amazon-rufus-scraper) — deprecated, prefer model_channel_id
model_channel_id: stable engine channel (e.g. openai-0, openai-1, qwen-0, openai-2, perplexity-0, perplexity-1, google-0, google-1, google-2, google-3, anthropic-0, anthropic-1, deepseek-0, meta-0, xai-0, xai-1, microsoft-0, amazon-0) — survives model upgrades
tag_id: custom user-defined tags
topic_id: topic groupings
date: (YYYY-MM-DD format)
country_code: country (ISO 3166-1 alpha-2, e.g. "US", "DE")
chat_id: individual AI chat/conversation ID
Filters use {field, operator, values} where operator is "in" or "not_in". Filterable fields: model_id (deprecated), model_channel_id, tag_id, topic_id, prompt_id, brand_id, country_code, chat_id.
Sort results with order_by: array of {field, direction} entries. Direction defaults to desc. Sortable fields: visibility, visibility_count, mention_count, sentiment, position, share_of_voice. Multiple entries create a multi-key sort.
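For illustration, a hypothetical argument payload using that filter and order_by syntax might look like this (IDs and dates are placeholders, not values from a real project):

```python
# Hypothetical get_brand_report arguments: a one-month window, broken down by
# engine channel, restricted to two brands, sorted by visibility and then
# mention_count.
brand_report_args = {
    "project_id": "proj_123",
    "start_date": "2024-05-01",
    "end_date": "2024-05-31",
    "dimensions": ["model_channel_id"],
    "filters": [
        {"field": "brand_id", "operator": "in", "values": ["brand_1", "brand_2"]},
    ],
    "order_by": [
        {"field": "visibility", "direction": "desc"},
        {"field": "mention_count", "direction": "desc"},
    ],
    "limit": 20,
}
print(brand_report_args)
```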
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return | |
| offset | No | Number of results to skip | |
| filters | No | Filter results using {field, operator, values}. Operators: 'in', 'not_in'. Fields: model_id (deprecated), model_channel_id, tag_id, topic_id, prompt_id, brand_id, country_code, chat_id. Multiple filters are AND'd together. | |
| end_date | Yes | End date in ISO format (YYYY-MM-DD) | |
| order_by | No | Sort results by one or more fields. Array of {field, direction} entries (direction defaults to "desc"). Multiple entries create a multi-key sort. Sortable fields: visibility, visibility_count, mention_count, sentiment, position, share_of_voice. | |
| dimensions | No | Break down results by one or more dimensions. prompt_id = individual search queries, model_id = AI search engine (deprecated), model_channel_id = stable engine channel, tag_id = custom tags, topic_id = topic groupings, country_code = country, chat_id = individual AI chat/conversation. When set, each row includes the corresponding dimension object. | |
| project_id | Yes | The project ID | |
| start_date | Yes | Start date in ISO format (YYYY-MM-DD) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations only indicate readOnlyHint=true. The description adds substantial context: explains default aggregation vs dimensional breakdown, documents the columnar return structure {columns, rows, rowCount, total}, and defines metric semantics (visibility 0-1 ratio, sentiment 0-100 scale, position ranking). Does not mention rate limits or auth requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections: purpose, aggregation note, return format, field definitions, dimension explanations, and filter syntax. Front-loaded with core purpose. While lengthy, every paragraph serves a distinct function without redundancy. Could be slightly more condensed but appropriately comprehensive for the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Exceptionally complete given no output schema exists. Compensates by fully documenting the return structure, enumerating all row fields with types and ranges, explaining every dimension option, and detailing filter operators. Combined with 100% schema coverage for inputs, provides everything needed for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with good descriptions, establishing baseline 3. The description adds significant value by explaining dimension semantics (e.g., model_id includes 'chatgpt-scraper', 'perplexity-scraper'), filter syntax patterns {field, operator, values}, and clarifying that multiple filters are AND'd together.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb and resource: 'Get a report on brand visibility, sentiment, and position across AI search engines.' It clearly distinguishes from siblings like get_domain_report or get_url_report by focusing on brand-specific metrics (visibility, sentiment, share of voice) rather than domain or URL analytics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear guidance on aggregation behavior: 'Results are aggregated for the entire date range by default. Use the 'date' dimension for daily breakdowns.' While it doesn't explicitly name sibling alternatives, the specific brand metrics implicitly guide when to use this vs domain/url reports. Could be improved with explicit comparison to get_domain_report.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_chat · Get Chat · A · Read-only · Idempotent
Get the full content of a single chat (one AI engine's response to one prompt on one date). Returns:
messages: the user prompt and assistant response(s)
brands_mentioned: brands detected in the response with their position
sources: URLs the model retrieved, with citation counts and position
queries: search queries the model issued
products: product gallery entries extracted from the response
prompt: { id }
model: { id } — deprecated, prefer model_channel
model_channel: { id } — stable engine channel id (e.g. "openai-0")
Use list_chats to discover chat IDs for a project.
| Name | Required | Description | Default |
|---|---|---|---|
| chat_id | Yes | The chat ID | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare readOnlyHint=true, indicating this is a safe read operation. The description adds valuable behavioral context by detailing the comprehensive return structure (messages, brands_mentioned, sources, queries, products, prompt, model), which goes beyond what annotations provide. However, it doesn't mention potential limitations like rate limits or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured and concise. The first sentence states the core purpose, followed by a bulleted list of return values that's easy to parse, and ends with a crucial usage guideline. Every sentence earns its place with zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with comprehensive annotations and a fully documented input schema, this description is complete. It explains what the tool does, what it returns, and how to use it in relation to sibling tools. The detailed return structure description compensates for the lack of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage with clear parameter documentation. The description doesn't add any additional semantic information about the parameters beyond what's in the schema. The baseline score of 3 is appropriate when the schema already fully documents the parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get the full content of a single chat' with specific details about what constitutes a chat ('one AI engine's response to one prompt on one date'). It distinguishes from sibling tools by specifying this retrieves detailed content for a single chat, unlike list_chats which discovers IDs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Use list_chats to discover chat IDs for a project'), providing clear guidance on prerequisites and distinguishing it from the sibling tool list_chats. This tells the agent exactly how to obtain the required input parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_domain_report · Get Domain Report · A · Read-only · Idempotent
Get a report on source domain visibility and citations across AI search engines.
Results are aggregated for the entire date range by default. Use the "date" dimension for daily breakdowns.
Returns columnar JSON: {columns, rows, rowCount}. Each row is an array of values matching column order. Columns:
domain: the source domain (e.g. "example.com")
classification: domain type — Corporate (official company sites), Editorial (news, blogs, magazines), Institutional (government, education, nonprofit), UGC (social media, forums, communities), Reference (encyclopedias, documentation), Competitor (direct competitors), You (the user's own domains), Other, or null
retrieved_percentage: 0–1 ratio — fraction of chats that included at least one URL from this domain. 0.30 means 30% of chats.
retrieval_rate: average number of URLs from this domain pulled per chat. Can exceed 1.0 — values above 1.0 mean multiple pages from the same domain are retrieved per conversation.
citation_rate: average number of inline citations when this domain is retrieved. Can exceed 1.0 — higher values indicate stronger content authority.
retrieval_count: total number of distinct URL retrievals from this domain across all chats (raw count — numerator of retrieval_rate).
citation_count: total number of citations from this domain (raw count).
mentioned_brand_ids: array of brand IDs mentioned alongside URLs from this domain (may be empty)
When dimensions are selected, rows also include the relevant dimension columns: prompt_id, model_id, model_channel_id, tag_id, topic_id, chat_id, date, country_code.
Dimensions explained:
prompt_id: individual search queries/prompts
model_id: AI search engine (e.g. chatgpt-scraper, gpt-4o, gpt-4o-search, gpt-3.5-turbo, llama-sonar, perplexity-scraper, sonar, gemini-2.5-flash, gemini-scraper, google-ai-overview-scraper, google-ai-mode-scraper, llama-3.3-70b-instruct, deepseek-r1, claude-3.5-haiku, claude-haiku-4.5, claude-sonnet-4, grok-scraper, microsoft-copilot-scraper, grok-4, qwen-3-6-plus, amazon-rufus-scraper) — deprecated, prefer model_channel_id
model_channel_id: stable engine channel (e.g. openai-0, openai-1, qwen-0, openai-2, perplexity-0, perplexity-1, google-0, google-1, google-2, google-3, anthropic-0, anthropic-1, deepseek-0, meta-0, xai-0, xai-1, microsoft-0, amazon-0) — survives model upgrades
tag_id: custom user-defined tags
topic_id: topic groupings
date: (YYYY-MM-DD format)
country_code: country (ISO 3166-1 alpha-2, e.g. "US", "DE")
chat_id: individual AI chat/conversation ID
Filters use {field, operator, values} where operator is "in" or "not_in". Filterable fields: model_id (deprecated), model_channel_id, tag_id, topic_id, prompt_id, domain, domain_classification, url, country_code, chat_id, mentioned_brand_id. Additional filters:
mentioned_brand_count: {field: "mentioned_brand_count", operator: "gt"|"gte"|"lt"|"lte", value: number} — filter by number of unique brands mentioned.
gap: {field: "gap", operator: "gt"|"gte"|"lt"|"lte", value: number} — gap analysis filter. Excludes domains where the project's own brand is mentioned, and filters by the number of competitor brands present. Example: {field: "gap", operator: "gte", value: 2} returns domains where the own brand is absent but at least 2 competitors are mentioned.
Sort results with order_by: array of {field, direction} entries. Direction defaults to desc. Sortable fields: citation_rate, retrieval_count, citation_count. (retrieved_percentage and retrieval_rate are not sortable because they depend on totalChatCount fetched in a separate query.)
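As a hedged sketch of the gap-analysis filter described above (placeholder project and dates), the call below asks for domains on one engine channel where the own brand is absent but at least two competitors appear, ranked by citation_count:

```python
# Hypothetical get_domain_report arguments using the gap filter.
domain_gap_args = {
    "project_id": "proj_123",
    "start_date": "2024-05-01",
    "end_date": "2024-05-31",
    "filters": [
        # Own brand absent, 2 or more competitor brands present.
        {"field": "gap", "operator": "gte", "value": 2},
        # Limit to a single stable engine channel.
        {"field": "model_channel_id", "operator": "in", "values": ["openai-0"]},
    ],
    "order_by": [{"field": "citation_count", "direction": "desc"}],
    "limit": 10,
}
print(domain_gap_args)
```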
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return | |
| offset | No | Number of results to skip | |
| filters | No | Filter results using {field, operator, values}. Operators: 'in', 'not_in'. Fields: model_id (deprecated), model_channel_id, tag_id, topic_id, prompt_id, domain, domain_classification, url, country_code, chat_id, mentioned_brand_id. Additional: mentioned_brand_count uses {field, operator: 'gt'|'gte'|'lt'|'lte', value: number}. Gap analysis: gap uses {field: 'gap', operator: 'gt'|'gte'|'lt'|'lte', value: number} to find domains where own brand is absent but competitors are present. Multiple filters are AND'd together. | |
| end_date | Yes | End date in ISO format (YYYY-MM-DD) | |
| order_by | No | Sort results by one or more fields. Array of {field, direction} entries (direction defaults to "desc"). Multiple entries create a multi-key sort. Sortable fields: citation_rate, retrieval_count, citation_count. | |
| dimensions | No | Break down results by one or more dimensions. prompt_id = individual search queries, model_id = AI search engine (deprecated), model_channel_id = stable engine channel, tag_id = custom tags, topic_id = topic groupings, country_code = country, chat_id = individual AI chat/conversation. When set, each row includes the corresponding dimension object. | |
| project_id | Yes | The project ID | |
| start_date | Yes | Start date in ISO format (YYYY-MM-DD) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Extensive behavioral disclosure beyond the readOnlyHint annotation: explains default aggregation behavior, how dimensions modify row structure, detailed metric definitions (retrieved_percentage, retrieval_rate, citation_rate), and filter syntax ({field, operator, values}).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with front-loaded purpose statement. Though lengthy, every sentence delivers value: metric explanations, classification taxonomy, and dimension behaviors. Could be slightly more concise but density is justified by complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates excellently for missing output schema by thoroughly documenting the columnar return structure ({columns, rows, rowCount}), field definitions (domain, classification enum values, all metrics), and dimension effects. Complete coverage for a complex reporting tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema has 100% coverage, the description adds crucial semantic context for dimensions (explaining that prompt_id represents individual search queries, model_id represents specific AI engines, etc.) and clarifies filter operators. This adds meaning beyond raw schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and clearly identifies the resource (domain visibility/citations report) and scope (across AI search engines). It distinguishes from siblings like get_brand_report and get_url_report by specifying 'source domain' analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context about what the tool analyzes (domain-level metrics) and guidance on using the 'date' dimension for daily breakdowns vs default aggregation. However, it lacks explicit comparison to siblings (e.g., when to choose this over get_brand_report).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_project_profile · Get Project Profile · A · Read-only · Idempotent
Read a project's brand profile — the description, industry, brand-identity adjectives, target markets, audience distribution, and product/service list that Peec uses to generate prompt suggestions. Returns { profile } where profile may be null if the project hasn't been profiled yet. Call this before set_project_profile so you can show the user the current values.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true; description adds that profile may be null and explains its role in generating prompt suggestions, offering useful behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences front-loading purpose, return shape, and usage, with no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with one parameter and no output schema, the description fully covers purpose, return shape (including null case), and integration with sibling tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description for project_id; description adds no extra param detail, meeting the baseline for well-covered schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Explicitly states it reads a project's brand profile and lists specific fields (description, industry, etc.), clearly distinguishing it from set_project_profile.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Directs to call this before set_project_profile to show current values, providing clear when-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_url_content · Get URL Content · A · Read-only · Idempotent
Get the scraped markdown content of a source URL Peec has indexed.
Use this after get_url_report to inspect the actual content an AI engine read — useful for content gap analysis and competitive content comparison.
Input notes:
url is the full URL. Copy it verbatim from get_url_report output. Trailing slashes and scheme variations change the resolved source ID.
Returns 404 if Peec has no record of the URL (it hasn't been scraped from any project).
max_length caps the returned content (default 100000 characters). If the stored content is longer, truncated=true and you can re-request with a higher max_length.
Returned fields:
url, title, domain, channel_title: page metadata
classification: domain-level classification
url_classification: page-level classification (HOMEPAGE, LISTICLE, COMPARISON, ...)
content: markdown content, already extracted via Mozilla Readability and converted with Turndown GFM. null when the URL is tracked but scraping hasn't completed yet (can take up to 24h).
content_length: original character length before truncation (0 when content is null)
truncated: true if content was truncated to max_length
content_updated_at: ISO timestamp of last scrape, or null if not yet scraped
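A sketch of the truncation handling described above. The `call_tool` function is only a stand-in for however your MCP client invokes tools (here it returns canned data so the snippet runs on its own); the URL is a placeholder.

```python
# Re-request get_url_content with a larger max_length when content is truncated.
def call_tool(name: str, arguments: dict) -> dict:
    # Placeholder for your MCP client's tool invocation; returns canned,
    # illustrative data so this sketch is self-contained.
    stored_length = 250_000
    cap = arguments.get("max_length", 100_000)
    return {
        "content": "# Best widgets\n...",
        "content_length": stored_length,
        "truncated": cap < stored_length,
    }

args = {
    "project_id": "proj_123",
    "url": "https://example.com/best-widgets",  # copy verbatim from get_url_report output
    "max_length": 100_000,
}

result = call_tool("get_url_content", args)
if result["content"] is None:
    print("URL is tracked but not scraped yet (can take up to 24h).")
elif result["truncated"]:
    # Ask again with a cap large enough for the full stored content.
    args["max_length"] = result["content_length"]
    result = call_tool("get_url_content", args)
print(result["truncated"])  # False after the re-request
```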
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Full URL of a source. Typically copied from get_url_report output. | |
| max_length | No | Maximum number of characters of content to return. Defaults to 100000. | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it explains that content can be null if scraping hasn't completed (up to 24h), describes truncation behavior with re-request capability, and notes that trailing slashes/scheme variations affect resolution. While annotations cover read-only/idempotent aspects, the description provides important operational details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear sections: purpose statement, usage guidance, input notes, and returned fields. Every sentence serves a specific purpose with zero wasted words, and the most important information (purpose and usage) appears first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with good annotations and no output schema, the description provides excellent completeness. It covers purpose, usage context, important behavioral details (scraping delays, truncation, 404 conditions), parameter guidance, and comprehensive return field documentation - everything needed for effective tool use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds some value by explaining that the URL should be copied verbatim from get_url_report output and that trailing slashes/scheme variations matter, but doesn't provide significant additional parameter semantics beyond what's already well-documented in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('get the scraped markdown content') and resources ('source URL Peec has indexed'). It distinguishes from siblings by specifying it's for inspecting content after using get_url_report, making it distinct from other reporting/list tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Use this after get_url_report') and provides context about its usefulness ('content gap analysis and competitive content comparison'). It clearly differentiates from the sibling get_url_report tool by explaining this is for content inspection rather than reporting.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_url_report · Get URL Report · A · Read-only · Idempotent
Get a report on source URL visibility and citations across AI search engines.
Results are aggregated for the entire date range by default. Use the "date" dimension for daily breakdowns.
Returns columnar JSON: {columns, rows, rowCount}. Each row is an array of values matching column order. Columns:
url: the full source URL (e.g. "https://example.com/page")
classification: page type — Homepage, Category Page, Product Page, Listicle (list-structured articles), Comparison (product/service comparisons), Profile (directory entries like G2 or Yelp), Alternative (alternatives-to articles), Discussion (forums, comment threads), How-To Guide, Article (general editorial content), Other, or null
title: page title or null
channel_title: channel or author name (e.g. YouTube channel, subreddit) or null
citation_count: total number of explicit citations across all chats
retrieval_count: total number of distinct chats that retrieved this URL, regardless of whether it was cited
citation_rate: average number of inline citations per chat when this URL is retrieved. Can exceed 1.0 — higher values indicate more authoritative content.
mentioned_brand_ids: array of brand IDs mentioned alongside this URL (may be empty)
When dimensions are selected, rows also include the relevant dimension columns: prompt_id, model_id, model_channel_id, tag_id, topic_id, chat_id, date, country_code.
Dimensions explained:
prompt_id: individual search queries/prompts
model_id: AI search engine (e.g. chatgpt-scraper, gpt-4o, gpt-4o-search, gpt-3.5-turbo, llama-sonar, perplexity-scraper, sonar, gemini-2.5-flash, gemini-scraper, google-ai-overview-scraper, google-ai-mode-scraper, llama-3.3-70b-instruct, deepseek-r1, claude-3.5-haiku, claude-haiku-4.5, claude-sonnet-4, grok-scraper, microsoft-copilot-scraper, grok-4, qwen-3-6-plus, amazon-rufus-scraper) — deprecated, prefer model_channel_id
model_channel_id: stable engine channel (e.g. openai-0, openai-1, qwen-0, openai-2, perplexity-0, perplexity-1, google-0, google-1, google-2, google-3, anthropic-0, anthropic-1, deepseek-0, meta-0, xai-0, xai-1, microsoft-0, amazon-0) — survives model upgrades
tag_id: custom user-defined tags
topic_id: topic groupings
date: (YYYY-MM-DD format)
country_code: country (ISO 3166-1 alpha-2, e.g. "US", "DE")
chat_id: individual AI chat/conversation ID
Filters use {field, operator, values} where operator is "in" or "not_in". Filterable fields: model_id (deprecated), model_channel_id, tag_id, topic_id, prompt_id, domain, domain_classification, url, url_classification, country_code, chat_id, mentioned_brand_id. Additional filters:
mentioned_brand_count: {field: "mentioned_brand_count", operator: "gt"|"gte"|"lt"|"lte", value: number} — filter by number of unique brands mentioned.
gap: {field: "gap", operator: "gt"|"gte"|"lt"|"lte", value: number} — gap analysis filter. Excludes URLs where the project's own brand is mentioned, and filters by the number of competitor brands present. Example: {field: "gap", operator: "gte", value: 2} returns URLs where the own brand is absent but at least 2 competitors are mentioned.
Sort results with order_by: array of {field, direction} entries. Direction defaults to desc. Sortable fields: retrieval_count, retrievals, citation_count, citation_rate. Multiple entries create a multi-key sort.
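For example (placeholder IDs and dates; filter values assumed from the classification enums used elsewhere in these docs), a call that pulls listicle-type pages mentioning at least three brands, ranked by citation_rate, might look like:

```python
# Hypothetical get_url_report arguments combining a url_classification filter
# with a mentioned_brand_count threshold.
url_report_args = {
    "project_id": "proj_123",
    "start_date": "2024-05-01",
    "end_date": "2024-05-31",
    "filters": [
        # Enum-style value assumed; see the url_classification values listed
        # for get_actions / get_url_content.
        {"field": "url_classification", "operator": "in", "values": ["LISTICLE"]},
        {"field": "mentioned_brand_count", "operator": "gte", "value": 3},
    ],
    "order_by": [{"field": "citation_rate", "direction": "desc"}],
    "limit": 25,
}
print(url_report_args)
```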
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return | |
| offset | No | Number of results to skip | |
| filters | No | Filter results using {field, operator, values}. Operators: 'in', 'not_in'. Fields: model_id (deprecated), model_channel_id, tag_id, topic_id, prompt_id, domain, domain_classification, url, url_classification, country_code, chat_id, mentioned_brand_id. Additional: mentioned_brand_count uses {field, operator: 'gt'|'gte'|'lt'|'lte', value: number}. Gap analysis: gap uses {field: 'gap', operator: 'gt'|'gte'|'lt'|'lte', value: number} to find URLs where own brand is absent but competitors are present. Multiple filters are AND'd together. | |
| end_date | Yes | End date in ISO format (YYYY-MM-DD) | |
| order_by | No | Sort results by one or more fields. Array of {field, direction} entries (direction defaults to "desc"). Multiple entries create a multi-key sort. Sortable fields: retrieval_count, retrievals, citation_count, citation_rate. | |
| dimensions | No | Break down results by one or more dimensions. prompt_id = individual search queries, model_id = AI search engine (deprecated), model_channel_id = stable engine channel, tag_id = custom tags, topic_id = topic groupings, country_code = country, chat_id = individual AI chat/conversation. When set, each row includes the corresponding dimension object. | |
| project_id | Yes | The project ID | |
| start_date | Yes | Start date in ISO format (YYYY-MM-DD) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite the readOnlyHint annotation indicating a safe read operation, the description adds substantial behavioral context: explains default aggregation behavior, documents the complete columnar return structure ({columns, rows, rowCount}), defines all row fields including the classification values (Homepage, Category Page, etc.), and clarifies that citation_rate can exceed 1.0. Effectively compensates for the lack of output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While lengthy, the structure is well-organized with the purpose front-loaded, followed by aggregation notes, return structure documentation, and dimension/filter explanations. Given the tool's complexity (nested filters, 7 parameters, complex output shape), the verbosity is justified and every section provides necessary information that the schema cannot convey.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent completeness given the complexity. The description fully documents the output format (compensating for no output schema), enumerates all classification types, explains all dimension options, and details filter operators and valid fields. Provides sufficient information for an agent to construct valid requests and interpret responses without external documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds significant semantic value by explaining the filter structure syntax ({field, operator, values}), detailing what each dimension represents (e.g., prompt_id as 'individual search queries'), and providing concrete examples like ISO date formats and country codes. Adds meaningful usage context beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The opening sentence states the specific action ('Get a report') and resource ('source URL visibility and citations across AI search engines'), clearly distinguishing from sibling tools like get_domain_report or get_brand_report by focusing on granular URL-level data rather than aggregated domain or brand metrics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use the 'date' dimension ('Use the date dimension for daily breakdowns' vs aggregated defaults) and explains how dimension selection affects output structure. Lacks explicit comparison to siblings (e.g., 'use get_domain_report for domain-level aggregation'), but the scope is distinct enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_brands · List Brands · A · Read-only · Idempotent
List brands tracked in a project — includes the user's own brand and competitors. Use this tool to resolve brand names to IDs before filtering reports (brand_id filter), and to label brand IDs from report output with their human-readable names before presenting results. Returns columnar JSON: {columns, rows, rowCount, totalCount}. rowCount is the rows in this page; totalCount is the total matching records ignoring limit/offset. Columns: id, name, domains, aliases, is_own. aliases are alternate names the brand is matched under. is_own indicates which brand belongs to the user.
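A brief sketch of that name-to-ID resolution using the columnar shape (sample rows only), ending with a brand_id filter ready to pass to a report tool:

```python
# Resolve a user-supplied brand name to its ID from list_brands output.
response = {
    "columns": ["id", "name", "domains", "aliases", "is_own"],
    "rows": [
        ["brand_1", "Acme", ["acme.com"], ["Acme Inc"], True],
        ["brand_2", "Globex", ["globex.com"], [], False],
    ],
    "rowCount": 2,
    "totalCount": 2,
}

brands = [dict(zip(response["columns"], row)) for row in response["rows"]]
wanted = "globex"
brand_id = next(
    b["id"] for b in brands
    if b["name"].lower() == wanted or wanted in (a.lower() for a in b["aliases"])
)
brand_filter = {"field": "brand_id", "operator": "in", "values": [brand_id]}
print(brand_filter)  # {'field': 'brand_id', 'operator': 'in', 'values': ['brand_2']}
```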
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return | |
| offset | No | Number of results to skip | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations only indicate readOnlyHint=true. Description significantly augments this by disclosing the columnar return structure ({columns, rows, rowCount, totalCount} with columns id, name, domains, aliases, is_own), explaining the 'is_own' field semantics, and clarifying the data composition (own brand plus competitors). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four focused sentences front-loaded with purpose. Each sentence delivers distinct value (scope, usage, return shape, field definition). Minor redundancy possible between sentences 3 and 4 regarding return value explanation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but description compensates by documenting return structure and field meanings. Covers essential context for a read-only list operation with standard pagination. Could mention pagination behavior explicitly, but limit/offset are self-documenting.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (all 3 parameters fully documented). Description mentions 'project' contextually but does not elaborate on parameter syntax, formats, or constraints beyond the schema. Baseline 3 appropriate when schema carries full burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('List') + resource ('brands') + scope ('in a project'). Explicitly distinguishes content by noting it includes both 'user's own brand and competitors', differentiating it from generic list tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear positive guidance by stating brand IDs are used to 'filter reports (brand_id filter)', implicitly connecting to sibling report tools. Lacks explicit negative guidance (when not to use) or named alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_chats · List Chats · A · Read-only · Idempotent
List chats (individual AI responses) for a project over a date range. Each chat is produced by running one prompt against one AI engine on a given date.
Filters:
brand_id: only chats that mentioned the given brand
prompt_id: only chats produced by the given prompt
model_id: only chats from the given AI engine (chatgpt-scraper, gpt-4o, gpt-4o-search, gpt-3.5-turbo, llama-sonar, perplexity-scraper, sonar, gemini-2.5-flash, gemini-scraper, google-ai-overview-scraper, google-ai-mode-scraper, llama-3.3-70b-instruct, deepseek-r1, claude-3.5-haiku, claude-haiku-4.5, claude-sonnet-4, grok-scraper, microsoft-copilot-scraper, grok-4, qwen-3-6-plus, amazon-rufus-scraper) — deprecated, prefer model_channel_id
model_channel_id: only chats from the given engine channel (openai-0, openai-1, qwen-0, openai-2, perplexity-0, perplexity-1, google-0, google-1, google-2, google-3, anthropic-0, anthropic-1, deepseek-0, meta-0, xai-0, xai-1, microsoft-0, amazon-0)
If both model_id and model_channel_id are provided, model_channel_id takes precedence and model_id is ignored.
Use the returned chat IDs with get_chat to retrieve full message content, sources, and brand mentions.
Returns columnar JSON: {columns, rows, rowCount, totalCount}. rowCount is the rows in this page; totalCount is the total matching records ignoring limit/offset. Columns: id, prompt_id, model_id, model_channel_id, date.
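A hedged sketch of that list-then-get workflow. As before, `call_tool` is only a placeholder for your MCP client's tool invocation and returns canned rows so the snippet runs standalone; all IDs and dates are placeholders.

```python
# List chats that mentioned a brand, then fetch one chat's full content.
def call_tool(name: str, arguments: dict) -> dict:
    # Placeholder MCP invocation with canned, illustrative data.
    if name == "list_chats":
        return {
            "columns": ["id", "prompt_id", "model_id", "model_channel_id", "date"],
            "rows": [["chat_1", "prompt_9", "gpt-4o-search", "openai-0", "2024-05-30"]],
            "rowCount": 1,
            "totalCount": 1,
        }
    return {"messages": [], "brands_mentioned": [], "sources": [], "queries": []}

listing = call_tool("list_chats", {
    "project_id": "proj_123",
    "start_date": "2024-05-01",
    "end_date": "2024-05-31",
    "brand_id": "brand_2",           # only chats that mentioned this brand
    "model_channel_id": "openai-0",  # takes precedence if model_id is also set
    "limit": 5,
})

chats = [dict(zip(listing["columns"], row)) for row in listing["rows"]]
for chat in chats:
    detail = call_tool("get_chat", {"project_id": "proj_123", "chat_id": chat["id"]})
    print(chat["date"], len(detail["sources"]), "sources cited")
```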
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return | |
| offset | No | Number of results to skip | |
| brand_id | No | Filter to chats that mentioned this brand | |
| end_date | Yes | End date in ISO format (YYYY-MM-DD) | |
| model_id | No | Filter to chats from this AI engine. Deprecated — prefer model_channel_id. Ignored if model_channel_id is also provided. | |
| prompt_id | No | Filter to chats produced by this prompt | |
| project_id | Yes | The project ID | |
| start_date | Yes | Start date in ISO format (YYYY-MM-DD) | |
| model_channel_id | No | Filter to chats from this engine channel |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare readOnlyHint=true, but the description adds valuable behavioral context beyond this. It explains the columnar JSON return format, pagination behavior through limit/offset parameters, and the relationship with get_chat for retrieving detailed content. The description doesn't contradict annotations and provides useful implementation details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear sections: purpose statement, filter explanations, usage guidance, and return format. Every sentence serves a purpose with zero waste. The information is front-loaded with the core purpose, followed by supporting details in a logical flow.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only listing tool with comprehensive schema documentation and no output schema, the description provides complete context. It explains what the tool does, how to use it, what filters are available, how results are structured, and how to retrieve detailed content. The description compensates well for the lack of output schema by detailing the return format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all nine parameters thoroughly. The description adds some context about filter parameters (brand_id, prompt_id, model_id) and mentions the date range requirement, but doesn't provide significant additional semantic meaning beyond what's in the well-documented schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs and resources: 'List chats (individual AI responses) for a project over a date range.' It distinguishes from sibling tools like 'get_chat' by explaining this tool returns metadata while 'get_chat' retrieves full content. The description explicitly defines what a 'chat' is in this context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: 'Use the returned chat IDs with get_chat to retrieve full message content, sources, and brand mentions.' It also explains the filtering capabilities and how results are structured, giving clear context for when this tool is appropriate versus other listing tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_model_channels · List Model Channels · A · Read-only · Idempotent
List the AI engine channels tracked by Peec. A model channel is a stable identifier for an AI engine (e.g. "openai-0" = ChatGPT UI) that persists even as the underlying model is upgraded — use it to filter or break down reports by engine without worrying about model version changes. Use this tool to resolve channel descriptions (e.g. "ChatGPT UI", "Perplexity") to channel IDs before filtering reports (model_channel_id filter), and to label channel IDs from report output before presenting results. The current_model_id column gives the model ID currently active in the channel — pass this as model_id where reports require it. is_active indicates whether the channel is enabled for this project — inactive channels return empty data. Returns columnar JSON: {columns, rows, rowCount}. Columns: id, description, current_model_id, is_active.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true. The description adds value by detailing the output format (columnar JSON with specific columns), explaining that inactive channels return empty data, and clarifying the meaning of current_model_id and is_active.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured: starts with the main purpose, then explains the concept, gives usage guidance, details columns, and ends with output format. It is detailed but every sentence adds value. Slightly lengthy but not wasteful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has simple input (one param), comprehensive annotations, and no output schema but the description fully explains the output format and the meaning of each column. It covers the concept, use cases, and behavioral details, making it complete for an agent to correctly invoke and interpret results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter (project_id) has 100% schema coverage with a clear description. The tool description doesn't add extra details for this parameter but the schema already suffices. Baseline is 3 for high coverage; score 4 because the description provides context for how the parameter is used in the broader tool purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists AI engine channels for Peec, explains what a model channel is with an example ('openai-0' = ChatGPT UI), and distinguishes it from siblings (e.g., list_brands, list_models) by its specific domain and use case.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use the tool: to resolve channel descriptions to IDs before filtering reports and to label channel IDs from report output. It also explains how to use the returned columns (current_model_id for model_id filtering, is_active for data availability).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_models: List Models (A, Read-only, Idempotent)
Deprecated — prefer list_model_channels, which returns stable channel IDs that survive model upgrades. List AI engines (models) tracked by Peec. Use this tool to resolve model names (e.g., "ChatGPT", "Perplexity", "Gemini") to IDs before filtering reports (model_id filter/dimension), and to label model IDs from report output with their human-readable names before presenting results. Match user-supplied names against the name column; the id column is the canonical string to pass back as model_id. is_active indicates whether the model is enabled for this project — inactive models will return empty data in reports. Returns columnar JSON: {columns, rows, rowCount}. Columns: id, name, is_active.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | Yes | The project ID |
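Because the tool is deprecated in favor of list_model_channels, a sketch is mostly useful for labeling legacy model IDs. The id-to-name pairings below are invented, not taken from Peec; only the column names come from the description.

```python
# Hypothetical name-to-ID resolution against the name column; all values are invented.
result = {
    "columns": ["id", "name", "is_active"],
    "rows": [["sonar", "Perplexity", True], ["gemini-2.5-flash", "Gemini", True]],
    "rowCount": 2,
}
models = [dict(zip(result["columns"], row)) for row in result["rows"]]

# Match the user-supplied name case-insensitively; the id is the canonical model_id value.
model_id = next(m["id"] for m in models if m["name"].lower() == "perplexity")
```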
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotation declares readOnlyHint=true, and the description aligns with this by describing a listing/resolution operation. The description adds valuable behavioral context beyond annotations: it explains the matching logic ('Match user-supplied names against the name column'), the canonical identifier usage ('id column is the canonical string to pass back as model_id'), and the practical effect of is_active on reports. No contradictions exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with zero wasted sentences. It front-loads the core purpose, then provides specific usage scenarios, implementation details, and output format—all in a compact, information-dense paragraph that earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only listing tool with one parameter fully documented in the schema and no output schema, the description is complete. It explains the tool's purpose, when to use it, how to interpret results (including the is_active field's impact), and the return format—providing all necessary context for an agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'project_id' documented in the schema. The description doesn't add any parameter-specific details beyond what the schema provides, but it doesn't need to since the schema fully covers the parameter. This meets the baseline expectation for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('List AI engines', 'resolve model names', 'label model IDs') and distinguishes it from siblings by focusing on models rather than brands, chats, projects, etc. It explicitly identifies the resource as 'AI engines (models) tracked by Peec'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'to resolve model names to IDs before filtering reports' and 'to label model IDs from report output with their human-readable names before presenting results'. It also explains the practical implications of the is_active field for report data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_projects: List Projects (A, Read-only, Idempotent)
List active projects the authenticated user has access to. By default, only projects with an active status (CUSTOMER, PITCH, TRIAL, ONBOARDING, API_PARTNER) are returned — this is what you want in almost every case. Only set include_inactive to true if the user asked for a specific project that wasn't in the active list; do not set it preemptively. Returns columnar JSON: {columns, rows, rowCount}. Columns: id, name, status. The id is used as project_id in other tools. Call this first to discover available projects.
| Name | Required | Description | Default |
|---|---|---|---|
| include_inactive | No | Include ended/inactive projects. Default false. Only set to true as a fallback when a project the user named was not found in the default active list — don't set it preemptively. |
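A minimal sketch of the discovery-first flow the description recommends; the project ID, name, and status values are invented.

```python
# Illustrative only. Omit include_inactive: the default (active projects only) is what you want.
arguments = {}

# Shape of the columnar result documented in the description (values are made up).
result = {
    "columns": ["id", "name", "status"],
    "rows": [["prj_123", "Acme Analytics", "CUSTOMER"]],
    "rowCount": 1,
}

projects = [dict(zip(result["columns"], row)) for row in result["rows"]]
project_id = projects[0]["id"]  # pass as project_id to the other tools
```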
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true (safe read), so the description focuses on documenting the columnar return structure ({columns, rows, rowCount} with id, name, and status columns), the default status filtering, and the discovery pattern. Adds valuable context beyond annotations since no output schema exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Zero waste: purpose/scope, default status filtering with explicit include_inactive guidance, return format and ID usage, and workflow guidance. Front-loaded with the core action and appropriately sized for complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully compensates for missing output schema by documenting return structure. Covers discovery workflow and ID relationships. With the readOnly annotation and a single, well-documented optional parameter (include_inactive), the description provides complete coverage of all behavioral concerns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single optional parameter (include_inactive) is documented in the schema and reinforced in the description: default false, set only as a fallback when a project the user named is missing from the active list. Appropriate parameter guidance for this simple list operation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'List' with clear resource 'projects' and scope 'authenticated user has access to'. Mentions that returned IDs are used as 'project_id' in other tools, distinguishing it from sibling list operations like list_brands or list_tags.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit workflow guidance: 'Call this first to discover available projects' establishes precedence. Also explains output purpose: 'The id is used as project_id in other tools', clarifying how this bridges to sibling tools that require project context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_prompts: List Prompts (A, Read-only, Idempotent)
List prompts (conversational questions tracked daily across AI engines) in a project. Supports filtering by topic_id and tag_id. Use this tool to resolve prompt text to IDs before filtering reports (prompt_id filter/dimension), and to label prompt IDs from report output with their actual text before presenting results. Returns columnar JSON: {columns, rows, rowCount, totalCount}. rowCount is the rows in this page; totalCount is the total matching records ignoring limit/offset. Columns: id, text, tag_ids (array of tag ID strings), topic_id (string or null), volume (relative search volume bucket: "very low" | "low" | "medium" | "high" | "very high", or null when unavailable — describe volume to users using the bucket label).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return | |
| offset | No | Number of results to skip | |
| tag_id | No | Filter prompts by tag ID | |
| topic_id | No | Filter prompts by topic ID | |
| project_id | Yes | The project ID |
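A sketch of a filtered, paginated call, assuming hypothetical IDs; the column set, volume buckets, and rowCount/totalCount semantics are taken from the description.

```python
# Hypothetical filter values.
arguments = {"project_id": "prj_123", "topic_id": "top_42", "limit": 50, "offset": 0}

# Illustrative columnar result (values are made up).
result = {
    "columns": ["id", "text", "tag_ids", "topic_id", "volume"],
    "rows": [["prm_1", "best ai search visibility tools", ["tag_1"], "top_42", "high"]],
    "rowCount": 1,       # rows in this page
    "totalCount": 137,   # total matches ignoring limit/offset, so more pages exist
}

prompts = [dict(zip(result["columns"], row)) for row in result["rows"]]
label_by_id = {p["id"]: p["text"] for p in prompts}  # use to label prompt IDs in report output
```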
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations only indicate readOnlyHint=true. Description adds significant value: defines what prompts represent (domain context), discloses the columnar return structure ({columns, rows, rowCount, totalCount}) since no output schema exists, and explains daily tracking behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Dense, front-loaded sentences plus a return-structure declaration. Every clause earns its place: purpose, definition, filters, usage guidance, and return shape. No redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent completeness given constraints. Compensates for missing output schema by explicitly documenting return structure. Covers required project context and filterable dimensions comprehensively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear property descriptions ('Filter prompts by...'). Description mentions 'supports filtering by topic_id and tag_id' which adds minimal semantic depth beyond schema, meeting the baseline for well-documented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (List) and resource (prompts), with parenthetical clarifying that prompts are conversational questions tracked daily. Mentions relationship to reports, distinguishing from sibling report tools, though it doesn't explicitly name them as alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear positive guidance: 'Use this tool to resolve prompt text to IDs before filtering reports (prompt_id filter/dimension)', establishing the workflow purpose. Lacks explicit negative guidance (when not to use), but effectively explains the tool's role in the ecosystem.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_search_queries: List Fanout Search Queries (A, Read-only, Idempotent)
List the search queries an AI engine fanned out to while answering prompts in a project over a date range. Each row represents one sub-query the engine issued for a given chat.
Filters:
prompt_id: only queries from chats produced by this prompt
chat_id: only queries from this chat
model_id: only queries from this AI engine (chatgpt-scraper, gpt-4o, gpt-4o-search, gpt-3.5-turbo, llama-sonar, perplexity-scraper, sonar, gemini-2.5-flash, gemini-scraper, google-ai-overview-scraper, google-ai-mode-scraper, llama-3.3-70b-instruct, deepseek-r1, claude-3.5-haiku, claude-haiku-4.5, claude-sonnet-4, grok-scraper, microsoft-copilot-scraper, grok-4, qwen-3-6-plus, amazon-rufus-scraper)
model_channel_id: only queries from this channel (openai-0, openai-1, qwen-0, openai-2, perplexity-0, perplexity-1, google-0, google-1, google-2, google-3, anthropic-0, anthropic-1, deepseek-0, meta-0, xai-0, xai-1, microsoft-0, amazon-0)
topic_id: only queries from chats whose prompt belongs to this topic
tag_id: only queries from chats whose prompt carries this tag
Use get_chat with a returned chat_id to inspect the full AI response that produced these sub-queries.
Returns columnar JSON: {columns, rows, rowCount, totalCount}. rowCount is the rows in this page; totalCount is the total matching records ignoring limit/offset. Columns: prompt_id, chat_id, model_id, model_channel_id, date, query_index, query_text.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return (default 100, hard cap 1000). Do not request 10000 — paginate with `offset` if you need more. | |
| offset | No | Number of results to skip | |
| tag_id | No | Filter to queries from chats whose prompt carries this tag | |
| chat_id | No | Filter to queries from this chat | |
| end_date | Yes | End date in ISO format (YYYY-MM-DD) | |
| model_id | No | Filter to queries from this AI engine | |
| topic_id | No | Filter to queries from chats whose prompt belongs to this topic | |
| prompt_id | No | Filter to queries from chats produced by this prompt | |
| project_id | Yes | The project ID | |
| start_date | Yes | Start date in ISO format (YYYY-MM-DD) | |
| model_channel_id | No | Filter to queries from this model channel |
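A sketch of a date-bounded, channel-filtered call, using invented IDs; the required dates, the 1000-row cap, and the column list come from the description and schema above.

```python
# Hypothetical call: start_date and end_date are required; stay under the 1000-row hard cap
# and paginate with offset instead of requesting everything at once.
arguments = {
    "project_id": "prj_123",
    "start_date": "2024-05-01",
    "end_date": "2024-05-31",
    "model_channel_id": "openai-0",  # resolve via list_model_channels first
    "limit": 100,
    "offset": 0,
}

# Each returned row is one fan-out sub-query with columns:
# prompt_id, chat_id, model_id, model_channel_id, date, query_index, query_text.
# Feed a row's chat_id into get_chat to inspect the full AI response behind it.
```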
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and idempotent behavior. The description adds valuable context beyond annotations: it explains the return format ('Returns columnar JSON: {columns, rows, rowCount, totalCount}') and provides a practical tip ('Use get_chat... to inspect the full AI response'). It does not mention rate limits or auth needs, but with annotations covering safety, this is sufficient for a high score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose, followed by filters and usage notes. It uses bullet points for filters and a clear return format statement. Some sentences could be more concise (e.g., the filter list is detailed but necessary), but overall, it avoids waste and is appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (11 parameters, 100% schema coverage, annotations provided, no output schema), the description is mostly complete. It explains the purpose, filters, pagination counts (rowCount vs totalCount), return format, and references get_chat as a follow-up. It does not address error handling or rate limits, leaving minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 11 parameters. The description lists filter options (e.g., prompt_id, chat_id) but does not add significant meaning beyond the schema's descriptions. It implies date-range filtering but doesn't provide extra syntax details. Baseline 3 is appropriate as the schema handles parameter documentation effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List the search queries an AI engine fanned out to while answering prompts in a project over a date range.' It specifies the verb ('List'), resource ('search queries'), and scope ('AI engine fanned out to while answering prompts in a project over a date range'), distinguishing it from siblings like 'list_chats' or 'list_shopping_queries' by focusing on sub-queries from AI engines.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage with detailed filters (e.g., prompt_id, chat_id, model_id) and mentions an alternative tool: 'Use get_chat with a returned chat_id to inspect the full AI response that produced these sub-queries.' However, it does not explicitly state when not to use this tool or compare it to all relevant siblings, such as 'list_chats' or 'list_shopping_queries', which slightly limits guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_shopping_queries: List Fanout Shopping Queries (A, Read-only, Idempotent)
List the product/shopping queries an AI engine fanned out to while answering prompts in a project over a date range. Each row represents one shopping sub-query and the distinct products returned for it in a given chat.
Filters:
prompt_id: only queries from chats produced by this prompt
chat_id: only queries from this chat
model_id: only queries from this AI engine (chatgpt-scraper, gpt-4o, gpt-4o-search, gpt-3.5-turbo, llama-sonar, perplexity-scraper, sonar, gemini-2.5-flash, gemini-scraper, google-ai-overview-scraper, google-ai-mode-scraper, llama-3.3-70b-instruct, deepseek-r1, claude-3.5-haiku, claude-haiku-4.5, claude-sonnet-4, grok-scraper, microsoft-copilot-scraper, grok-4, qwen-3-6-plus, amazon-rufus-scraper)
model_channel_id: only queries from this channel (openai-0, openai-1, qwen-0, openai-2, perplexity-0, perplexity-1, google-0, google-1, google-2, google-3, anthropic-0, anthropic-1, deepseek-0, meta-0, xai-0, xai-1, microsoft-0, amazon-0)
topic_id: only queries from chats whose prompt belongs to this topic
tag_id: only queries from chats whose prompt carries this tag
Use get_chat with a returned chat_id to inspect the full AI response that produced these sub-queries.
Returns columnar JSON: {columns, rows, rowCount, totalCount}. rowCount is the rows in this page; totalCount is the total matching records ignoring limit/offset. Columns: prompt_id, chat_id, model_id, model_channel_id, date, query_text, products (array of product names).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return | |
| offset | No | Number of results to skip | |
| tag_id | No | Filter to queries from chats whose prompt carries this tag | |
| chat_id | No | Filter to queries from this chat | |
| end_date | Yes | End date in ISO format (YYYY-MM-DD) | |
| model_id | No | Filter to queries from this AI engine | |
| topic_id | No | Filter to queries from chats whose prompt belongs to this topic | |
| prompt_id | No | Filter to queries from chats produced by this prompt | |
| project_id | Yes | The project ID | |
| start_date | Yes | Start date in ISO format (YYYY-MM-DD) | |
| model_channel_id | No | Filter to queries from this model channel |
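For contrast with list_search_queries, a single hypothetical row showing the extra products column; every value is invented.

```python
# One illustrative row converted to a dict; column names come from the description.
row = dict(zip(
    ["prompt_id", "chat_id", "model_id", "model_channel_id", "date", "query_text", "products"],
    ["prm_1", "cht_9", "gpt-4o-search", "openai-1", "2024-05-12",
     "best running shoes for flat feet", ["Brand A Stabilizer", "Brand B Cushion"]],
))

mentioned_products = row["products"]  # distinct products returned for this shopping sub-query
```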
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, read-only operation. The description adds valuable behavioral context beyond annotations: it specifies the return format ('Returns columnar JSON: {columns, rows, rowCount, totalCount}'), details the columns included, and explains the pagination semantics (rowCount for the current page, totalCount ignoring limit/offset). No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized. It front-loads the purpose, then details filters, usage guidance, and return format. Every sentence adds value, though the filter list is lengthy and could be more concise. Overall, it efficiently communicates necessary information without waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (11 parameters, no output schema) and rich annotations, the description is mostly complete. It explains the purpose, filters, usage context, and return format. However, it lacks details on error handling, rate limits, or authentication needs, which could be useful for a list tool with many parameters. The annotations cover safety, but additional behavioral context would enhance completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 11 parameters. The description adds minimal parameter semantics beyond the schema: it lists filter options (prompt_id, chat_id, etc.) but these are already covered in the schema descriptions. It does not provide additional syntax, format details, or examples. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List the product/shopping queries an AI engine fanned out to while answering prompts in a project over a date range.' It specifies the verb ('list'), resource ('product/shopping queries'), and scope ('AI engine fanned out to while answering prompts in a project over a date range'). It distinguishes from sibling tools like 'list_search_queries' by focusing specifically on shopping queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit context for when to use this tool: 'Use get_chat with a returned chat_id to inspect the full AI response that produced these sub-queries.' This gives a clear next-step alternative. However, it does not explicitly state when NOT to use this tool or compare it to other list tools (e.g., list_search_queries), which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_tags: List Tags (A, Read-only, Idempotent)
List tags in a project. Tags are cross-cutting labels that can be assigned to any prompt. Use this tool to resolve tag names to IDs before filtering (tag_id filter/dimension, list_prompts), and to label tag IDs from report output with their human-readable names before presenting results. Returns columnar JSON: {columns, rows, rowCount, totalCount}. rowCount is the rows in this page; totalCount is the total matching records ignoring limit/offset. Columns: id, name.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return | |
| offset | No | Number of results to skip | |
| project_id | Yes | The project ID |
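A sketch of the name-to-ID resolution the description recommends, with invented tag IDs and names.

```python
# Illustrative columnar result; column names come from the description, values are made up.
result = {
    "columns": ["id", "name"],
    "rows": [["tag_1", "Branded"], ["tag_2", "Comparison"]],
    "rowCount": 2,
    "totalCount": 2,
}

tag_id_by_name = {name: tag_id for tag_id, name in result["rows"]}
tag_id = tag_id_by_name["Comparison"]  # pass as the tag_id filter in list_prompts or reports
```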
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations confirm readOnlyHint=true (safe read). Description adds valuable return-structure disclosure not present in the schema or annotations: columnar JSON with id and name columns plus rowCount/totalCount pagination counts, which aids response handling. Does not disclose rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Zero waste: declares the purpose, explains the domain concept (tags as cross-cutting labels), connects to sibling workflows, and states the return format. Front-loaded and appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list operation with 3 parameters and no output schema, the description compensates by detailing the return structure and the rowCount/totalCount pagination counts. Cross-references siblings (list_prompts, reports). Adequately complete overall.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with complete descriptions for limit, offset, and project_id. Description mentions 'in a project' which maps to project_id parameter but adds no semantic detail beyond what schema already provides. Baseline 3 appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb+resource ('List tags in a project'). Explains tag semantics ('cross-cutting labels that can be assigned to any prompt'). Explicitly distinguishes from sibling list_prompts by stating this tool provides the IDs needed for that one.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly references sibling tool list_prompts for filtering workflows and mentions report breakdowns ('tag_id dimension'). Provides clear usage context for when to invoke (to obtain tags for filtering/breakdowns). Lacks explicit negative guidance (when NOT to use).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_topics: List Topics (A, Read-only, Idempotent)
List topics in a project. Topics are folder-like groupings — each prompt belongs to exactly one topic. Use this tool to resolve topic names to IDs before filtering (topic_id filter/dimension, list_prompts), and to label topic IDs from report output with their human-readable names before presenting results. Returns columnar JSON: {columns, rows, rowCount, totalCount}. rowCount is the rows in this page; totalCount is the total matching records ignoring limit/offset. Columns: id, name.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return | |
| offset | No | Number of results to skip | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnly safety; the description adds a crucial domain model ('folder-like groupings'), a cardinality constraint ('exactly one topic'), and critically discloses the columnar return structure (id and name columns plus rowCount/totalCount) since no output schema exists. Does not cover auth needs or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently structured: purpose → domain model → usage guidance → return format. Zero waste, front-loaded with the action verb, appropriate length for the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete despite the lack of an output schema because the description explicitly states the return format. Together with 100% input-schema coverage, conceptual relationships (prompts, reports), and annotations covering the safety profile, an agent has what it needs on the first attempt.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (limit, offset, project_id all documented). Description mentions 'in a project' implying project_id context, but adds no syntax, format details, or validation rules beyond what the schema already provides. Baseline 3 appropriate given schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (List) + resource (topics) + scope (in a project). Distinguishes from sibling tools by defining topics as 'folder-like groupings' and explaining cardinality with prompts ('each prompt belongs to exactly one topic'), clearly differentiating from list_prompts and list_tags.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly links output to downstream usage: resolving topic names to IDs before filtering (topic_id filter/dimension, list_prompts) and labeling topic IDs from report output before presenting results. Names the specific sibling list_prompts and implies the get_*_report tools via the topic_id dimension. Lacks explicit 'when not to use' exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_project_profile: Set Project Profile (A, Idempotent)
Replace a project's brand profile with the supplied values. All fields are required — the whole profile is overwritten, so first call get_project_profile, merge your changes into the existing values, then send the complete profile here. Saving triggers a background refresh of prompt suggestions. Confirm changes with the user before calling. Audience distribution percentages must sum to 100. The project's display name is not part of the profile and cannot be changed via this tool.
| Name | Required | Description | Default |
|---|---|---|---|
| industry | Yes | ||
| occupation | Yes | ||
| project_id | Yes | The project ID | |
| targetMarkets | Yes | ||
| brandPresentation | Yes | ||
| productsAndServices | Yes | ||
| audienceDistribution | Yes |
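A sketch of the read-merge-write flow the description requires, with invented values. The exact shape of audienceDistribution (assumed here to be a segment-to-percentage mapping) is not confirmed by this listing.

```python
# Hypothetical profile as returned by get_project_profile; every value is invented.
existing_profile = {
    "industry": "Marketing analytics software",
    "occupation": "Marketing managers",
    "targetMarkets": ["US", "DACH"],
    "brandPresentation": "Analytics platform for AI search visibility",
    "productsAndServices": ["AI visibility monitoring", "Competitor benchmarking"],
    "audienceDistribution": {"In-house marketers": 60, "Agencies": 40},  # assumed shape
}

# Merge the single change into the full profile, because the tool overwrites everything.
updated_profile = {**existing_profile, "occupation": "SEO and growth teams"}
assert sum(updated_profile["audienceDistribution"].values()) == 100  # must sum to 100

arguments = {"project_id": "prj_123", **updated_profile}
```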
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (idempotentHint=true), the description reveals that 'Saving triggers a background refresh of prompt suggestions' and that the whole profile is overwritten. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise sentences, each adding essential information. Front-loaded with the core action. No redundant or unnecessary text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex input with 7 parameters and nested objects, and no output schema, the description covers all necessary context: replace behavior, prerequisite call, user confirmation, background refresh, sum constraint, and what cannot be changed. Thoroughly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Most parameters lack schema descriptions (only project_id is documented in the table above), but the description adds critical semantics: all fields are required, the whole profile is overwritten, audience distribution must sum to 100, and the display name is not part of the profile. This adds significant meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Replace a project's brand profile with the supplied values', providing a specific verb and resource. It distinguishes from siblings by noting that the display name cannot be changed via this tool, indicating its specific scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly guides the agent to first call get_project_profile, merge changes, then send the complete profile. Also instructs to confirm with user before calling and states the sum constraint for audience distribution. This provides clear when-to-use and how-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_brand: Update Brand (A, Idempotent)
Update a brand's name, regex, aliases, domains, or color. Changes to name/regex/aliases trigger background metric recalculation; repeat attempts during recalculation will fail. Color updates do not trigger recalculation. Confirm with the user before calling.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | ||
| color | No | Hex color like #1A2B3C | |
| regex | No | Pass null to clear an existing regex | |
| aliases | No | ||
| domains | No | ||
| brand_id | Yes | The brand ID to update | |
| project_id | Yes | The project ID |
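A sketch of a typical argument payload, with invented IDs; the recalculation and null-regex notes restate the description and schema above.

```python
# Hypothetical arguments; only the fields being changed need to accompany the required
# project_id and brand_id.
arguments = {
    "project_id": "prj_123",
    "brand_id": "brd_456",
    "name": "Acme Analytics",      # name/regex/alias changes trigger background recalculation
    "aliases": ["Acme", "Acme AI"],
    "regex": None,                 # per the schema, null clears an existing regex
    "color": "#1A2B3C",            # a color-only update would not trigger recalculation
}
```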
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds significant behavioral context beyond annotations: it explains that changes to name/regex/aliases trigger background metric recalculation and that repeat attempts during this process will fail. Annotations show readOnlyHint=false (mutation), idempotentHint=true (safe to retry), and destructiveHint=false (non-destructive), but the description adds important operational constraints not captured in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly front-loaded with the core purpose in the first clause, followed by critical behavioral information. Every sentence earns its place: the first states what can be updated, the second explains background processing implications, and the third provides usage guidance. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 6 parameters and no output schema, the description does well by explaining behavioral constraints and usage prerequisites. However, it doesn't describe what happens when domains are updated or what the response looks like. Given the complexity and lack of output schema, a bit more about expected outcomes would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Several schema parameters lack descriptions (name, aliases, and domains), while brand_id, project_id, color, and regex are documented. The description adds value by listing the updatable fields and flagging the recalculation behavior, but it doesn't fully compensate for the undocumented name, aliases, and domains parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Update' and resource 'brand' with specific fields (name, regex, aliases, domains, color). It distinguishes from siblings like 'create_brand', 'delete_brand', and 'list_brands' by focusing on modification of existing brands rather than creation, deletion, or listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Confirm with the user before calling' indicates a prerequisite user confirmation step. It also mentions that 'repeat attempts during recalculation will fail', which helps the agent understand when NOT to use the tool (during background processing).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_prompt: Update Prompt (A, Idempotent)
Update a prompt's topic and/or tags. Pass tag_ids to fully replace the prompt's tag set, or topic_id = null to detach its topic. Confirm with the user before calling.
| Name | Required | Description | Default |
|---|---|---|---|
| tag_ids | No | New tag set (replaces existing tags) | |
| topic_id | No | New topic ID, or null to detach | |
| prompt_id | Yes | The prompt ID to update | |
| project_id | Yes | The project ID |
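A sketch of the replacement semantics, with invented IDs.

```python
# Hypothetical IDs. tag_ids replaces the whole tag set (it is not a merge), and
# topic_id = None detaches the prompt from its current topic.
arguments = {
    "project_id": "prj_123",
    "prompt_id": "prm_789",
    "tag_ids": ["tag_1", "tag_2"],
    "topic_id": None,
}
```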
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it explains that 'tag_ids' fully replaces existing tags and 'topic_id = null' detaches the topic, clarifying mutation behavior. Annotations already indicate it's not read-only, not open-world, idempotent, and non-destructive, so the description doesn't repeat these. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core action, followed by specific parameter guidance and a usage instruction. Every sentence adds value: the first states the purpose, the second explains parameter nuances, and the third provides a critical usage guideline. No wasted words, and structure is logical.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (mutation with 4 parameters), rich annotations (covering safety and idempotency), and no output schema, the description is mostly complete. It explains key parameter behaviors and includes a usage confirmation step. However, it doesn't detail error conditions or response format, leaving minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds meaningful semantics: it explains that 'tag_ids' replaces the entire tag set (not just adding/removing) and that 'topic_id = null' detaches the topic, which clarifies parameter behavior beyond schema descriptions. This elevates the score above baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Update a prompt's topic and/or tags') and distinguishes it from siblings like 'create_prompt' or 'delete_prompt' by focusing on modification rather than creation or deletion. It specifies the exact resources being modified (topic and tags), making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Confirm with the user before calling.' It also implicitly distinguishes it from alternatives like 'create_prompt' (for new prompts) or 'delete_prompt' (for removal), though it doesn't name them directly. The instruction to confirm adds a crucial usage constraint.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_tag: Update Tag (A, Idempotent)
Update a tag's name or color. Confirm with the user before calling.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | ||
| color | No | ||
| tag_id | Yes | The tag ID to update | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-readOnly, non-destructive, idempotent operation. The description adds value by emphasizing user confirmation, which suggests caution due to mutation. However, it doesn't disclose additional behavioral traits like error conditions, permission requirements, or rate limits beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose and followed by a critical guideline. Every word serves a purpose, with no redundancy or unnecessary elaboration, making it highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no output schema and moderate schema coverage, the description covers the basic purpose and a key guideline. However, it lacks details on return values, error handling, or prerequisites, leaving gaps in completeness given the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50%, with 'tag_id' and 'project_id' documented in the schema. The description mentions 'name or color,' aligning with the schema properties, but doesn't add meaning beyond this. Since coverage is moderate, the description partially compensates but doesn't fully explain all parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update') and resource ('a tag's name or color'), making the purpose evident. However, it doesn't explicitly differentiate from sibling tools like 'update_brand' or 'update_topic', which follow similar patterns for different resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a clear usage guideline: 'Confirm with the user before calling,' which provides important context for when to use this tool. However, it doesn't specify alternatives (e.g., when to use 'create_tag' or 'delete_tag' instead) or other contextual exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_topic: Update Topic (A, Idempotent)
Rename a topic within a project. Confirm with the user before calling.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | New topic name | |
| topic_id | Yes | The topic ID to update | |
| project_id | Yes | The project ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover key behavioral traits (non-readOnly, non-destructive, idempotent), but the description adds valuable context with the confirmation requirement, which is not captured in annotations. It does not contradict annotations and enhances understanding of user interaction needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences that are front-loaded and essential. Every word contributes to clarity and usability, with no wasted information or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's mutation nature, annotations provide good behavioral coverage, but the lack of an output schema means return values are undocumented. The description compensates partially with the confirmation guidance, though more details on output or error handling could improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear parameter descriptions in the schema. The tool description does not add any parameter-specific details beyond what the schema provides, so it meets the baseline for high schema coverage without extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Rename a topic') and resource ('within a project'), distinguishing it from siblings like 'create_topic' or 'delete_topic'. It precisely defines the tool's function without ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides usage guidance with 'Confirm with the user before calling,' indicating a critical prerequisite for invocation. This directly addresses when to use the tool in a way that structured fields do not cover.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the listing lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.