
Peec AI MCP Server

Ownership verified

Server Details

Connect your AI assistant to your Peec AI account to monitor and analyze your brand's visibility across AI search engines like ChatGPT, Perplexity, and Gemini. Ask questions about brand visibility, competitor comparisons, source citations, and trends, all in plain language, directly from your AI tools.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.4/5 across 8 of 8 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct scope: three report types cover different granularity levels (brand, domain, URL) while five list tools cover separate entity types (projects, brands, prompts, tags, topics). No overlapping functionality between tools.

Naming Consistency: 5/5

Perfectly consistent snake_case naming throughout. Follows clear verb_noun patterns: 'get_*_report' for analytics generation and 'list_*' for entity discovery. No deviations in style or structure.

Tool Count: 5/5

Eight tools is well-suited for this focused analytics domain. Covers three reporting levels plus necessary metadata lookup tools for filtering, without bloat. Each tool earns its place in the AI search visibility analysis workflow.

Completeness: 4/5

Strong coverage for read-only analytics with brand/domain/URL reports and full filtering dimension support. Minor gaps: no single-item retrieval (e.g., get_brand vs list_brands) and no chat-level detail tool, though chat_id filtering in reports provides partial coverage.

Available Tools

8 tools
get_brand_report (Get Brand Report): A
Read-only

Get a report on brand visibility, sentiment, and position across AI search engines.

Results are aggregated for the entire date range by default. Use the "date" dimension for daily breakdowns.

Returns {data: rows[], total: number}. Each row contains:

  • brand: {id, name} — the brand being measured

  • visibility: 0–1 ratio — fraction of AI responses that mention this brand. 0.45 means 45% of conversations.

  • mention_count: number of times the brand was mentioned

  • share_of_voice: 0–1 ratio — brand's fraction of total mentions across all tracked brands

  • sentiment: 0–100 scale — how positively AI platforms describe the brand (most brands score 65–85)

  • position: average ranking when the brand appears (lower is better, 1 = mentioned first)

  • Raw aggregation fields (for custom calculations): visibility_count, visibility_total, sentiment_sum, sentiment_count, position_sum, position_count
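The raw aggregation fields can be recombined into the averaged metrics for custom rollups. A minimal sketch, assuming a row dict shaped as documented above (the sample numbers are made up):

```python
# Recompute the averaged metrics of one report row from its raw
# aggregation fields (field names as documented; values are sample data).
def derive_metrics(row: dict) -> dict:
    return {
        # fraction of AI responses that mention the brand
        "visibility": row["visibility_count"] / row["visibility_total"],
        # mean sentiment on the 0-100 scale
        "sentiment": row["sentiment_sum"] / row["sentiment_count"],
        # mean ranking position (lower is better)
        "position": row["position_sum"] / row["position_count"],
    }

sample = {
    "visibility_count": 45, "visibility_total": 100,
    "sentiment_sum": 300, "sentiment_count": 4,
    "position_sum": 9, "position_count": 6,
}
print(derive_metrics(sample))  # {'visibility': 0.45, 'sentiment': 75.0, 'position': 1.5}
```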

When dimensions are selected, rows also include the relevant dimension objects: prompt ({id}), model ({id}), tag ({id}), topic ({id}), chat ({id}), date, country_code.

Dimensions explained:

  • prompt_id: individual search queries/prompts

  • model_id: AI search engine (e.g. chatgpt-scraper, perplexity-scraper, google-ai-overview-scraper, gpt-4o-search, google-ai-mode-scraper, gemini-scraper, claude-sonnet-4, microsoft-copilot-scraper, claude-haiku-4.5, claude-3.5-haiku, gpt-4o, gpt-3.5-turbo, gemini-2.5-flash, sonar, deepseek-r1, llama-3.3-70b-instruct, grok-scraper, grok-api, llama-sonar)

  • tag_id: custom user-defined tags

  • topic_id: topic groupings

  • date: (YYYY-MM-DD format)

  • country_code: country (ISO 3166-1 alpha-2, e.g. "US", "DE")

  • chat_id: individual AI chat/conversation ID

Filters use {field, operator, values} where operator is "in" or "not_in". Filterable fields: model_id, tag_id, topic_id, prompt_id, brand_id, country_code, chat_id.

Parameters (JSON Schema)

  • limit (optional): Maximum number of results to return
  • offset (optional): Number of results to skip
  • filters (optional): Filter results using {field, operator, values}. Operators: 'in', 'not_in'. Fields: model_id, tag_id, topic_id, prompt_id, brand_id, country_code, chat_id. Multiple filters are AND'd together.
  • end_date (required): End date in ISO format (YYYY-MM-DD)
  • dimensions (optional): Break down results by one or more dimensions. prompt_id = individual search queries, model_id = AI search engine, tag_id = custom tags, topic_id = topic groupings, country_code = country, chat_id = individual AI chat/conversation. When set, each row includes the corresponding dimension object.
  • project_id (required): The project ID
  • start_date (required): Start date in ISO format (YYYY-MM-DD)
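To make the parameter shapes concrete, here is an illustrative arguments object for a get_brand_report call, built from the documented schema: a daily breakdown per engine with an AND'd filter. All IDs and values are placeholders, not real project data:

```python
import json

# Hypothetical arguments: daily brand visibility on two AI engines for
# January 2025. "proj_123" is a placeholder for an ID from list_projects.
arguments = {
    "project_id": "proj_123",
    "start_date": "2025-01-01",
    "end_date": "2025-01-31",
    "dimensions": ["date", "model_id"],   # one row per day per engine
    "filters": [                          # multiple filters are AND'd together
        {
            "field": "model_id",
            "operator": "in",
            "values": ["chatgpt-scraper", "perplexity-scraper"],
        }
    ],
    "limit": 100,
}
print(json.dumps(arguments, indent=2))
```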
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations only indicate readOnlyHint=true. The description adds substantial context: explains default aggregation vs dimensional breakdown, documents the return structure {data: rows[], total: number}, and defines metric semantics (visibility 0-1 ratio, sentiment 0-100 scale, position ranking). Does not mention rate limits or auth requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections: purpose, aggregation note, return format, field definitions, dimension explanations, and filter syntax. Front-loaded with core purpose. While lengthy, every paragraph serves a distinct function without redundancy. Could be slightly more condensed but appropriately comprehensive for the complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Exceptionally complete given no output schema exists. Compensates by fully documenting the return structure, enumerating all row fields with types and ranges, explaining all 7 dimension options, and detailing filter operators. Combined with 100% schema coverage for inputs, provides everything needed for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with good descriptions, establishing baseline 3. The description adds significant value by explaining dimension semantics (e.g., model_id includes 'chatgpt-scraper', 'perplexity-scraper'), filter syntax patterns {field, operator, values}, and clarifying that multiple filters are AND'd together.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb and resource: 'Get a report on brand visibility, sentiment, and position across AI search engines.' It clearly distinguishes from siblings like get_domain_report or get_url_report by focusing on brand-specific metrics (visibility, sentiment, share of voice) rather than domain or URL analytics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear guidance on aggregation behavior: 'Results are aggregated for the entire date range by default. Use the 'date' dimension for daily breakdowns.' While it doesn't explicitly name sibling alternatives, the specific brand metrics implicitly guide when to use this vs domain/url reports. Could be improved with explicit comparison to get_domain_report.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_domain_report (Get Domain Report): A
Read-only

Get a report on source domain visibility and citations across AI search engines.

Results are aggregated for the entire date range by default. Use the "date" dimension for daily breakdowns.

Returns {data: rows[]}. Each row contains:

  • domain: the source domain (e.g. "example.com")

  • classification: domain type — CORPORATE (official company sites), EDITORIAL (news, blogs, magazines), INSTITUTIONAL (government, education, nonprofit), UGC (social media, forums, communities), REFERENCE (encyclopedias, documentation), COMPETITOR (direct competitors), OWN (the user's own domains), OTHER, or null

  • retrieved_percentage: 0–1 ratio — fraction of chats that included at least one URL from this domain. 0.30 means 30% of chats.

  • retrieval_rate: average number of URLs from this domain pulled per chat. Can exceed 1.0 — values above 1.0 mean multiple pages from the same domain are retrieved per conversation.

  • citation_rate: average number of inline citations when this domain is retrieved. Can exceed 1.0 — higher values indicate stronger content authority.

When dimensions are selected, rows also include the relevant dimension objects: prompt ({id}), model ({id}), tag ({id}), topic ({id}), chat ({id}), date, country_code.

Dimensions explained:

  • prompt_id: individual search queries/prompts

  • model_id: AI search engine (e.g. chatgpt-scraper, perplexity-scraper, google-ai-overview-scraper, gpt-4o-search, google-ai-mode-scraper, gemini-scraper, claude-sonnet-4, microsoft-copilot-scraper, claude-haiku-4.5, claude-3.5-haiku, gpt-4o, gpt-3.5-turbo, gemini-2.5-flash, sonar, deepseek-r1, llama-3.3-70b-instruct, grok-scraper, grok-api, llama-sonar)

  • tag_id: custom user-defined tags

  • topic_id: topic groupings

  • date: (YYYY-MM-DD format)

  • country_code: country (ISO 3166-1 alpha-2, e.g. "US", "DE")

  • chat_id: individual AI chat/conversation ID

Filters use {field, operator, values} where operator is "in" or "not_in". Filterable fields: model_id, tag_id, topic_id, prompt_id, domain, url, country_code, chat_id.

Parameters (JSON Schema)

  • limit (optional): Maximum number of results to return
  • offset (optional): Number of results to skip
  • filters (optional): Filter results using {field, operator, values}. Operators: 'in', 'not_in'. Fields: model_id, tag_id, topic_id, prompt_id, domain, url, country_code, chat_id. Multiple filters are AND'd together.
  • end_date (required): End date in ISO format (YYYY-MM-DD)
  • dimensions (optional): Break down results by one or more dimensions. prompt_id = individual search queries, model_id = AI search engine, tag_id = custom tags, topic_id = topic groupings, country_code = country, chat_id = individual AI chat/conversation. When set, each row includes the corresponding dimension object.
  • project_id (required): The project ID
  • start_date (required): Start date in ISO format (YYYY-MM-DD)
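One common way to read a domain report is to bucket rows by their classification enum to see which kinds of sites AI engines draw on. A sketch over made-up rows shaped like the documented output:

```python
from collections import defaultdict

# Group domain-report rows by the documented classification enum.
# The rows below are invented sample data, not real report output.
rows = [
    {"domain": "example.com", "classification": "CORPORATE", "citation_rate": 1.2},
    {"domain": "news.example.org", "classification": "EDITORIAL", "citation_rate": 0.8},
    {"domain": "forum.example.net", "classification": "UGC", "citation_rate": 0.5},
    {"domain": "blog.example.org", "classification": "EDITORIAL", "citation_rate": 1.0},
]

by_class = defaultdict(list)
for row in rows:
    by_class[row["classification"]].append(row["domain"])

print(dict(by_class))
# {'CORPORATE': ['example.com'], 'EDITORIAL': ['news.example.org', 'blog.example.org'], 'UGC': ['forum.example.net']}
```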
Behavior: 5/5

Extensive behavioral disclosure beyond the readOnlyHint annotation: explains default aggregation behavior, how dimensions modify row structure, detailed metric definitions (retrieved_percentage, retrieval_rate, citation_rate), and filter syntax ({field, operator, values}).

Conciseness: 4/5

Well-structured with front-loaded purpose statement. Though lengthy, every sentence delivers value: metric explanations, classification taxonomy, and dimension behaviors. Could be slightly more concise but density is justified by complexity.

Completeness: 5/5

Compensates excellently for missing output schema by thoroughly documenting return structure ({data: rows[]}), field definitions (domain, classification enum values, all metrics), and dimension effects. Complete coverage for a complex reporting tool.

Parameters: 4/5

While schema has 100% coverage, the description adds crucial semantic context for dimensions (explaining that prompt_id represents individual search queries, model_id represents specific AI engines, etc.) and clarifies filter operators. This adds meaning beyond raw schema definitions.

Purpose: 5/5

The description opens with a specific verb ('Get') and clearly identifies the resource (domain visibility/citations report) and scope (across AI search engines). It distinguishes from siblings like get_brand_report and get_url_report by specifying 'source domain' analysis.

Usage Guidelines: 4/5

Provides clear context about what the tool analyzes (domain-level metrics) and guidance on using the 'date' dimension for daily breakdowns vs default aggregation. However, it lacks explicit comparison to siblings (e.g., when to choose this over get_brand_report).

get_url_report (Get URL Report): A
Read-only

Get a report on source URL visibility and citations across AI search engines.

Results are aggregated for the entire date range by default. Use the "date" dimension for daily breakdowns.

Returns {data: rows[]}. Each row contains:

  • url: the full source URL (e.g. "https://example.com/page")

  • classification: page type — HOMEPAGE, CATEGORY_PAGE, PRODUCT_PAGE, LISTICLE (list-structured articles), COMPARISON (product/service comparisons), PROFILE (directory entries like G2 or Yelp), ALTERNATIVE (alternatives-to articles), DISCUSSION (forums, comment threads), HOW_TO_GUIDE, ARTICLE (general editorial content), OTHER, or null

  • title: page title or null

  • citation_count: total number of explicit citations across all chats

  • retrievals: total number of times this URL was used as a source, regardless of whether it was cited

  • citation_rate: average number of inline citations per chat when this URL is retrieved. Can exceed 1.0 — higher values indicate more authoritative content.

When dimensions are selected, rows also include the relevant dimension objects: prompt ({id}), model ({id}), tag ({id}), topic ({id}), chat ({id}), date, country_code.

Dimensions explained:

  • prompt_id: individual search queries/prompts

  • model_id: AI search engine (e.g. chatgpt-scraper, perplexity-scraper, google-ai-overview-scraper, gpt-4o-search, google-ai-mode-scraper, gemini-scraper, claude-sonnet-4, microsoft-copilot-scraper, claude-haiku-4.5, claude-3.5-haiku, gpt-4o, gpt-3.5-turbo, gemini-2.5-flash, sonar, deepseek-r1, llama-3.3-70b-instruct, grok-scraper, grok-api, llama-sonar)

  • tag_id: custom user-defined tags

  • topic_id: topic groupings

  • date: (YYYY-MM-DD format)

  • country_code: country (ISO 3166-1 alpha-2, e.g. "US", "DE")

  • chat_id: individual AI chat/conversation ID

Filters use {field, operator, values} where operator is "in" or "not_in". Filterable fields: model_id, tag_id, topic_id, prompt_id, domain, url, country_code, chat_id.

Parameters (JSON Schema)

  • limit (optional): Maximum number of results to return
  • offset (optional): Number of results to skip
  • filters (optional): Filter results using {field, operator, values}. Operators: 'in', 'not_in'. Fields: model_id, tag_id, topic_id, prompt_id, domain, url, country_code, chat_id. Multiple filters are AND'd together.
  • end_date (required): End date in ISO format (YYYY-MM-DD)
  • dimensions (optional): Break down results by one or more dimensions. prompt_id = individual search queries, model_id = AI search engine, tag_id = custom tags, topic_id = topic groupings, country_code = country, chat_id = individual AI chat/conversation. When set, each row includes the corresponding dimension object.
  • project_id (required): The project ID
  • start_date (required): Start date in ISO format (YYYY-MM-DD)
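URL reports can return many rows, so the shared limit/offset parameters matter in practice. A paging sketch under stated assumptions: `call_tool` is a hypothetical stand-in for however your MCP client invokes a server tool, and it must return the documented {data: [...]} shape:

```python
# Collect all rows of a paginated tool by stepping the offset until a
# short page signals the end. `call_tool` is hypothetical, not a real API.
def fetch_all(call_tool, tool_name: str, args: dict, page_size: int = 100) -> list:
    rows, offset = [], 0
    while True:
        page = call_tool(tool_name, {**args, "limit": page_size, "offset": offset})
        batch = page.get("data", [])
        rows.extend(batch)
        if len(batch) < page_size:  # short (or empty) page: no more results
            return rows
        offset += page_size
```

The short-page stop condition avoids relying on a total count, which the domain and URL reports do not document.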
Behavior: 5/5

Despite the readOnlyHint annotation indicating a safe read operation, the description adds substantial behavioral context: explains default aggregation behavior, documents the complete return structure ({data: rows[]}), defines all row fields including the classification enum values (HOMEPAGE, CATEGORY_PAGE, etc.), and clarifies that citation_rate can exceed 1.0. Effectively compensates for the lack of output schema.

Conciseness: 4/5

While lengthy, the structure is well-organized with the purpose front-loaded, followed by aggregation notes, return structure documentation, and dimension/filter explanations. Given the tool's complexity (nested filters, 7 parameters, complex output shape), the verbosity is justified and every section provides necessary information that the schema cannot convey.

Completeness: 5/5

Excellent completeness given the complexity. The description fully documents the output format (compensating for no output schema), enumerates all classification types, explains all dimension options, and details filter operators and valid fields. Provides sufficient information for an agent to construct valid requests and interpret responses without external documentation.

Parameters: 4/5

With 100% schema coverage, the baseline is 3. The description adds significant semantic value by explaining the filter structure syntax ({field, operator, values}), detailing what each dimension represents (e.g., prompt_id as 'individual search queries'), and providing concrete examples like ISO date formats and country codes. Adds meaningful usage context beyond the schema definitions.

Purpose: 5/5

The opening sentence states the specific action ('Get a report') and resource ('source URL visibility and citations across AI search engines'), clearly distinguishing from sibling tools like get_domain_report or get_brand_report by focusing on granular URL-level data rather than aggregated domain or brand metrics.

Usage Guidelines: 4/5

Provides explicit guidance on when to use the 'date' dimension ('Use the date dimension for daily breakdowns' vs aggregated defaults) and explains how dimension selection affects output structure. Lacks explicit comparison to siblings (e.g., 'use get_domain_report for domain-level aggregation'), but the scope is distinct enough.

list_brands (List Brands): A
Read-only

List brands tracked in a project — includes the user's own brand and competitors. Use brand IDs to filter reports (brand_id filter). Returns {data: [{id, name, domains, is_own}]}. is_own indicates which brand belongs to the user.

Parameters (JSON Schema)

  • limit (optional): Maximum number of results to return
  • offset (optional): Number of results to skip
  • project_id (required): The project ID
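A typical use of this response is splitting the user's own brand from competitors via the documented is_own flag, then feeding competitor IDs into a report filter. A sketch over an invented response:

```python
# Separate own brand from competitors in a list_brands-shaped response.
# The response below is made-up sample data matching the documented shape.
response = {
    "data": [
        {"id": "b1", "name": "Acme", "domains": ["acme.example"], "is_own": True},
        {"id": "b2", "name": "Rival", "domains": ["rival.example"], "is_own": False},
    ]
}

own = [b for b in response["data"] if b["is_own"]]
competitors = [b["id"] for b in response["data"] if not b["is_own"]]

# Competitor IDs plug into the brand_id filter of get_brand_report.
competitor_filter = {"field": "brand_id", "operator": "in", "values": competitors}
print(competitor_filter)  # {'field': 'brand_id', 'operator': 'in', 'values': ['b2']}
```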
Behavior: 4/5

Annotations only indicate readOnlyHint=true. Description significantly augments this by disclosing return structure '{data: [{id, name, domains, is_own}]}', explaining the 'is_own' field semantics, and clarifying the data composition (own brand + competitors). No contradictions.

Conciseness: 4/5

Four focused sentences front-loaded with purpose. Each sentence delivers distinct value (scope, usage, return shape, field definition). Minor redundancy possible between sentences 3 and 4 regarding return value explanation.

Completeness: 4/5

No output schema exists, but description compensates by documenting return structure and field meanings. Covers essential context for a read-only list operation with standard pagination. Could mention pagination behavior explicitly, but limit/offset are self-documenting.

Parameters: 3/5

Schema description coverage is 100% (all 3 parameters fully documented). Description mentions 'project' contextually but does not elaborate on parameter syntax, formats, or constraints beyond the schema. Baseline 3 appropriate when schema carries full burden.

Purpose: 5/5

Clear specific verb ('List') + resource ('brands') + scope ('in a project'). Explicitly distinguishes content by noting it includes both 'user's own brand and competitors', differentiating it from generic list tools.

Usage Guidelines: 4/5

Provides clear positive guidance by stating brand IDs are used to 'filter reports (brand_id filter)', implicitly connecting to sibling report tools. Lacks explicit negative guidance (when not to use) or named alternative tools.

list_projects (List Projects): A
Read-only

List all projects the authenticated user has access to. Returns {data: [{id, name, status}]}. The id is used as project_id in other tools. Call this first to discover available projects.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Annotations declare readOnlyHint=true (safe read), so description focuses on adding return structure documentation '{data: [{id, name, status}]}' and discovery pattern. Adds valuable context beyond annotations since no output schema exists.

Conciseness: 5/5

Three sentences with zero waste: (1) purpose/scope, (2) return format and ID usage, (3) workflow guidance. Front-loaded with the core action and appropriately sized for complexity.

Completeness: 5/5

Fully compensates for missing output schema by documenting return structure. Covers discovery workflow and ID relationships. With readOnly annotation and zero parameters, the description provides complete coverage of all behavioral concerns.

Parameters: 4/5

Zero parameters present, which per rubric establishes a baseline of 4. No parameter information required or provided, which is appropriate for this simple list operation.

Purpose: 5/5

Specific verb 'List' with clear resource 'projects' and scope 'authenticated user has access to'. Mentions that returned IDs are used as 'project_id' in other tools, distinguishing it from sibling list operations like list_brands or list_tags.

Usage Guidelines: 5/5

Explicit workflow guidance: 'Call this first to discover available projects' establishes precedence. Also explains output purpose: 'The id is used as project_id in other tools', clarifying how this bridges to sibling tools that require project context.

list_prompts (List Prompts): A
Read-only

List prompts (conversational questions tracked daily across AI engines) in a project. Supports filtering by topic_id and tag_id. Use prompt IDs to filter reports (prompt_id filter or dimension). Returns {data: [{id, text, tags: [{id}], topic: {id} | null}]}.

Parameters (JSON Schema)

  • limit (optional): Maximum number of results to return
  • offset (optional): Number of results to skip
  • tag_id (optional): Filter prompts by tag ID
  • topic_id (optional): Filter prompts by topic ID
  • project_id (required): The project ID
Behavior: 4/5

Annotations only indicate readOnlyHint=true. Description adds significant value: defines what prompts represent (domain context), discloses return structure {data: [...]} since no output schema exists, and explains daily tracking behavior.

Conciseness: 5/5

Two dense sentences plus return structure declaration. Every clause earns its place: purpose, definition, filters, usage guidance, and return shape. No redundancy or filler.

Completeness: 5/5

Excellent completeness given constraints. Compensates for missing output schema by explicitly documenting return structure. Covers required project context and filterable dimensions comprehensively.

Parameters: 3/5

Schema coverage is 100% with clear property descriptions ('Filter prompts by...'). Description mentions 'supports filtering by topic_id and tag_id' which adds minimal semantic depth beyond schema, meeting the baseline for well-documented schemas.

Purpose: 4/5

States specific verb (List) and resource (prompts), with parenthetical clarifying that prompts are conversational questions tracked daily. Mentions relationship to reports, distinguishing from sibling report tools, though it doesn't explicitly name them as alternatives.

Usage Guidelines: 4/5

Provides clear positive guidance: 'Use prompt IDs to filter reports (prompt_id filter or dimension),' establishing the workflow purpose. Lacks explicit negative guidance (when not to use), but effectively explains the tool's role in the ecosystem.

list_tags (List Tags): A
Read-only

List tags in a project. Tags are cross-cutting labels that can be assigned to any prompt. Use tag IDs to filter prompts (list_prompts) or break down reports by tag (tag_id dimension). Returns {data: [{id, name}]}.

Parameters (JSON Schema)

  • limit (optional): Maximum number of results to return
  • offset (optional): Number of results to skip
  • project_id (required): The project ID
Behavior: 4/5

Annotations confirm readOnlyHint=true (safe read). Description adds valuable return structure disclosure '{data: [{id, name}]}' not present in schema or annotations. Documents the output shape, which aids in response handling. Does not disclose pagination behavior or rate limits.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, zero waste. First declares purpose, second explains the domain concept (tags), third connects to sibling workflows, fourth states the return format. Front-loaded and appropriately sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list operation with 3 parameters and no output schema, description compensates by detailing return structure. Cross-references siblings (list_prompts, reports). Could mention that results respect limit/offset pagination, but adequately complete overall.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with complete descriptions for limit, offset, and project_id. Description mentions 'in a project' which maps to project_id parameter but adds no semantic detail beyond what schema already provides. Baseline 3 appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb+resource ('List tags in a project'). Explains tag semantics ('cross-cutting labels that can be assigned to any prompt'). Explicitly distinguishes from sibling list_prompts by stating this tool provides the IDs needed for that one.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly references sibling tool list_prompts for filtering workflows and mentions report breakdowns ('tag_id dimension'). Provides clear usage context for when to invoke (to obtain tags for filtering/breakdowns). Lacks explicit negative guidance (when NOT to use).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_topics (List Topics): A
Read-only

List topics in a project. Topics are folder-like groupings — each prompt belongs to exactly one topic. Use topic IDs to filter prompts (list_prompts) or break down reports by topic (topic_id dimension). Returns {data: [{id, name}]}.

Parameters (JSON Schema; no defaults listed)

Name        Required  Description
limit       No        Maximum number of results to return
offset      No        Number of results to skip
project_id  Yes       The project ID
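The one-topic-per-prompt cardinality makes topics a safe grouping key on the client side. A hedged sketch, assuming hypothetical prompt records carrying `topic_id` and `text` fields (field names the server does not document) alongside the documented list_topics return shape:

```python
from collections import defaultdict

def prompts_by_topic(topics_response, prompts):
    """Group prompt texts under their single topic's name.

    topics_response follows the documented {data: [{id, name}]} shape;
    each prompt dict is assumed to expose topic_id and text.
    """
    names = {t["id"]: t["name"] for t in topics_response["data"]}
    grouped = defaultdict(list)
    for p in prompts:
        # each prompt belongs to exactly one topic, so no de-duplication needed
        grouped[names[p["topic_id"]]].append(p["text"])
    return dict(grouped)
```

Because the membership is exactly-one rather than many-to-many (as with tags), each prompt lands in exactly one bucket.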
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnly safety; description adds crucial domain model ('folder-like groupings'), cardinality constraint ('exactly one topic'), and critically discloses return structure '{data: [{id, name}]}' since no output schema exists. Does not cover auth needs or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences efficiently structured: purpose → domain model → usage guidance → return format. Zero waste, front-loaded with action verb, appropriate length for complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete despite the lack of an output schema because the description explicitly states the return format. On top of 100% input-schema coverage, it adds the conceptual relationships (prompts, reports), with annotations covering the safety profile.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (limit, offset, project_id all documented). Description mentions 'in a project' implying project_id context, but adds no syntax, format details, or validation rules beyond what the schema already provides. Baseline 3 appropriate given schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb (List) + resource (topics) + scope (in a project). Distinguishes from sibling tools by defining topics as 'folder-like groupings' and explaining cardinality with prompts ('each prompt belongs to exactly one topic'), clearly differentiating from list_prompts and list_tags.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly links output to downstream usage: 'Use topic IDs to filter prompts (list_prompts) or break down reports by topic (topic_id dimension)'. Names specific sibling tools (list_prompts implies the get_*_report tools via 'break down reports'). Lacks explicit 'when not to use' exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
