Saber
Server Details
Sales intelligence — research companies, qualify prospects, and find contacts.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
62 tools

company_lists-count_preview · B
Preview expected company count and credit cost
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The mention of 'credit cost' adds valuable financial context beyond the annotations, warning that this operation has billing implications despite being labeled a 'preview.' However, with readOnlyHint: false in annotations, the description does not clarify why a preview operation requires write permissions or what state changes (logging, credit holds) occur.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-sentence description is extremely terse with no wasted words, front-loading the key outputs (count and credit cost) immediately. However, for a tool with complex filtering capabilities and financial implications, the brevity approaches under-specification rather than optimal conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex nested filter schema and absence of an output schema, the description inadequately explains what the preview returns (e.g., exact count vs. estimate, credit cost format) or behavioral side effects. The financial and operational context is insufficient for safe agent invocation of a credit-consuming operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not discuss parameters, but with 100% schema description coverage, the input schema already comprehensively documents the complex filter object including names, sizes, locations, and exclusions. The baseline score applies since the schema carries the full burden of parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Preview expected company count and credit cost' clearly identifies the action (preview) and outputs (count, credit cost), distinguishing this estimation tool from sibling retrieval operations like company_lists-get_companies. However, it misses the opportunity to explicitly differentiate from the actual export/creation tools or clarify that this validates filter criteria.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this preview versus actually retrieving data via company_lists-get_companies or exporting via company_lists-export. It fails to mention prerequisites, such as requiring refined filters to avoid wasting credits on oversized result sets.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
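The critiques above can be folded into a fuller definition. A hypothetical rewrite in Python dict form: the wording, the stated reason for `readOnlyHint: false` (call logging, credit holds), and the `idempotentHint` value are assumptions for illustration, not documented server behavior.

```python
# Hypothetical rewrite of the count_preview tool definition, addressing the
# review's points about side effects and sibling differentiation. All claims
# in the description are illustrative assumptions, not confirmed behavior.
count_preview_tool = {
    "name": "company_lists-count_preview",
    "description": (
        "Preview the expected company count and credit cost for a filter "
        "before running a credit-consuming export. Returns an estimate, "
        "not an exact count. Not read-only because the call is logged and "
        "may place a temporary credit hold. To retrieve actual records, "
        "use company_lists-get_companies instead."
    ),
    "annotations": {
        "readOnlyHint": False,   # matches the server's existing annotation
        "idempotentHint": True,  # assumption: previewing twice is harmless
        "destructiveHint": False,
    },
}
```

A description like this answers the three gaps flagged above (why it is not read-only, what the output means, and which sibling to use for retrieval) in under 60 words.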
company_lists-create · C
Create a company list
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds no behavioral context beyond what the annotations provide (write operation, non-idempotent, non-destructive). It fails to explain that this creates a persistent saved list, what the return value indicates, or how the complex filter criteria are applied.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While brief (4 words), this represents under-specification rather than effective conciseness. The single phrase wastes the opportunity to front-load critical context about the required filter structure or creation behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of the nested filter object (with 10+ filter types including exclusions, locations, technologies), the description is inadequate. It does not mention that filters are required, what the list contains, or how it relates to the output (which lacks a schema).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description provides no parameter information, but the input schema has 100% description coverage with detailed explanations for all filter fields (names, sizes, exclude criteria, etc.). Per the rubric, with high schema coverage the baseline is 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Create a company list' is a tautology that restates the tool name (company_lists-create). While it identifies the resource type, it fails to distinguish this tool from siblings like company_lists-import (which also creates lists via upload) or clarify the creation mechanism.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives (e.g., company_lists-import for bulk importing existing lists, or company_lists-update for modifying existing lists). There is no mention of prerequisites or expected workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
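The filter structure the review keeps referencing can be sketched concretely. The filter keys below (names, sizes, locations, technologies, exclude) are inferred from the review text; the exact syntax and nesting are assumptions, not the documented schema.

```python
# Illustrative __requestBody for company_lists-create. Field names are
# inferred from the review's mention of "names, sizes, locations, and
# exclusions"; the precise schema is an assumption.
create_request = {
    "name": "EU fintech prospects",
    "filter": {
        "names": ["fintech", "payments"],
        "sizes": ["51-200", "201-500"],
        "locations": [{"country": "DE"}, {"country": "NL"}],
        "technologies": ["Stripe"],
        "exclude": {"names": ["bank"]},  # exclusion criteria nest separately
    },
}
```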
company_lists-delete · C · Destructive · Idempotent
Delete a company list
| Name | Required | Description | Default |
|---|---|---|---|
| listId | Yes | The unique identifier of the company list to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations cover basic hints (destructive, idempotent, openWorld), the description adds zero behavioral context. It doesn't explain what idempotency means here (second delete returns 404 or 200?), what 'openWorld' implies for this resource, or what happens to contained companies when the list is deleted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at four words. While content-poor, it is appropriately sized for the simple operation and front-loaded with the critical action verb. No wasted sentences or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive operation (destructiveHint=true), the description is dangerously incomplete. It lacks critical safety context about data loss scope, recovery options, or side effects on related resources (companies in the list). Reliance on annotations alone is insufficient for destructive tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents the listId parameter. The description mentions no parameters, but the baseline score of 3 is appropriate since the schema carries the semantic burden adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Delete a company list' is tautological—it simply restates the tool name (company_lists-delete) in sentence form. It fails to distinguish this tool from the sibling contact_lists-delete or clarify scope (does it delete just the list container or the companies within?).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus alternatives (e.g., company_lists-update to modify vs. delete), prerequisites (empty list required?), or irreversibility warnings despite the destructive nature of the operation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
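For a destructive tool, most of the review's points could be addressed in the description string alone. A hypothetical rewrite follows; the claims about scope, irreversibility, and the 404-on-repeat behavior are illustrative assumptions, not confirmed server semantics.

```python
# Hypothetical description for company_lists-delete that discloses scope,
# irreversibility, idempotency semantics, and an alternative tool.
# Every behavioral claim here is an assumption for illustration.
delete_description = (
    "Permanently delete a saved company list by listId. Irreversible: the "
    "list cannot be recovered. Deletes only the list container; the "
    "companies it referenced are untouched. Deleting an already-deleted "
    "list returns 404. To rename a list or change its filters instead, "
    "use company_lists-update."
)
```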
company_lists-export · C
Export a company list as CSV
| Name | Required | Description | Default |
|---|---|---|---|
| listId | Yes | The unique identifier of the company list | |
| __requestBody | No | Request body | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate `readOnlyHint: false` and `idempotentHint: false`, suggesting this creates/modifies state (likely generating a file record or job), yet the description frames this as a simple data retrieval. It fails to disclose what side effects occur, where the CSV is stored, how to retrieve it, or why it is not idempotent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at six words with no redundancy or filler. However, given the tool's complexity (nested request body, side effects, signal templates), this brevity becomes under-specification rather than efficient communication.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Critically incomplete for an export tool with `readOnlyHint: false`. Missing: return value specification (file URL, download link, job ID, or stream), explanation of the generated artifact's lifecycle, and clarification on how `signalTemplateIds` affects CSV content. Annotations cover safety but not operational semantics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% at the top level, establishing a baseline. The description adds no context for the nested `fields` (column selection) or `signalTemplateIds` parameters, nor does it explain that `__requestBody` contains the export configuration options.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description identifies a specific action (Export), resource (company list), and output format (CSV), distinguishing it from siblings like `company_lists-get` (likely returns JSON) and `company_lists-import`. However, it omits scope details (e.g., whether it exports all companies or respects filters) and the delivery mechanism.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus `company_lists-get_companies` or `company_lists-search`. No mention of prerequisites (e.g., list must exist) or whether this is appropriate for one-time downloads versus API integrations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
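The review's credit concern suggests a natural agent workflow: run the preview before a credit-consuming export. A minimal sketch, where `call_tool` is a hypothetical MCP client helper and the response keys (`creditCost`, and whatever the export returns) are assumptions about the undocumented outputs:

```python
# Illustrative preview-then-export guard. `call_tool(name, arguments)` is a
# hypothetical MCP client helper; "creditCost" is an assumed response key.
def export_if_affordable(call_tool, list_id, body, max_credits):
    preview = call_tool("company_lists-count_preview", {"__requestBody": body})
    if preview["creditCost"] > max_credits:
        return None  # refine the filter rather than spend the credits
    return call_tool("company_lists-export",
                     {"listId": list_id, "__requestBody": body})
```

A description that named this workflow ("call count_preview first to estimate cost") would supply exactly the usage guidance the last two dimensions above found missing.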
company_lists-get · C · Read-only · Idempotent
Get a company list by ID
| Name | Required | Description | Default |
|---|---|---|---|
| listId | Yes | The unique identifier of the company list | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint, covering the safety profile. The description adds no behavioral context beyond this—omitting what data is returned (metadata vs. contents), caching behavior, or implications of openWorldHint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely compact at only five words with no filler. However, it may be overly terse given the potential for confusion with sibling tools; one additional clarifying sentence would improve utility without sacrificing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema (single string parameter) and rich annotations, the description is minimally adequate. However, the complexity of the sibling ecosystem (particularly 'company_lists-get_companies') means it lacks crucial context about what constitutes a 'company list' object versus its contained entities.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameter listId is already well-documented as 'The unique identifier of the company list'. The description mentions 'by ID' which aligns with the parameter, but adds no additional syntax guidance, format examples, or validation rules beyond the schema baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') with clear resource ('company list') and scope ('by ID'). However, it fails to distinguish from the sibling tool 'company_lists-get_companies' (which retrieves companies within a list), creating potential confusion between retrieving list metadata versus list contents.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'company_lists-list' (to browse all lists) or 'company_lists-get_companies' (to retrieve the actual companies within a list). No prerequisites or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_lists-get_companies · C · Read-only · Idempotent
Get companies in a list
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of companies to return (1–100) | 25 |
| listId | Yes | The unique identifier of the company list | |
| offset | No | Number of companies to skip for pagination | 0 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnlyHint=true and idempotentHint=true, the description adds no behavioral context beyond this. It does not mention pagination behavior (despite offset/limit parameters), what data is returned, or any rate limiting considerations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief at four words, which prevents verbosity but results in under-specification. While not wasting words, the single sentence does not fully earn its place given the missing contextual information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description should explain what data structure or fields are returned. It fails to do so. Combined with no usage guidelines and minimal behavioral context, the description is incomplete despite the well-documented input schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents all three parameters (listId, limit, offset). The description adds no additional parameter context, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb ('Get') and identifies the resource ('companies') and context ('in a list'), making the basic purpose understandable. However, it fails to distinguish from the sibling tool 'company_lists-get' which likely retrieves list metadata rather than the companies within it.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like 'company_lists-get' (which likely retrieves list metadata) or 'company_lists-search'. No prerequisites or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
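The documented limit/offset parameters imply a standard paging loop, which a one-line description could point agents toward. A sketch, where `call_tool` is a hypothetical MCP client helper and the `"companies"` response key is an assumption (the tool publishes no output schema):

```python
# Sketch of paging through company_lists-get_companies via limit/offset
# (limit 1-100 per the schema). `call_tool` and the "companies" response
# key are assumptions for illustration.
def iter_companies(call_tool, list_id, page_size=100):
    offset = 0
    while True:
        page = call_tool("company_lists-get_companies",
                         {"listId": list_id, "limit": page_size,
                          "offset": offset})
        batch = page["companies"]
        yield from batch
        if len(batch) < page_size:  # short page means no more results
            return
        offset += page_size
```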
company_lists-import · C
Import a company list from HubSpot
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations disclose the write operation (readOnlyHint: false) and external system interaction (openWorldHint: true), the description adds no behavioral context—such as whether the import is synchronous, what happens to existing list members, or error handling for invalid HubSpot filters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The six-word sentence contains no fluff and front-loads the action, but it is arguably too brief given the tool's complexity (nested filter objects, external API dependency), leaving it underspecified rather than optimally concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool involving external system integration, complex nested filtering, and list creation side effects, the description is insufficient. It omits expected behaviors, return value structure, and the relationship between the HubSpot filter and resulting list contents.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage including examples for 'name', 'type', and filter operators, the schema carries the semantic burden. The description adds no parameter-specific guidance, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the core action (import) and source (HubSpot), but lacks scope definition—it's unclear whether this creates a new list or updates an existing one, and it fails to differentiate from sibling 'company_lists-create'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus 'company_lists-create' or other list management tools, nor any mention of prerequisites like HubSpot authentication or existing connector setup.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_lists-list · C · Read-only · Idempotent
List all company lists
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of lists to return (1–100) | 20 |
| offset | No | Number of lists to skip for pagination | 0 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, establishing this as a safe read operation. The description adds no behavioral context beyond the tautology—no information about pagination behavior, caching, rate limiting, or what 'all' encompasses (user scope vs. organization scope). With annotations covering safety, the description meets minimal expectations but adds no value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief at four words. While it avoids verbosity, it borders on under-specification rather than effective conciseness. There is no structure to speak of—just a single phrase. It is front-loaded but sacrifices necessary detail for brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (2 optional parameters, no nested objects) and rich annotations/schema, the description could be adequate if it distinguished from siblings. However, the presence of company_lists-search and company_lists-get creates a gap—the description does not clarify when this unfiltered list is preferred over search. For a resource listing tool with no output schema, it minimally suffices but leaves significant gaps in sibling differentiation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for both limit and offset parameters, the baseline score is 3. The description adds no additional semantic context for these parameters (e.g., no mention of pagination strategy or recommended page sizes), but it does not need to compensate given the comprehensive schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'List all company lists' is essentially a tautology that restates the tool name (company_lists-list). While it identifies the verb (List) and resource (company lists), it fails to distinguish this tool from siblings like company_lists-search (filtered results) or company_lists-get (single item retrieval). The addition of 'all' is the only differentiator but lacks specificity regarding scope or filtering capabilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. Given the existence of company_lists-search and company_lists-get, the description should clarify when to use the full list versus search or single retrieval. There are no prerequisites, pagination guidance beyond the schema, or conditions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
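One way to encode the "use X instead of Y when Z" guidance this review asks for across the three overlapping read tools is a set of rewritten descriptions. The wording below is illustrative only, not the server's actual text:

```python
# Hypothetical descriptions that differentiate the three overlapping read
# tools. Wording is an assumption written to satisfy the rubric above.
descriptions = {
    "company_lists-list":
        "Enumerate all saved company lists (metadata only, paginated). "
        "Use when browsing; prefer company_lists-get for a known listId.",
    "company_lists-get":
        "Fetch one list's metadata by listId. Does not return the "
        "companies inside; use company_lists-get_companies for those.",
    "company_lists-get_companies":
        "Return the companies contained in a list, paginated via "
        "limit/offset. To query companies outside any saved list, use "
        "company_lists-search.",
}
```

Each entry names at least one sibling, which directly resolves the differentiation gap flagged in every tool's assessment.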
company_lists-search · C
Search companies matching a filter
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations indicate readOnlyHint=false and openWorldHint=true, the description adds no context about what side effects occur during a 'search' operation or why it isn't read-only. It doesn't explain return format, pagination, or whether results are persisted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While brief (5 words), it is inappropriately sized for the tool's complexity—too short to be informative. It lacks front-loaded key details about the search scope or return values, rendering the conciseness a liability rather than a virtue.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich input schema (14+ filter fields with nested objects) and absence of an output schema, the description fails to specify what the search returns (company IDs? full records? count?) or how to construct effective filters. Inadequate for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured documentation carries the parameter semantics. The description merely references 'a filter' without explaining the complex nested structure (exclude criteria, location objects, date ranges) or providing examples, meeting the baseline but adding no value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Search companies matching a filter' essentially restates the tool name (tautology) without clarifying whether this searches across all companies or within existing lists. It fails to distinguish from siblings like 'company_lists-get_companies' or 'contacts-search', leaving the scope ambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like 'company_lists-get_companies' (which likely retrieves existing list members) or 'contacts-search'. No mention of prerequisites, rate limits, or expected workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_lists-update · C · Idempotent
Update a company list
| Name | Required | Description | Default |
|---|---|---|---|
| listId | Yes | The unique identifier of the company list to update | |
| __requestBody | Yes | Request body | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare idempotentHint=true, readOnlyHint=false, and destructiveHint=false, covering the basic safety profile. However, the description adds no behavioral context beyond these annotations—such as whether updates are partial (PATCH) or full replacement (PUT), or how changing filters affects existing companies in the list.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief at four words, which technically minimizes word count. However, it is under-specified rather than appropriately concise—every sentence should earn its place, but this single sentence provides minimal value given the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex nested structure of the filter parameter (with multiple sub-objects for exclusion, location, founded dates, etc.) and the lack of an output schema, the description is inadequate. It fails to explain update semantics, return values, or side effects necessary for proper agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage per context signals, the input schema already documents the listId and __requestBody parameters (including nested filter fields) adequately. The description adds no parameter-specific guidance, but meets the baseline expectation when schema coverage is high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Update a company list' is a tautology that restates the tool name (company_lists-update). While it identifies the resource and action, it fails to specify what attributes can be updated (name, filter criteria) and does not distinguish from sibling operations like company_lists-create or company_lists-delete.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives (e.g., create vs. update), nor does it mention prerequisites like needing an existing listId. There are no exclusions or conditional usage patterns described.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_signals-create · Grade A
Create a company signal asynchronously — returns immediately with a pending status; poll the returned ID or receive the result via webhook
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body |
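The async lifecycle the description promises (an immediate pending status, then polling the returned ID) can be sketched as follows. `call_tool` is a hypothetical stand-in for whatever MCP client transport is in use, and the response fields (`status`, `answer`) are assumptions, since no output schema is published:

```python
import time

def poll_signal(call_tool, signal_id, interval=0.0, max_attempts=10):
    """Poll company_signals-get until the signal leaves 'pending'.

    `call_tool` is a hypothetical (tool_name, args) -> dict callable; the
    real MCP transport and response envelope are out of scope for this sketch.
    """
    for _ in range(max_attempts):
        result = call_tool("company_signals-get", {"signalId": signal_id})
        if result.get("status") != "pending":
            return result
        time.sleep(interval)
    raise TimeoutError(f"signal {signal_id} still pending after {max_attempts} polls")

# Stubbed transport: the signal completes on the third poll.
_responses = iter([
    {"status": "pending"},
    {"status": "pending"},
    {"status": "completed", "answer": "Yes, Acme sells B2B software."},
])

def fake_call_tool(name, args):
    assert name == "company_signals-get"
    return next(_responses)

final = poll_signal(fake_call_tool, "123e4567-e89b-12d3-a456-426614174000")
print(final["status"])  # completed
```

In production the webhook path mentioned in the description would replace this loop entirely; polling is the fallback when no callback endpoint is available.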
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds substantial behavioral context beyond annotations: explains the async pattern ('returns immediately with a pending status'), the polling mechanism, and webhook integration. Annotations only hint at external operations (openWorldHint:true) without explaining the async flow or result retrieval patterns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single well-structured sentence that front-loads the action, explains the async nature, and concludes with result handling options. Zero redundancy—every clause adds distinct value about behavior or result handling.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for the complexity level given 100% schema coverage, explaining the async lifecycle. However, gaps remain: no explanation of what 'company signals' are (domain concept), no differentiation from batch creation sibling, and no output schema to complement the input documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the rich input schema (with detailed examples for qualificationCriteria, connectors, etc.) carries the semantic load. Description mentions 'returned ID' for polling, adding some output context, but does not elaborate on input parameters beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb (Create) and resource (company signal) with async execution model specified. Lacks explicit distinction from sibling tool 'company_signals-create_batch', which also creates signals but in batch.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides operational guidance on result retrieval ('poll the returned ID or receive the result via webhook') implying the async interaction pattern. However, lacks explicit when-to-use guidance versus alternatives like the batch creation sibling or when to use webhook vs polling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_signals-create_batch · Grade A
Create multiple company signals in batch — combines domains and questions using a Cartesian product; use templates for batches over 100 signals
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body |
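The Cartesian-product combination logic is easy to verify client-side before spending credits. The field names below are assumptions (the `__requestBody` schema is opaque), but the counting logic follows directly from the description:

```python
from itertools import product

# Illustrative inputs; request-body field names are assumptions since
# __requestBody is opaque in the schema.
domains = ["acme.com", "globex.com", "initech.com"]
questions = [
    "Does the company sell B2B software?",
    "Has the company raised funding in the last 12 months?",
]

# Per the description, the batch endpoint combines domains and questions as a
# Cartesian product: 3 domains x 2 questions -> 6 signals.
batch = [{"domain": d, "question": q} for d, q in product(domains, questions)]

# The description recommends templates once a batch exceeds 100 signals.
TEMPLATE_THRESHOLD = 100
needs_template = len(batch) > TEMPLATE_THRESHOLD
print(len(batch), needs_template)  # 6 False
```

Checking `len(domains) * len(questions)` before submission is a cheap guard against accidentally crossing the 100-signal template threshold.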
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover the safety profile (not read-only, not destructive). The description adds valuable behavioral context about the Cartesian product combination logic and scaling recommendations. It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient clauses: the first defines the operation, the second provides the usage threshold. No filler words; every term earns its place despite the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Captures the core batch mechanism and scaling guidance, but given the complexity (async/sync modes, webhook options, nested signal objects), the description is minimal. It omits the async processing option and response shape differences mentioned in the schema descriptions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds conceptual mapping by referencing 'domains and questions' which correspond to the 'domains' and 'signals' parameters, and mentions 'templates' aligning with 'templateId'. No additional syntax or format details are provided beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create'), resource ('company signals'), and scope ('multiple...in batch'). It distinguishes from the singular sibling 'company_signals-create' by emphasizing batch processing and uniquely mentions the Cartesian product logic that defines this tool's behavior.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides specific operational guidance with the '100 signals' threshold for using templates. However, it lacks explicit 'when not to use' guidance or direct comparison to the singular 'company_signals-create' alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_signals-get · Grade A · Read-only · Idempotent
Get a company signal by ID — returns current status and AI-generated answer if completed
| Name | Required | Description | Default |
|---|---|---|---|
| signalId | Yes | The unique identifier of the signal (UUID format) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable response semantics beyond annotations: discloses that response contains status field and conditionally includes 'AI-generated answer' only when completed. Annotations cover safety (readOnly/idempotent) but description explains the business logic/return shape.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence with em-dash separation. Front-loaded action ('Get...') followed by return value description. No redundant words or repetition of schema/annotation data.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple getter with 100% schema coverage and good annotations, description adequately covers the missing output schema by explaining return values (status and conditional AI answer). Comprehensive enough for agent to understand full tool behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with complete signalId documentation (UUID format). The description mentions 'by ID', which aligns with the parameter's purpose but doesn't add semantic meaning beyond what the schema already provides. The baseline score of 3 is appropriate for a high-coverage schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' + resource 'company signal' + scope 'by ID'. Clearly distinguishes from sibling 'company_signals-list' (which returns multiple) by specifying single-item retrieval via ID.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context through 'by ID' (specific lookup vs browsing) and explains return payload ('current status and AI-generated answer if completed'), which signals when to poll this endpoint. Lacks explicit 'use instead of list when...' but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_signals-list · Grade C · Read-only · Idempotent
List company signals with optional filters for domain, company ID, date range, and status
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results per page | |
| domain | No | Filter signals by company domain (e.g., "acme.com") | |
| offset | No | Number of results to skip for pagination | |
| status | No | Filter by signal status (can be specified multiple times for multiple statuses) | |
| toDate | No | Filter signals completed on or before this date (RFC3339 format) | |
| fromDate | No | Filter signals completed on or after this date (RFC3339 format) | |
| companyId | No | Filter signals by company ID | |
| subscriptionId | No | Filter signals by subscription ID (UUID of the signal subscription that triggered execution) |
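Since the description omits pagination behavior, an agent has to infer the limit/offset contract from the parameter table. A sketch of draining every page, assuming each call returns a bare list (the real response envelope is undocumented) and using a stubbed transport in place of the live tool:

```python
def list_all_signals(call_tool, page_size=100, **filters):
    """Drain company_signals-list via offset pagination.

    `call_tool` is a hypothetical (tool_name, args) -> list-of-dicts wrapper;
    with no output schema published, this sketch assumes a bare list per page
    and treats a short page as the end of the result set.
    """
    offset = 0
    results = []
    while True:
        page = call_tool("company_signals-list",
                         {"limit": page_size, "offset": offset, **filters})
        results.extend(page)
        if len(page) < page_size:
            return results
        offset += page_size

# Stubbed transport over 250 fake signals, honoring the status filter only.
_dataset = [{"id": i, "status": "completed"} for i in range(250)]

def fake_call_tool(name, args):
    rows = [s for s in _dataset if s["status"] == args.get("status", s["status"])]
    return rows[args["offset"]:args["offset"] + args["limit"]]

signals = list_all_signals(fake_call_tool, status="completed",
                           fromDate="2024-01-01T00:00:00Z")  # RFC3339 per schema
print(len(signals))  # 250
```

The same pattern applies to company_signals-subscription_logs, which exposes an identical limit/offset pair.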
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
SafetyProfile annotations cover read-only/destructive/idempotent traits, but the description adds no behavioral context beyond this. It omits pagination behavior (limit/offset handling), result structure hints, or what constitutes a 'signal' in this business domain.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with a front-loaded verb and an efficient structure, though it spends its limited space enumerating parameter names that the schema already documents; beyond that repetition there is no filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given comprehensive annotations and 100% schema coverage, the description is minimally adequate. However, with no output schema provided, the description should have indicated return structure or pagination behavior to reach completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description redundantly lists parameter names (domain, company ID, date range, status) but adds no semantic meaning beyond what the schema already provides (e.g., no guidance on date format interpretation or subscriptionId relationships).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the basic action ('List') and resource ('company signals'), but 'company signals' is domain-specific jargon left undefined. It fails to distinguish from sibling tool 'company_signals-get' (likely for retrieving specific signals by ID) or explain how this differs from 'market_signals-list_signals'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions that filters are 'optional' but provides no guidance on when to use this tool versus alternatives like company_signals-get, or when to apply specific filters. No mention of pagination strategies or typical use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_signals-subscription_logs · Grade B · Read-only · Idempotent
List signal executions for a specific subscription
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results per page | |
| domain | No | Filter signals by company domain | |
| offset | No | Number of results to skip for pagination | |
| status | No | Filter by signal status (can be specified multiple times) | |
| toDate | No | Filter signals completed on or before this date (RFC3339 format) | |
| fromDate | No | Filter signals completed on or after this date (RFC3339 format) | |
| companyId | No | Filter signals by company ID | |
| subscriptionId | Yes | The UUID of the signal subscription |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint, idempotentHint, and destructiveHint, covering the safety profile. The description adds the context that this retrieves 'executions' (historical runs), which is meaningful beyond the annotations. However, it omits behavioral details like pagination behavior (despite limit/offset parameters), whether results are ordered chronologically, or what the openWorldHint implies for this specific dataset.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is efficiently structured with no wasted words. However, given the tool's complexity (8 parameters with rich filtering and pagination), it borders on underspecification rather than appropriate conciseness. The core message is front-loaded and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the core purpose is stated, the description inadequately represents the tool's rich filtering capabilities (date ranges, status filtering, company/domain filters) and pagination model. With no output schema provided, the description should ideally indicate what data structure or fields are returned for each execution, which it does not.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all 8 parameters including filtering options (domain, status, date ranges) and pagination. The description adds no parameter-specific guidance, earning the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a clear verb ('List') and resource ('signal executions') with scope ('for a specific subscription'). It effectively distinguishes this from sibling subscription management tools (like signal_subscriptions-get) by specifying it retrieves execution data rather than configuration. However, it does not explicitly clarify when to use this versus subscription_actions-list or other historical data tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like market_signals-list_signals or subscription_actions-list. It does not mention prerequisites (such as needing a valid subscription ID first) or exclusion criteria (e.g., only works for completed subscriptions).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connectors-list · Grade A · Read-only · Idempotent
List connectors and their connection status
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare the operation as read-only, idempotent, and non-destructive. The description adds minimal value by specifying that 'connection status' is included in the response, but it does not disclose pagination behavior, rate limits, or what constitutes a connector's 'status'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of a single, efficient sentence that front-loads the action and scope without filler words or redundant restatement of the tool name.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (no parameters, simple read operation) and rich annotations covering safety and idempotency, the description provides adequate context for invocation. It appropriately omits output schema details since none is provided, though mentioning pagination would have strengthened it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters in the input schema, the baseline score applies. The description does not need to compensate for missing schema documentation, as there are no parameters requiring semantic explanation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('List') and resource ('connectors and their connection status'), clearly distinguishing it from siblings like 'company_lists-list' or 'contact_lists-list' which handle different resources. It narrowly misses a 5 because it assumes domain knowledge of what 'connectors' refers to without contextualizing it.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers no guidance on when to use this tool versus alternatives, nor does it mention prerequisites or filtering limitations. While there are no direct sibling alternatives for 'connectors', the description fails to indicate appropriate usage contexts (e.g., polling vs. initial setup).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
contact_lists-create · Grade A
Create a contact list — runs a Sales Navigator search and stores a snapshot of matching contacts
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations by specifying 'snapshot,' which implies point-in-time data capture rather than live/refreshing lists. It also identifies the external system ('Sales Navigator') being queried. It does not disclose rate limits or sync behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient single sentence using an em-dash to separate the action from the mechanism. Every clause delivers essential information: the operation type, the external integration, and the storage behavior. Zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich schema (100% coverage) and presence of annotations covering mutability hints, the description adequately covers the tool's purpose. The 'snapshot' disclosure addresses the key behavioral gap left by structured data. Minor gap: no mention of output/return value behavior, though absence of output schema makes this acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured fields already document parameters thoroughly. The description adds implicit context that filters are used for Sales Navigator searches, but does not elaborate on specific parameter semantics beyond what the schema provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the action ('Create'), resource ('contact list'), and mechanism ('runs a Sales Navigator search and stores a snapshot'). It clearly distinguishes from sibling tools like contact_lists-import or contact_lists-get by specifying the search-and-snapshot workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through 'runs a Sales Navigator search,' suggesting it's for creating lists from search criteria rather than importing existing contacts. However, it lacks explicit guidance on when to use this versus contact_lists-import or other alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
contact_lists-delete · Grade A · Destructive · Idempotent
Delete a contact list and all its stored contacts
| Name | Required | Description | Default |
|---|---|---|---|
| listId | Yes | The unique identifier of the contact list |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate the operation is destructive and idempotent, but the description adds critical behavioral context not found in annotations: that deletion cascades to 'all its stored contacts' within the list. This clarifies the blast radius of the destructive operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is front-loaded with the action verb and contains zero redundancy. Every word earns its place by conveying both the operation and its cascading impact.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich annotations (destructive, idempotent hints) and simple single-parameter schema, the description is sufficiently complete. It successfully communicates the cascading delete behavior which is the key missing context from the structured metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description of listId. The description mentions 'contact list' generally but does not explicitly reference the listId parameter or add syntax guidance beyond the schema. Baseline 3 is appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States a specific verb ('Delete') and resource ('contact list') plus scope ('all its stored contacts'). However, it does not explicitly differentiate from sibling tools like contact_lists-update or company_lists-delete within the description text itself.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage guidance by warning that stored contacts are also deleted (cascading effect), which helps the agent understand the scope. Lacks explicit 'when to use' guidance or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
contact_lists-get · Grade A · Read-only · Idempotent
Get a contact list by ID — returns metadata and contact count
| Name | Required | Description | Default |
|---|---|---|---|
| listId | Yes | The unique identifier of the contact list |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, covering the safety profile. The description confirms the read-only nature implicitly by stating what is returned (metadata), but does not add additional behavioral context like rate limits, authentication requirements, or error behavior when the ID is not found.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact and efficiently structured in two clauses: the first states the action and resource, the second clarifies the return payload. Every element earns its place, with the em-dash effectively separating the operation from the output specification.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description appropriately compensates by specifying the return value contains 'metadata and contact count'. For a simple read operation with rich annotations and a single parameter, this is sufficient, though mentioning error handling (e.g., if ID not found) would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single 'listId' parameter. The description references the ID ('by ID'), aligning with the schema, but does not add semantic meaning beyond what the schema already provides. With high schema coverage, this baseline score is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('contact list') with clear scope ('by ID'). It effectively distinguishes from siblings like 'contact_lists-get_contacts' and 'contact_lists-list' by specifying the return value includes 'metadata and contact count' rather than the actual contact records.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through the return value specification (metadata vs. actual contacts), suggesting when to use this versus 'contact_lists-get_contacts'. However, it lacks explicit 'when to use' or 'when not to use' guidance and does not name sibling alternatives directly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
contact_lists-get_contacts (A) · Read-only · Idempotent
Get contacts in a list — returns the stored snapshot, no new Sales Navigator call
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of contacts to return (1–100, default 25) | |
| listId | Yes | The unique identifier of the contact list | |
| offset | No | Number of contacts to skip for pagination (default 0) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite annotations indicating read-only/idempotent status, the description adds valuable implementation context: it explicitly states the tool returns a 'stored snapshot' and makes 'no new Sales Navigator call', informing the agent about data freshness and external API usage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence that front-loads the action ('Get contacts in a list') and appends the critical behavioral qualifier. Zero wasted words; every phrase earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with three flat parameters and no output schema, the description is complete. It clarifies the return value ('stored snapshot') and distinguishes the data source behavior, though it could optionally mention pagination patterns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all three parameters (listId, limit, offset). The description does not add parameter-specific semantics beyond what the schema provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Get') + resource ('contacts in a list') and clearly distinguishes from sibling tool 'contact_lists-get' by specifying it retrieves the list's contacts rather than the list metadata itself.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'no new Sales Navigator call' provides clear behavioral context that implies when to use this tool (when cached/stored data is acceptable) versus live search alternatives, though it does not explicitly name the specific alternative tools like 'contacts-search'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
contact_lists-list (B) · Read-only · Idempotent
List all contact lists — paginated
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of lists to return (1–100, default 20) | |
| offset | No | Number of lists to skip for pagination (default 0) | |
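The limit/offset contract these parameters describe can be sketched as a short accumulation loop. This is a sketch under stated assumptions: the `call_tool` callable stands in for a real MCP client invoking `contact_lists-list`, and the list-of-rows return shape is assumed, since the server publishes no output schema.

```python
def fetch_all_contact_lists(call_tool, page_size=20):
    """Walk contact_lists-list pages until a short page signals the end.

    `call_tool` stands in for an MCP client invoking the tool; it is
    assumed to accept {"limit", "offset"} and return a list of rows.
    """
    results, offset = [], 0
    while True:
        page = call_tool({"limit": page_size, "offset": offset})
        results.extend(page)
        if len(page) < page_size:  # short page: no more data to fetch
            return results
        offset += page_size
```

A short final page is the only end-of-data signal assumed here; if the server instead reports a total count alongside each page, the loop condition would compare against that.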
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, destructive, idempotent). Description adds 'paginated' which discloses pagination behavior not captured in structured annotations, but omits details on return structure, rate limits, or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at five words. Core purpose front-loaded, with an em-dash separator for the behavioral modifier. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for low-complexity tool (2 optional params, flat schema). Rich annotations cover behavioral safety. Description sufficiently covers the pagination pattern; output schema absence acceptable for standard list operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage for limit/offset. Description mentions 'paginated' which provides semantic context for these parameters, but adds no syntax details beyond what the schema already provides. Baseline 3 appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'List' and resource 'contact lists' with scope 'all' and pagination hint. Distinguishes plural retrieval from single-record siblings (e.g., contact_lists-get) via the word 'all', though lacks explicit contrastive guidance.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this versus contact_lists-get (single record) or contact_lists-search (filtered). No prerequisites or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
contact_lists-update (B) · Idempotent
Rename a contact list
| Name | Required | Description | Default |
|---|---|---|---|
| listId | Yes | The unique identifier of the contact list | |
| __requestBody | Yes | Request body | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description aligns with annotations (rename implies non-destructive write, consistent with readOnlyHint=false and destructiveHint=false). However, it adds no context about the idempotentHint=true, openWorldHint=true, or what happens to existing contacts when a list is renamed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at 4 words, with no redundancy. However, given the nested object structure and 'additionalProperties: true' in the schema, the brevity leaves significant gaps in understanding the full parameter contract.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a mutation tool with nested objects. The description only mentions renaming, but the schema's 'additionalProperties: true' suggests other fields might be updateable. No mention of output behavior, error cases, or that only the name field is required in the request body.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description implies the 'name' parameter through the word 'Rename' but adds no semantic clarification beyond what the schema provides for listId or the nested request body structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific verb (rename) and resource (contact list), matching the tool name. However, it fails to differentiate from the similar sibling 'company_lists-update' or clarify that this only updates the name field vs other potential updates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus 'contact_lists-create' or prerequisites like obtaining the listId. No mention of idempotency behavior despite the operation being safe to retry.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
contacts-create_research (A)
Start a contact research job — AI gathers insights from LinkedIn and other sources asynchronously; use contacts.get_research to poll for results
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body | |
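The start-then-poll workflow the description prescribes can be sketched as follows. The wrapper callables and the `{"id"}` / `{"status", "insights"}` response shapes are assumptions standing in for real `contacts-create_research` and `contacts-get_research` calls, since the server publishes no output schemas.

```python
import time

def run_contact_research(start_job, poll_job, interval=1.0, max_attempts=30):
    """Start a research job, then poll until it reports completion.

    `start_job` and `poll_job` stand in for the MCP tool calls; the job
    record is assumed to carry an "id", and each poll a "status" plus,
    once completed, the AI-generated "insights".
    """
    job = start_job()
    for _ in range(max_attempts):
        result = poll_job(job["id"])
        if result["status"] == "completed":
            return result["insights"]
        time.sleep(interval)  # back off between polls
    raise TimeoutError(f"research {job['id']} not completed after {max_attempts} polls")
```

A fixed interval keeps the sketch simple; exponential backoff, or supplying a `webhookUrl` at creation time, would avoid polling entirely.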
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses external data sources ('LinkedIn and other sources'), confirming openWorldHint=true. Explains the async execution model not covered by annotations. Mentions AI involvement. Could add details about job lifecycle, failure modes, or webhook delivery guarantees.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two precise clauses: first defines the action and mechanism, second provides the procedural next step. Front-loaded with the primary verb. No redundant text. The em-dash and semicolon structure efficiently separates distinct pieces of information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Well-suited for an async job creation tool with no output schema. Explains the polling pattern via sibling reference and the external data nature. Given the complexity (AI + external sources + async), it covers the essential workflow, though mentioning webhook behavior or job retention would strengthen it further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3). Description adds semantic context by mentioning 'LinkedIn' (mapping to contactProfileUrl/linkedInSalesNavigatorUrl) and 'AI gathers insights' (explaining purpose of name/company fields). Does not explicitly explain webhookUrl's callback purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Start' + resource 'contact research job' clearly defines the action. Explicitly distinguishes from sibling 'contacts-get_research' by contrasting the initiation vs. polling actions, and implicitly distinguishes from 'contacts-create_signal' by specifying 'research'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly directs users to 'use contacts.get_research to poll for results,' establishing the async workflow pattern. States operation is 'asynchronously' executed. Could be improved by explicitly stating when NOT to use (e.g., when immediate results are required) or mentioning the webhookUrl alternative for notification.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
contacts-create_signal (A)
Create a contact signal asynchronously — returns immediately with a pending status; poll the returned ID or receive the result via webhook
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body | |
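The two retrieval paths the description names (poll the returned ID, or receive the result via webhook) can be sketched as a small dispatcher. The `webhookUrl` payload key and both response shapes are assumptions drawn from the parameter names this evaluation discusses, not a documented contract.

```python
import time

def create_and_resolve_signal(create_signal, get_signal, payload,
                              interval=1.0, max_attempts=30):
    """Create a contact signal, then resolve it by polling unless a
    webhook will deliver the result out of band.

    Assumed shapes: create returns {"signalId", "status": "pending"};
    a finished poll returns {"status": "completed", "answer": ...}.
    """
    pending = create_signal(payload)
    if payload.get("webhookUrl"):
        # The server pushes the completed signal to the webhook; return
        # the pending record so the caller can correlate it later.
        return pending
    for _ in range(max_attempts):
        result = get_signal(pending["signalId"])
        if result["status"] == "completed":
            return result
        time.sleep(interval)
    raise TimeoutError(f"signal {pending['signalId']} still pending")
```

The webhook branch trades latency for infrastructure: the caller must host an endpoint, but avoids burning polls on long-running signals.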
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the annotations (which indicate it's a non-idempotent write), the description adds crucial behavioral context about the async lifecycle: immediate return with pending status, polling mechanisms, and webhook delivery. This disclosure of the execution pattern and result retrieval methods is valuable behavioral transparency not present in the structured metadata.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence delivering four distinct pieces of information: the action, async nature, immediate return behavior, and result retrieval options. No words are wasted; the structure front-loads the core verb and efficiently packs operational details using em-dash and semicolon separation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 11+ parameters, nested objects, and an async workflow, the description adequately covers the mechanical execution pattern but lacks conceptual completeness. It doesn't explain what constitutes a 'signal' (AI-generated insight vs other data), the significance of parameters like 'answerType' or 'qualificationCriteria', or error handling scenarios. Given no output schema exists, more conceptual grounding was needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema adequately documents parameters (e.g., the forceRefresh description, connector configuration details). The description appropriately focuses on high-level behavior rather than repeating parameter semantics, meeting the baseline expectation when schema coverage is high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the core action ('Create a contact signal') and execution model ('asynchronously'), which distinguishes it from synchronous retrieval tools like 'contacts-get_signal'. However, it fails to clarify how a 'signal' differs semantically from 'research' (sibling tool 'contacts-create_research'), leaving ambiguity about which creation tool to use.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides concrete usage guidance for result retrieval ('poll the returned ID or receive the result via webhook'), implying this is appropriate for asynchronous workflows. However, it lacks explicit guidance on when to choose this over 'contacts-create_research' or other signal-related tools, and doesn't mention prerequisites like the webhook configuration.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
contacts-get_research (A) · Read-only · Idempotent
Get contact research by ID — returns status and AI-generated insights if completed
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The unique identifier of the contact research request | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable context beyond annotations: discloses this checks async job status ('returns status') and that insights are conditionally returned ('if completed'). Correctly implies read-only, idempotent behavior consistent with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence uses em-dash effectively to separate the action from the return value description. Front-loaded with verb and resource; zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description adequately compensates by disclosing return structure (status + conditional insights) and the incomplete-state possibility. Appropriate for a read-only retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the parameter 'id' is fully documented in the schema itself. Description mentions 'by ID' aligning with the schema but adds no additional syntax or format details beyond the structured definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Get) and resource (contact research) with scope (by ID). However, given sibling 'getContactResearchByExternalID' exists, the description could clarify this retrieves by the internal request ID, not external ID, to prevent confusion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'if completed' implies an async polling pattern (use after triggering research via create_research), but lacks explicit guidance on when to use versus siblings like contacts-create_research or getContactResearchByExternalID.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
contacts-get_signal (A) · Read-only · Idempotent
Get a contact signal by ID — returns current status and AI-generated answer if completed
| Name | Required | Description | Default |
|---|---|---|---|
| signalId | Yes | The unique identifier of the contact signal (UUID format) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, idempotent). Description adds valuable behavioral context: return payload includes 'current status' and conditional 'AI-generated answer if completed', revealing the signal lifecycle (pending → completed) without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence with zero waste. Front-loaded with action ('Get'), immediately qualified by resource and ID parameter, followed by return value description. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read operation with one parameter and no output schema, description adequately compensates by describing return values (status, conditional AI answer). Would be complete with output schema, but sufficient for tool complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage for the single signalId parameter. 'By ID' in description reinforces parameter purpose but doesn't add semantics beyond the well-documented schema. Baseline 3 appropriate for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States clear verb ('Get'), resource ('contact signal'), and scope ('by ID'). Distinguishes from sibling list/create operations via 'by ID' and mentions specific return content ('AI-generated answer'), though could explicitly differentiate from company_signals-get.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through 'by ID' (suggests use when you have a specific ID), but lacks explicit when/when-not guidance or named alternatives like contacts-list_signals for browsing without an ID.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
contacts-list_signals (B) · Read-only · Idempotent
List contact signals with optional filters for LinkedIn profile URL and pagination
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results per page | |
| offset | No | Number of results to skip for pagination | |
| contactProfileUrl | No | Filter signals by contact profile URL (LinkedIn or other professional profile) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Correctly implies read-only operation ('List') matching annotations (readOnlyHint=true, destructiveHint=false), and mentions 'pagination' which alludes to the limit/offset behavior. However, it doesn't leverage the openWorldHint or idempotentHint annotations to explain external data handling or retry safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, front-loaded sentence with zero redundancy. Every word conveys purpose (List), resource (contact signals), or parameters (filters, LinkedIn URL, pagination).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple 3-parameter tool with complete schema coverage and safety annotations. However, with no output schema, the description omits what 'contact signals' actually contain (activity events? profile changes?), which would help the agent determine whether this tool meets its information needs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description mentions 'LinkedIn profile URL' (specific instance of the contactProfileUrl parameter) and 'pagination' (grouping limit/offset), but adds minimal semantic value beyond the well-documented schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('List') and resource ('contact signals'), and mentions specific filter types. However, it fails to distinguish from sibling 'contacts-search' or 'contacts-get_signal' (singular), leaving ambiguity about when to use list vs search vs get.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus the sibling 'contacts-search', 'contacts-get_signal', or 'contacts-create_signal'. No prerequisites or conditions are specified.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
contacts-search (A)
Search for contacts at a company via LinkedIn Sales Navigator — requires a LinkedIn Sales Navigator connection on the API key owner's account
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds important behavioral context not in annotations: the external dependency on LinkedIn Sales Navigator, with the implication that searches will fail without a proper connection. Annotations cover idempotency and destructiveness, but the description could further clarify side effects like credit consumption or rate limiting.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence. The em-dash effectively separates the core function from the critical prerequisite without redundancy. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of annotations and 100% schema coverage, the description covers the essential prerequisites (LinkedIn Sales Navigator connection). However, it lacks context on return value structure, pagination behavior, or result limits given the absence of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured schema sufficiently documents the 6 search fields (keywords, names, jobTitles, etc.). The description provides no additional parameter guidance, which is acceptable given the high schema coverage, meeting the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb (Search), resource (contacts at a company), and data source (LinkedIn Sales Navigator). It effectively distinguishes this from sibling tools like contacts-get_research or company_lists-search by specifying the external platform and discovery intent, though it doesn't explicitly name siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a critical prerequisite (requires LinkedIn Sales Navigator connection on the API key owner's account), which guides the agent on authorization requirements. However, it lacks explicit guidance on when to use this tool versus alternatives like contacts-create_research or contact_lists-get_contacts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
credits-get_balance (A) · Read-only · Idempotent
Get remaining credits balance — returns how many API credits your organization has left in the current billing period
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds valuable behavioral context beyond these annotations by specifying the balance is for the 'current billing period' (temporal constraint) and scoped to 'your organization', which explains the data boundaries.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is efficiently structured with the action front-loaded ('Get remaining credits balance') followed by an em-dash explanation of return values. Every clause earns its place, concisely explaining both the operation and the return value given the absence of an output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and rich annotations covering safety properties, the description appropriately compensates for the missing output schema by explicitly stating what the tool returns ('how many API credits your organization has left'). The level of detail is sufficient for this low-complexity read operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4 per the scoring rules. The description appropriately does not discuss parameters since none exist, and no additional semantic clarification is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Get' with the resource 'credits balance', clearly distinguishing it from siblings which handle lists, signals, and contacts. It further clarifies the scope ('your organization') and temporal context ('current billing period'), making the exact purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what the tool returns but does not explicitly state when to use it versus alternatives or provide prerequisites. Usage is implied by the specific billing domain, but lacks explicit guidance such as 'check before making API calls' or conditions where this is required.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
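The missing usage guidance noted above ("check before making API calls") can be sketched as a pre-flight guard. This is a hypothetical illustration: `get_balance` is a stub standing in for the `credits-get_balance` tool, and its return shape (`remaining`, `period`) is an assumption, since no output schema is documented.

```python
# Hypothetical sketch: using a balance check as a pre-flight guard before
# credit-consuming calls. The return shape of get_balance is assumed.

def get_balance() -> dict:
    # Stub standing in for the credits-get_balance tool call.
    return {"remaining": 120, "period": "2024-06"}

def can_afford(estimated_cost: int) -> bool:
    """Return True if the current billing period has enough credits left."""
    return get_balance()["remaining"] >= estimated_cost

print(can_afford(50))   # within the stubbed balance
print(can_afford(500))  # exceeds the stubbed balance
```

An agent following this pattern would call the preview/count tools first, then gate expensive operations on the result.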
getContactResearchByExternalID (A) Read-only, Idempotent
Get contact research using an external system's ID (e.g. a HubSpot contact ID) and source name
| Name | Required | Description | Default |
|---|---|---|---|
| externalId | Yes | The external identifier of the contact research request (e.g., HubSpot contact ID) | |
| externalSource | Yes | The source system that provided the external ID (e.g., "hubspot") |
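The two required parameters pair an external identifier with the system that issued it. A minimal sketch of assembling these arguments, assuming only what the table documents (the lowercase normalization of the source name is an added assumption, not documented behavior):

```python
# Hypothetical sketch of the argument shape getContactResearchByExternalID
# expects: an external ID paired with its source system. Lowercasing the
# source name is an assumption for illustration.

def build_lookup_args(external_id: str, external_source: str) -> dict:
    """Assemble arguments for an external-ID contact research lookup."""
    if not external_id or not external_source:
        raise ValueError("both externalId and externalSource are required")
    return {
        "externalId": external_id,
        "externalSource": external_source.lower(),  # e.g. "hubspot"
    }

print(build_lookup_args("12345", "HubSpot"))
```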
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering the safety profile. The description adds domain-specific context about the external ID lookup pattern but does not disclose additional behavioral traits like error handling (what happens if the external ID is not found) or whether it queries external APIs in real-time.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the action ('Get') and includes a concrete example ('e.g. a HubSpot contact ID'). There is no redundant or wasted text; every word contributes to understanding the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple two-parameter lookup tool with complete schema documentation and comprehensive annotations, the description is appropriately complete. It could be improved by noting error behavior (e.g., 'returns null if external ID not found'), but the essential domain context (external ID lookup) is present.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input parameters are fully documented in the schema itself. The description reinforces the semantics by mentioning 'external system's ID' and 'source name' but does not add significant meaning beyond what the schema already provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('contact research'), clearly defining the operation. It effectively distinguishes from sibling tool 'contacts-get_research' by specifying 'external system's ID' (e.g., HubSpot), making the unique lookup mechanism immediately apparent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context for when to use the tool (when you have an external system's ID) through the HubSpot example, but does not explicitly contrast with alternatives like 'contacts-get_research' or state when NOT to use it (e.g., when you have internal contact IDs).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market_signals-create_subscription (B)
Create a market signal subscription to monitor job posts, LinkedIn posts, fundraising, investments, or IPOs
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover the basic safety profile (readOnly=false, destructive=false, idempotent=false). The description adds the scope of monitoring but fails to disclose critical behavioral traits: that this establishes an ongoing polling subscription, that it delivers results periodically to the webhookUrl, or that multiple calls create duplicate subscriptions (though this is implied by 'Create').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the verb, no redundant phrases. The enumeration of signal types is efficient and necessary for clarity. Zero structural waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (webhook delivery, nested filter objects, 5 distinct subscription modes) and lack of output schema, the description is minimal. It omits the recurring delivery mechanism and lifecycle expectations that would help an agent understand the long-running nature of the resource being created.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage including extensive filter documentation per subscription type, the schema carries the semantic weight. The description adds human-readable mappings for the 'type' enum values but doesn't elaborate on other parameters like webhookSecret or intervalSignalLimit beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Create) and resource (market signal subscription), and enumerates the five specific monitoring types (job posts, LinkedIn posts, fundraising, investments, IPOs) that map to the 'type' enum values. This distinguishes it from sibling tools like company_signals-create, though it doesn't explicitly contrast with signal_subscriptions-create.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like signal_subscriptions-create or company_signals-create. It doesn't mention prerequisites (e.g., webhook endpoint readiness) or when to prefer update_subscription versus creating a new one.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
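The assessment above names several fields from the schema discussion (`type`, `webhookUrl`, `webhookSecret`, `intervalSignalLimit`). A hedged sketch of assembling a request body from them; the enum spellings and field shapes here are assumptions inferred from the five listed signal types, not confirmed values:

```python
# Hypothetical request-body builder for market_signals-create_subscription.
# Field names come from the schema discussion above; the exact enum values
# and defaults are assumptions for illustration.

VALID_TYPES = {"JOB_POSTS", "LINKEDIN_POSTS", "FUNDRAISING", "INVESTMENTS", "IPOS"}

def build_subscription_body(signal_type: str, webhook_url: str,
                            webhook_secret: str, limit: int = 20) -> dict:
    """Assemble a subscription body for one of the five signal types."""
    if signal_type not in VALID_TYPES:
        raise ValueError(f"unknown signal type: {signal_type}")
    return {
        "type": signal_type,
        "webhookUrl": webhook_url,
        "webhookSecret": webhook_secret,
        "intervalSignalLimit": limit,
    }

body = build_subscription_body("JOB_POSTS", "https://example.com/hook", "s3cret")
print(body["type"])
```

Because the tool is non-idempotent, an agent repeating this call with the same body would create duplicate subscriptions, which is exactly the behavior the description fails to flag.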
market_signals-delete_subscription (C) Destructive, Idempotent
Delete a market signal subscription
| Name | Required | Description | Default |
|---|---|---|---|
| subscriptionId | Yes | The unique identifier of the subscription to delete |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds no behavioral context beyond what annotations provide. It fails to explain idempotency implications (safe to retry), what happens to historical signal data after deletion, or the irreversible nature of the operation despite destructiveHint being true.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at five words with no redundancy. The action is front-loaded. However, the brevity comes at the cost of missing contextual guidance that would help an agent select this over similar tools.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter deletion tool with good safety annotations, but incomplete given the rich sibling ecosystem. Lacks explanation of side effects (cascading deletions, webhook cleanup) and differentiation from pausing or other subscription deletion endpoints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single subscriptionId parameter, the schema carries the full load. The description adds no supplementary context about where to obtain the UUID or validation edge cases, warranting the baseline score for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb ('Delete') and specific resource ('market signal subscription'), accurately reflecting the tool name prefix. However, it does not distinguish from similar deletion siblings like 'signal_subscriptions-delete' or explain the domain-specific meaning of 'market signal' vs other signal types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use delete versus the available 'pause' and 'resume' siblings, nor any mention of prerequisites (e.g., subscription status) or irreversibility warnings. The agent must infer usage purely from the verb.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
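The idempotency implication the description omits (safe to retry) can be shown concretely. This is a local simulation, not the real backend: an idempotent delete converges to the same end state no matter how many times it is retried.

```python
# Hypothetical illustration of why idempotentHint=true matters for deletes:
# retrying after a transient failure is harmless. The in-memory dict
# stands in for the real subscription store.

subscriptions = {"sub-1": "ACTIVE", "sub-2": "ACTIVE"}

def delete_subscription(subscription_id: str) -> bool:
    """Delete if present; repeating the call is a no-op on the same state."""
    return subscriptions.pop(subscription_id, None) is not None

print(delete_subscription("sub-1"))  # first call removes the subscription
print(delete_subscription("sub-1"))  # retry finds nothing, state unchanged
```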
market_signals-get_subscription (B) Read-only, Idempotent
Get a market signal subscription by ID
| Name | Required | Description | Default |
|---|---|---|---|
| subscriptionId | Yes | The unique identifier of the subscription |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds nothing about error behavior (e.g., not-found scenarios), return value structure, or side effects beyond the annotations. With rich annotations provided, this minimal addition earns a low score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence front-loaded with action verb. No filler words or redundant phrases. Appropriate length for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Minimum viable for a simple retrieval tool with one parameter and rich annotations. However, lacking an output schema, the description omits what fields/properties the returned subscription contains and doesn't mention error cases (invalid UUID, subscription not found).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed UUID pattern and description. The description mentions 'by ID' but adds no semantic depth regarding the subscriptionId format or constraints beyond what the schema already documents. Baseline score appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States clear verb (Get) and resource (market signal subscription) with scope (by ID). However, it fails to differentiate from sibling tool `signal_subscriptions-get`, which has an ambiguously similar name/purpose but exists in a different namespace.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this single-item retrieval versus `market_signals-list_subscriptions`, nor when to prefer this over `signal_subscriptions-get`. No prerequisites or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market_signals-list_signals (B) Read-only, Idempotent
List signals delivered by a market signal subscription
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of signals to return (1-100, default 20) | |
| offset | No | Number of signals to skip for pagination (default 0) | |
| subscriptionId | Yes | The unique identifier of the subscription |
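The limit/offset pagination documented in the table (limit 1-100, default 20; offset default 0) can be sketched as follows. `fetch_signals` simulates the tool against an in-memory list purely for illustration; the real call goes over MCP and its clamping behavior is an assumption.

```python
# Hypothetical sketch of limit/offset pagination as the schema describes it.
# The in-memory list and the clamp-to-range behavior are assumptions.

ALL_SIGNALS = [f"signal-{i}" for i in range(45)]

def fetch_signals(subscription_id: str, limit: int = 20, offset: int = 0) -> list:
    """Return one page of signals, mimicking the documented defaults."""
    limit = max(1, min(limit, 100))  # clamp to the documented 1-100 range
    return ALL_SIGNALS[offset:offset + limit]

def fetch_all(subscription_id: str, page_size: int = 20) -> list:
    """Walk pages until a short page marks the end of the collection."""
    results, offset = [], 0
    while True:
        page = fetch_signals(subscription_id, limit=page_size, offset=offset)
        results.extend(page)
        if len(page) < page_size:
            return results
        offset += page_size
```

This is the pagination strategy the description leaves unstated: an agent must loop until it receives a short (or empty) page.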
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare read-only, idempotent, and non-destructive properties, so the description correctly avoids repeating safety characteristics. It adds minimal behavioral context beyond the annotations, merely stating signals are 'delivered by' a subscription without describing return format, chronological ordering, or pagination behavior. The description does not contradict the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single eight-word sentence with no redundant phrasing or tautology. It efficiently conveys the core operation but may be overly terse given the tool's place in a multi-step workflow requiring a subscription ID from a prior call. The information density is high, though additional context would improve utility without significantly sacrificing brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the comprehensive input schema (100% coverage) and detailed annotations, the description provides sufficient context for basic invocation. However, it lacks explanation of the output structure (which has no schema), the expected workflow sequence (dependency on subscription creation/listing), and the semantic nature of the signals returned. For a tool with three parameters and pagination capabilities, the description meets minimum viability but leaves gaps in domain context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for all three parameters (subscriptionId, limit, offset), clearly documenting types, constraints, and purposes. The description does not add additional semantic context—such as explaining the pagination pattern or that subscriptionId must come from a prior `list_subscriptions` call—beyond what the schema already provides. Baseline score applies since the schema carries the full burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'List signals delivered by a market signal subscription' clearly identifies the verb (List) and resource (signals). It implicitly distinguishes from siblings like `market_signals-list_subscriptions` (which lists subscription configurations, not signal data) by specifying signals 'delivered by' a subscription. However, it does not explicitly clarify when to use this versus `company_signals-list` or `contacts-list_signals`.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to select this tool over alternatives such as `market_signals-list_subscriptions` (for subscription metadata) or `company_signals-list` (for company-specific signals). It omits workflow prerequisites, such as needing to first obtain a subscription ID via `market_signals-list_subscriptions`, and mentions no exclusion criteria or error conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market_signals-list_subscriptions (B) Read-only, Idempotent
List all market signal subscriptions for your organization
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of subscriptions to return (1-100, default 20) | |
| offset | No | Number of subscriptions to skip for pagination (default 0) | |
| includeDeleted | No | Include soft-deleted subscriptions in the response |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds the 'organization' scope constraint, but doesn't explain pagination behavior, soft-deletion semantics, or what constitutes a 'market signal' versus other signal types.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence of eight words with no redundancy. Purpose is front-loaded and immediately clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple list operation with good annotations and schema coverage, but incomplete due to missing differentiation from similar subscription-listing tools and lack of guidance on handling paginated results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the parameters are fully documented in the schema. The description mentions none of them, which is acceptable given the high schema coverage, meeting the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (List) and resource (market signal subscriptions) with organizational scope. However, it fails to distinguish from the sibling tool 'signal_subscriptions-list', leaving ambiguity about which subscription type to query.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus 'signal_subscriptions-list' or other market_signals tools. No mention of pagination strategy for large result sets or when to set includeDeleted=true.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market_signals-pause_subscription (C)
Pause a market signal subscription
| Name | Required | Description | Default |
|---|---|---|---|
| subscriptionId | Yes | The unique identifier of the subscription to pause |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations indicate the operation is non-idempotent (idempotentHint: false) and non-destructive, the description adds no context about what happens on subsequent calls, what 'paused' state implies for data flow/billing, or the implications of openWorldHint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief (5 words) and front-loaded, but functions as a tautology restating the tool name. It wastes no words, yet fails to earn its place by adding actionable value beyond labeling.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex sibling landscape with overlapping functionality (e.g., 'subscription_actions-pause', 'market_signals-resume_subscription') and a mutation with non-idempotent behavior, the description is insufficient. It lacks explanation of side effects, return values, or state transitions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameter 'subscriptionId' is fully documented in the schema itself. The description adds no additional semantic information (e.g., format details, where to obtain the ID), meriting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a clear verb ('Pause') and resource ('market signal subscription'), accurately reflecting the tool's function. However, it fails to distinguish from the similar sibling tool 'subscription_actions-pause', which could confuse tool selection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'subscription_actions-pause' or 'signal_subscriptions-stop', nor does it mention prerequisites such as whether the subscription must be active to pause.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market_signals-resume_subscription (A)
Resume a paused market signal subscription
| Name | Required | Description | Default |
|---|---|---|---|
| subscriptionId | Yes | The unique identifier of the subscription to resume |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a non-read-only, non-idempotent mutation (readOnlyHint: false, idempotentHint: false). The description adds context that this is a state transition from paused to active, but does not elaborate on side effects, failure modes if the subscription isn't paused, or billing implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is front-loaded with the action verb, contains zero redundancy, and is appropriately sized for a simple single-parameter state transition tool. No extraneous information is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (1 parameter, 100% schema coverage, clear annotations), the description adequately covers the tool's purpose. It could be improved by explicitly stating the paused-state requirement or referencing the pause sibling tool, but it is sufficient for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'subscriptionId' parameter, the schema fully documents the input requirements. The description does not add additional parameter semantics (e.g., where to find the ID), warranting the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Resume' with the resource 'paused market signal subscription', clearly indicating this tool reactivates subscriptions. It effectively distinguishes from the sibling 'market_signals-pause_subscription' by specifying the target state (paused) and action (resume).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies a precondition by stating 'paused' subscription, suggesting it should only be used on subscriptions in that state. However, it lacks explicit guidance on when not to use it (e.g., on active subscriptions) or explicit reference to the 'pause_subscription' sibling as the inverse operation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
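The pause/resume precondition the descriptions imply can be modeled as a small state machine: resume is only meaningful on a paused subscription, and pause only on an active one. The states and transition rules below are assumptions for illustration, not documented server behavior.

```python
# Hypothetical state machine for the pause/resume lifecycle. States and
# rejection rules are assumed, not documented.

class Subscription:
    def __init__(self):
        self.state = "ACTIVE"

    def pause(self):
        if self.state != "ACTIVE":
            raise ValueError(f"cannot pause from state {self.state}")
        self.state = "PAUSED"

    def resume(self):
        if self.state != "PAUSED":
            raise ValueError(f"cannot resume from state {self.state}")
        self.state = "ACTIVE"

sub = Subscription()
sub.pause()
sub.resume()
print(sub.state)  # back to ACTIVE after a pause/resume round trip
```

Spelling out this state machine in the tool descriptions would answer the open questions above about calling resume on an already-active subscription.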
market_signals-trigger_subscription (B)
Trigger an immediate run of a market signal subscription
| Name | Required | Description | Default |
|---|---|---|---|
| subscriptionId | Yes | The unique identifier of the subscription to trigger |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds 'immediate run' context not present in annotations, clarifying the action's effect. However, fails to explain implications of idempotentHint=false (multiple calls = multiple runs) or openWorldHint=true (external side effects), which are critical for an action tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with strong verb-first structure and zero redundancy. Appropriately brief for a simple single-parameter tool, though slightly too terse given the complex sibling ecosystem and behavioral nuances.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks return value description (no output schema exists) and omits execution model details (sync/async, job status). Given the non-idempotent nature and similarity to other trigger tools, the description is insufficient for confident invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with complete parameter description. The tool description doesn't add validation guidance, examples, or semantic constraints beyond what the schema already provides, meeting baseline expectations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Trigger' and resource 'market signal subscription' to distinguish from CRUD siblings (create/delete/update). However, fails to differentiate from the similarly-named 'signal_subscriptions-trigger' tool, which could cause selection confusion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus 'signal_subscriptions-trigger' or 'subscription_actions-create'. Missing prerequisites (e.g., subscription state requirements) and excludes mention of idempotency implications.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
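The consequence of idempotentHint=false noted above (multiple calls = multiple runs) suggests a client-side guard. Both functions below are hypothetical simulations: `trigger_run` plays the server, and `trigger_once` shows the dedup pattern an agent might add.

```python
# Hypothetical guard around a non-idempotent trigger: every raw call
# starts another run, so the client dedups. The run counter simulates
# the server side.

run_counts: dict = {}

def trigger_run(subscription_id: str) -> int:
    """Simulated server: each call starts one more run."""
    run_counts[subscription_id] = run_counts.get(subscription_id, 0) + 1
    return run_counts[subscription_id]

_triggered: set = set()

def trigger_once(subscription_id: str) -> int:
    """Client-side guard: only the first call per subscription triggers."""
    if subscription_id not in _triggered:
        _triggered.add(subscription_id)
        trigger_run(subscription_id)
    return run_counts[subscription_id]

trigger_once("sub-1")
trigger_once("sub-1")  # guarded retry does not start a second run
print(run_counts["sub-1"])
```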
market_signals-update_subscription (Grade C)
Update a market signal subscription
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body | |
| subscriptionId | Yes | The unique identifier of the subscription to update |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds no behavioral context beyond what annotations already provide. While annotations indicate non-idempotency and write access, the description does not clarify partial update semantics (whether omitted fields are preserved or nulled), error handling for invalid subscription IDs, or side effects of updating webhooks.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief (four words) with no structural organization. While it contains no filler or redundancy, it is underspecified for a tool with complex nested parameters and multiple configurable fields, falling short of being 'appropriately sized' for the operation's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex nested input schema (7+ configurable fields within __requestBody) and lack of output schema, the description is insufficient. It fails to summarize updatable fields, mention return values, or explain the interaction between filters and prompts (noted in schema as JOB_POSTS only).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents all parameters including the nested __requestBody properties (name, prompt, filters, etc.). The description adds no parameter-specific guidance, meeting the baseline score of 3 for high-coverage schemas without additional elaboration.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Update a market signal subscription' is a tautology that merely restates the tool name (market_signals-update_subscription) in sentence form. It fails to distinguish from siblings like signal_subscriptions-update or market_signals-pause_subscription, and does not specify what aspects of the subscription can be modified.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as signal_subscriptions-update, market_signals-pause_subscription, or market_signals-resume_subscription. There are no stated prerequisites (e.g., requiring the subscriptionId) or conditions for use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
organisation-get (Grade B, Read-only, Idempotent)
Get your organisation profile
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations comprehensively cover safety (readOnly, non-destructive, idempotent), but the description adds no supplementary behavioral context such as caching policies, rate limits, or the structure/content of the returned profile.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient at four words. Front-loaded with the action verb, zero redundancy, and appropriately sized for a no-parameter getter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for tool selection given the simple read-only nature, but lacks description of return values (particularly since no output schema is provided) which would help the agent understand what data is available in the profile.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains zero parameters, establishing a baseline score of 4. The description appropriately reflects the absence of required inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('organisation profile'), distinguishing it from the sibling 'organisation-update'. However, it lacks specificity about what constitutes an 'organisation profile' (e.g., fields included).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus alternatives, or prerequisites for invocation. While it is implied to be the read counterpart to 'organisation-update', this relationship is not explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
organisation-update (Grade C, Idempotent)
Update your organisation profile
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover the basic safety profile (readOnly=false, destructive=false, idempotent=true), but the description adds no behavioral context about partial vs full updates, what happens to unspecified fields, or the response format. It mentions 'profile' without defining scope.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise at four words with no waste. However, the brevity limits information density—it's too minimal to fully earn its place as the primary documentation for this tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the nested object complexity (description.general, description.products, etc.) and lack of output schema, the description is insufficient. It fails to explain return values, partial update behavior, or provide examples of valid payload structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description does not add meaningful context beyond the schema (e.g., explaining the nested description object fields or the __requestBody wrapper requirement), but meets the minimum threshold.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the verb (update) and resource (organisation profile), but is overly generic. It doesn't specify which profile fields can be updated (name, website, description) or distinguish from sibling 'organisation-get' beyond the operation type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives, prerequisites (e.g., authentication requirements), or when not to use it. The existence of sibling 'organisation-get' is not acknowledged.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
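The review above notes that the schema nests profile fields inside a `__requestBody` wrapper (with a nested description object containing fields such as general and products, and top-level fields such as name and website). A minimal Python sketch of assembling that wrapper, sending only the fields the caller actually set; the exact field names are taken from the review's own mentions and should be treated as assumptions, not the confirmed schema:

```python
def build_org_update_body(name=None, website=None, general=None, products=None):
    """Assemble the __requestBody wrapper, including only fields the caller set."""
    body = {}
    if name is not None:
        body["name"] = name
    if website is not None:
        body["website"] = website
    # The review mentions nested fields like description.general and
    # description.products; build the nested object only if needed.
    description = {k: v for k, v in
                   {"general": general, "products": products}.items() if v is not None}
    if description:
        body["description"] = description
    return {"__requestBody": body}
```

Because the description never states whether omitted fields are preserved or nulled, a cautious agent would still fetch the current profile with organisation-get before sending a sparse body like this.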
signal_subscriptions-create (Grade A)
Create a signal subscription — schedules recurring signal execution for a template
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds 'recurring' which clarifies the scheduling nature beyond annotations. However, given annotations already disclose readOnly/destructive hints, the description should further explain the inline vs. template reference behavior and side effects (e.g., immediate first run or not), which it omits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the action verb. The em-dash efficiently appends the behavioral clarification without redundancy. No extraneous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for the basic template-reference use case, but incomplete given the tool's complexity. The description omits the inline creation capability (using description/qualificationCriteria without signalTemplateId), which is a significant functional gap given the nested object structure and multiple enum options in the schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description provides conceptual framing ('schedules... for a template') that helps understand the relationship between listId, signalTemplateId, and frequency parameters, but does not specify parameter syntax, formats, or inline creation requirements beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Create') with clear resource ('signal subscription') and scope ('schedules recurring signal execution for a template'). The phrase 'for a template' distinguishes it from signal_templates-create, while 'recurring' distinguishes it from signal_subscriptions-trigger (one-time execution).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this versus signal_subscriptions-update or signal_templates-create. Critically, it fails to mention that the tool supports two mutually exclusive modes: referencing an existing signalTemplateId OR inline creation via description/qualificationCriteria, which is essential for correct invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
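The two mutually exclusive creation modes called out above (referencing an existing signalTemplateId, or inline creation via description/qualificationCriteria) can be enforced client-side before the call is ever made. A sketch assuming only the field names this review itself mentions (listId, frequency, signalTemplateId, description, qualificationCriteria); the real schema may differ:

```python
def build_create_payload(list_id, frequency, *, signal_template_id=None,
                         description=None, qualification_criteria=None):
    """Build a request body using exactly one mode: template reference or inline."""
    inline = description is not None or qualification_criteria is not None
    # Exactly one of the two modes must be chosen: reject both-or-neither.
    if (signal_template_id is not None) == inline:
        raise ValueError("provide either signalTemplateId or inline "
                         "description/qualificationCriteria, not both or neither")
    body = {"listId": list_id, "frequency": frequency}
    if signal_template_id is not None:
        body["signalTemplateId"] = signal_template_id
    else:
        body["description"] = description
        body["qualificationCriteria"] = qualification_criteria
    return body
```

A guard like this is exactly the constraint the tool description should have stated; encoding it in the agent compensates for the documentation gap.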
signal_subscriptions-get (Grade B, Read-only, Idempotent)
Get a signal subscription by ID
| Name | Required | Description | Default |
|---|---|---|---|
| subscriptionId | Yes | The unique identifier of the subscription |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description is consistent with annotations (read-only, non-destructive, idempotent) but adds no behavioral context beyond them. It does not disclose what happens when the subscriptionId is not found, whether the data is cached, or what fields are included in the response, which is notable given the lack of an output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
At six words with the verb front-loaded ('Get a signal subscription by ID'), the description is maximally efficient with zero redundancy. Every word serves to identify the operation and resource scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool with comprehensive safety annotations, the description meets minimum viability but lacks completeness given the absence of an output schema. It should ideally indicate that it returns subscription details or metadata, and mention 404-like behavior for missing IDs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage already defining subscriptionId as 'The unique identifier of the subscription,' the description adds minimal semantic value by mentioning 'by ID.' It does not provide usage examples, validation details beyond the UUID pattern, or clarify the relationship between this ID and other ID types in the sibling tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get'), resource ('signal subscription'), and lookup method ('by ID'), distinguishing it from sibling operations like signal_subscriptions-list, -create, or -update. However, it does not clarify how this differs from the similar market_signals-get_subscription tool or define what constitutes a 'signal subscription' in this domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as signal_subscriptions-list (for browsing multiple subscriptions) or market_signals-get_subscription. It omits prerequisites, error conditions (e.g., invalid UUID), or recommended usage patterns.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
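Since the schema reportedly constrains subscriptionId with a UUID pattern, an agent can reject malformed IDs before spending a round trip on the call. A minimal Python sketch, assuming only that the parameter must be a canonical UUID string; the helper name is hypothetical:

```python
import uuid

def validate_subscription_id(subscription_id):
    """Reject malformed IDs client-side before invoking signal_subscriptions-get."""
    try:
        # uuid.UUID accepts several input forms; round-tripping through str()
        # normalizes the value to the canonical hyphenated lowercase form.
        return str(uuid.UUID(subscription_id))
    except (ValueError, AttributeError, TypeError) as exc:
        raise ValueError(f"not a valid subscriptionId: {subscription_id!r}") from exc
```

This kind of pre-validation is cheap insurance precisely because the description never documents the server's behavior for an invalid or unknown ID.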
signal_subscriptions-list (Grade B, Read-only, Idempotent)
List all signal subscriptions for your organization
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of subscriptions to return | |
| offset | No | Number of subscriptions to skip for pagination |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering the safety profile. The description adds the organizational scope boundary ('your organization') and implies full enumeration via 'all', but does not disclose pagination limits, rate limiting, or the nature of 'signal subscriptions'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero redundancy. The essential information (action + resource + scope) is front-loaded and immediately comprehensible.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple pagination-based list operation with robust annotations, the description is minimally adequate. However, it omits the return value structure (no output schema exists) and does not explain the pagination behavior implied by the presence of limit/offset parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input parameters (limit, offset) are fully documented in the JSON schema. The description adds no additional parameter semantics, syntax guidance, or usage examples, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (List), resource (signal subscriptions), and scope (your organization). However, it does not explicitly differentiate from the similar sibling `market_signals-list_subscriptions` or clarify when to use this versus `signal_subscriptions-get` for single-record retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like `signal_subscriptions-get` (single record) or `market_signals-list_subscriptions`. It lacks prerequisites, filtering capabilities, or pagination strategy guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
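The limit/offset pair implies classic offset pagination, though the description never says so. A sketch of how an agent might drain the full listing, with fetch_page standing in for the actual signal_subscriptions-list call; both that indirection and the assumption that a short page marks the end of the collection are hypotheses, not documented behavior:

```python
def list_all_subscriptions(fetch_page, page_size=50):
    """Drain an offset-paginated listing; fetch_page(limit, offset) returns a list of rows."""
    rows, offset = [], 0
    while True:
        page = fetch_page(limit=page_size, offset=offset)
        rows.extend(page)
        if len(page) < page_size:  # short (or empty) page: nothing left to fetch
            return rows
        offset += page_size
```

If the server caps limit below the requested page_size, this loop would terminate early; that is exactly the kind of constraint the description should disclose.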
signal_subscriptions-start (Grade C)
Start a signal subscription
| Name | Required | Description | Default |
|---|---|---|---|
| subscriptionId | Yes | The unique identifier of the subscription to start |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare idempotentHint=false and openWorldHint=true, the description adds no context about what behavioral consequences occur when starting a subscription (e.g., activating webhooks, initiating real-time data flow, billing implications). It does not explain the effect of calling start on an already-active subscription.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief at four words, which prevents waste, but the conciseness manifests as under-specification rather than efficiency: the single sentence conveys no information beyond the tool name itself.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of multiple lifecycle siblings (create, stop, trigger, update) and state-changing annotations (idempotent=false), the description is incomplete. It fails to explain the subscription lifecycle state machine, side effects, or what 'starting' entails operationally, leaving critical gaps for an agent selecting between similar tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (the subscriptionId parameter is fully documented in the schema with format, pattern, and description), the description meets the baseline expectation. However, the description text itself adds no parameter context, syntax guidance, or examples beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Start a signal subscription' is a tautology that restates the tool name (signal_subscriptions-start → 'Start a signal subscription'). It identifies the verb and resource but fails to distinguish this operation from siblings like signal_subscriptions-create, signal_subscriptions-stop, or market_signals-resume_subscription.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It does not clarify whether 'start' applies to newly created subscriptions, stopped ones, or if it differs from 'resume' operations found in sibling tools. No prerequisites or state requirements are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
signal_subscriptions-stop (Grade C)
Stop a signal subscription
| Name | Required | Description | Default |
|---|---|---|---|
| subscriptionId | Yes | The unique identifier of the subscription to stop |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations indicate the operation is non-destructive, non-idempotent, and modifies state (readOnlyHint: false), the description adds no context about what happens during stopping—such as whether the subscription can be restarted, if billing ceases immediately, or how this differs from deletion. It relies entirely on annotations for behavioral cues.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of a single four-word sentence. While technically concise, it fails to front-load critical distinctions (stop vs. pause) or behavioral context, so the brevity is efficient but carries too little information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of numerous sibling tools with overlapping functionality (pause, delete, start, trigger), the description is incomplete. It fails to clarify the stop operation's reversibility, relationship to the 'start' sibling, or side effects, which are essential for correct agent selection among similar tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the subscriptionId parameter is fully documented in the schema. The description adds no additional semantics, examples, or format guidance beyond the schema's UUID pattern and description, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Stop a signal subscription' essentially restates the tool name (signal_subscriptions-stop) without providing differentiation from siblings like signal_subscriptions-pause, market_signals-delete_subscription, or signal_subscriptions-start. It identifies the verb and resource but fails to clarify scope or distinguish between termination and pausing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use 'stop' versus alternatives like 'pause' (temporary halt) or 'delete' (permanent removal), nor are prerequisites mentioned (e.g., subscription must be active). The description lacks explicit when-to-use or when-not-to-use criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
signal_subscriptions-trigger (Grade B)
Trigger an immediate run of a signal subscription
| Name | Required | Description | Default |
|---|---|---|---|
| subscriptionId | Yes | The unique identifier of the subscription to trigger |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-read-only, non-idempotent, non-destructive action with open-world effects. The description adds the temporal context ('immediate') but fails to disclose what the 'run' actually does (e.g., whether it fetches data, generates signals, or runs asynchronously) or any side effects beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of a single, efficient sentence with zero wasted words. It is appropriately front-loaded with the action verb and maintains focus on the core functionality without unnecessary verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (single UUID parameter) and absence of an output schema, the description is minimally adequate. However, for a tool in a complex domain with many siblings, it lacks context about what the triggered run produces or whether the operation is synchronous.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input parameter 'subscriptionId' is fully documented in the schema itself ('The unique identifier of the subscription to trigger'). The description adds no additional parameter semantics, syntax, or format details beyond what the schema provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Trigger') and resource ('signal subscription') and clarifies the scope with 'immediate run'. However, it does not explicitly differentiate from similar lifecycle siblings like 'signal_subscriptions-start' (which likely activates the subscription) or 'market_signals-trigger_subscription'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not indicate that this is for immediate/one-time execution versus scheduled runs, nor does it clarify the distinction between 'triggering' (running) and 'starting' (activating) a subscription.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
signal_subscriptions-update (Grade C, Idempotent)
Update a signal subscription
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body | |
| subscriptionId | Yes | The unique identifier of the subscription to update |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate idempotentHint=true and destructiveHint=false, but the description adds no behavioral context beyond these hints. It does not explain what happens when fields are updated (e.g., whether the subscription restarts, if changes are immediate, or what openWorldHint implies for this specific tool).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
At four words, the description is technically brief, but this represents under-specification rather than effective conciseness. No information is front-loaded because no substantive information is present beyond the tool name restatement.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex configuration tool with 10+ nested parameters (frequency, answerType, cronExpression, qualificationCriteria, etc.) and no output schema, the description is inadequate. While annotations cover basic safety properties (idempotent, non-destructive), the description omits domain context, update scope, and operational implications.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with detailed descriptions for fields like name, question, timezone, and outputSchema. The description adds no parameter-specific guidance, but the baseline score of 3 is appropriate given the schema's completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Update a signal subscription' is essentially a tautology that restates the tool name with spaces added. While it confirms the resource type (signal subscription) and action (update), it fails to distinguish this tool from siblings like signal_subscriptions-create, signal_subscriptions-start, or signal_subscriptions-trigger, or explain what constitutes a signal subscription.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives (e.g., create vs update, or whether to stop/start before updating). No mention of prerequisites like needing an existing subscription ID, or whether this performs a partial patch or full replacement of the configuration.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
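The assessment above faults the four-word description for omitting update scope, versioning behavior, and sibling guidance. A minimal sketch of what a fuller definition could look like, in MCP tool-definition shape; the description wording, the partial-update claim, and the annotation values are illustrative assumptions, not the vendor's actual behavior:

```python
# Hypothetical fuller definition for signal_subscriptions-update.
# All wording and behavioral claims here are illustrative assumptions.
improved_tool = {
    "name": "signal_subscriptions-update",
    "description": (
        "Update an existing signal subscription by ID. Performs a partial "
        "update: omitted fields keep their current values. Changes take "
        "effect on the next scheduled run; the subscription does not "
        "restart. Use signal_subscriptions-create for new subscriptions."
    ),
    "annotations": {
        "readOnlyHint": False,
        "destructiveHint": False,
        "idempotentHint": True,
        "openWorldHint": True,
    },
}

def description_covers(tool: dict, keywords: list) -> bool:
    """Check that a description mentions each behavioral keyword."""
    text = tool["description"].lower()
    return all(k.lower() in text for k in keywords)

# The sketch addresses the gaps the rubric flags: update scope,
# timing, and a sibling-tool pointer.
print(description_covers(improved_tool, ["partial", "omitted fields", "create"]))
```

The point is not this exact wording but that each rubric gap (scope, timing, alternatives) maps to one clause an agent can act on.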
signal_summaries-generate (Grade B)
Generate an AI summary consolidating insights from all completed company signals for a domain
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds domain context by specifying the data source ('completed company signals') and the nature of the output ('AI summary'). However, it fails to address behavioral traits implied by annotations, such as the idempotentHint: false (meaning multiple calls create distinct summaries) or openWorldHint: true, and does not clarify whether the summary is returned immediately or stored for later retrieval.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the action verb. It contains no redundant or wasted words. However, given the complexity of the operation (AI generation with side effects), the extreme brevity slightly limits completeness, preventing a perfect score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description adequately covers the input requirement but remains silent on output behavior, which is notable given the lack of an output schema. Given that annotations indicate this is a non-idempotent write operation (idempotentHint: false), the description should clarify whether the generated summary is returned in the response or stored for later access via the list sibling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline score applies. The description mentions 'for a domain' which aligns with the single 'domain' parameter, but adds no additional semantic context, validation guidance, or format details beyond what the schema's regex pattern and example already provide.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Generate') and clearly identifies the resource ('AI summary consolidating insights from all completed company signals') and scope ('for a domain'). It implicitly distinguishes from the sibling 'signal_summaries-list' through the generative verb, though it does not explicitly contrast the two tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus the sibling 'signal_summaries-list', nor does it mention prerequisites (e.g., requiring completed company signals to exist) or when not to use the tool. It lacks explicit 'when-to-use' context entirely.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
signal_summaries-list (Grade B) · Read-only · Idempotent
List all AI-generated signal summaries for a domain, ordered by creation date (latest first)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results per page | |
| domain | Yes | Filter summaries by company domain (e.g., "acme.com") | |
| offset | No | Number of results to skip for pagination |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety (readOnlyHint, destructiveHint) and reliability (idempotentHint). The description adds valuable behavioral context regarding result ordering ('ordered by creation date (latest first)'), but does not elaborate on pagination behavior, error cases, or the content structure of the summaries.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no redundant words. Key information (action, resource, filter scope, and sort order) is front-loaded and immediately clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple list operation, good annotations covering the safety profile, and lack of output schema, the description adequately covers the essential behavior. It appropriately omits return value details that would belong in an output schema, though it could briefly acknowledge pagination via limit/offset.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all parameters (domain, limit, offset). The description mentions 'for a domain' which aligns with the required parameter, but adds no additional semantic detail, examples, or format constraints beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (List), resource (AI-generated signal summaries), and scope (for a domain). It implicitly distinguishes from sibling 'signal_summaries-generate' by specifying the read operation, though it does not explicitly reference sibling alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'signal_summaries-generate' or 'company_signals-list', nor does it mention prerequisites or conditions where this tool should be avoided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
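The assessment notes the description never acknowledges pagination via limit/offset. A sketch of the loop an agent would need to infer, assuming standard offset pagination; `call_tool` is a hypothetical stand-in for an MCP client call and here simply pages through an in-memory list:

```python
# Offset/limit pagination against a list-style tool such as
# signal_summaries-list. The dataset and client are stand-ins.
SUMMARIES = [{"id": i, "domain": "acme.com"} for i in range(7)]

def call_tool(name, arguments):
    # Hypothetical MCP client call; slices a fixed dataset.
    offset = arguments.get("offset", 0)
    limit = arguments.get("limit", 3)
    return SUMMARIES[offset:offset + limit]

def list_all_summaries(domain, page_size=3):
    results, offset = [], 0
    while True:
        page = call_tool(
            "signal_summaries-list",
            {"domain": domain, "limit": page_size, "offset": offset},
        )
        results.extend(page)
        if len(page) < page_size:  # a short page means we've reached the end
            break
        offset += page_size
    return results

print(len(list_all_summaries("acme.com")))  # pages of 3 + 3 + 1 → 7
```

A description that states this loop shape (or the default page size) would spare agents from guessing the termination condition.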
signal_templates-create (Grade A)
Create a reusable signal template — templates define standard research questions that can be applied to many companies in batch
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (non-destructive write, non-idempotent). The description adds domain context explaining templates define 'standard research questions,' which helps an agent understand the abstraction. It doesn't address idempotency behavior, conflict resolution, or return values.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is efficiently front-loaded with the verb 'Create' and uses an em-dash to append definitional context without waste. Every word earns its place; no redundancy with schema or annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the nested object complexity and rich schema (7 fields including enums and JSON schemas), the description adequately explains the domain concept (templates as reusable research questions). Lacks return value description, but annotations and 100% schema coverage reduce the burden.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description provides conceptual framing (templates for research questions) but doesn't elaborate on specific parameters like qualificationCriteria, answerType enums, or the outputSchema requirement beyond what the schema documents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create') and resource ('reusable signal template'), and distinguishes from individual company signal tools by mentioning 'batch' application. However, it doesn't explicitly name sibling alternatives like company_signals-create to fully clarify selection criteria.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through 'reusable' and 'batch' keywords, suggesting when to use templates versus one-off signals. However, it lacks explicit when-not guidance or direct references to update/delete siblings for workflow context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
signal_templates-delete (Grade A) · Destructive · Idempotent
Soft-delete a signal template — marks it as deleted but preserves it for historical tracking
| Name | Required | Description | Default |
|---|---|---|---|
| templateId | Yes | The unique identifier of the template to delete |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds crucial behavioral context beyond the annotations: it clarifies that despite the destructiveHint=true annotation, this is a soft-delete that marks the resource as deleted while preserving data for historical tracking. This explains the actual state change behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. The em-dash construction front-loads the action ('Soft-delete a signal template') and follows with the essential behavioral clarification.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single UUID parameter) and rich annotations covering idempotency and destructiveness, the description adequately covers the critical domain-specific behavior (soft-delete semantics). Minor gap: no mention of return value structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('The unique identifier of the template to delete'), the schema fully documents the single templateId parameter. The description does not add parameter-specific semantics, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb phrase 'Soft-delete' combined with the resource 'signal template', clearly distinguishing this from sibling operations like create, get, list, and update. The 'soft-delete' modifier specifically differentiates this from a hard-delete alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by noting the operation 'preserves it for historical tracking', suggesting when to use this (when history retention is needed). However, it lacks explicit 'when-not-to-use' guidance or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
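The soft-delete semantics the description discloses ('marks it as deleted but preserves it for historical tracking') can be sketched as a state change rather than a removal. The store and field names below are illustrative assumptions, not the vendor's schema:

```python
# Sketch of soft-delete semantics: the record is flagged, not removed,
# so history survives and repeat calls converge on the same end state
# (consistent with the tool's idempotentHint). Field names are assumed.
templates = {"tpl-1": {"name": "Funding signal", "deleted": False}}

def soft_delete(template_id):
    tpl = templates.get(template_id)
    if tpl is None:
        raise KeyError(template_id)
    tpl["deleted"] = True  # marked deleted, but the row is preserved
    return tpl

soft_delete("tpl-1")
soft_delete("tpl-1")  # idempotent: second call leaves the same end state
print(templates["tpl-1"]["deleted"], "tpl-1" in templates)
```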
signal_templates-get (Grade B) · Read-only · Idempotent
Get a signal template by ID
| Name | Required | Description | Default |
|---|---|---|---|
| templateId | Yes | The unique identifier of the template |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only, idempotent, and non-destructive traits, lowering the bar. Description adds no behavioral context beyond the operation name (e.g., doesn't mention caching, rate limits, or 404 behavior), but doesn't contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely brief (six words) with no redundancy. However, it borders on under-specification: one more clause explaining the return value or distinguishing it from 'list' would improve utility without sacrificing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple single-parameter getter with rich annotations, but lacks description of the return payload (no output schema exists) and error handling scenarios that would help an agent interpret results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage with 'templateId' well-documented. Description mentions 'by ID' which loosely maps to the parameter, but adds no format guidance or examples beyond the schema's regex pattern.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Get' with resource 'signal template' and scope 'by ID'. Distinguishes from sibling 'signal_templates-list' by specifying ID-based retrieval, though it doesn't explain what a signal template conceptually represents.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus 'signal_templates-list' or other siblings. No mention of prerequisites (e.g., having the UUID) or error cases (e.g., template not found).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
signal_templates-list (Grade B) · Read-only · Idempotent
List all signal templates for your organization
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of templates to return | |
| offset | No | Number of templates to skip for pagination | |
| includeDeleted | No | Include deleted templates in the response |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish the operation as read-only, idempotent, and non-destructive. The description adds valuable organizational scope ('for your organization'), indicating tenancy boundaries. However, it fails to disclose behavioral specifics not covered by annotations, such as pagination patterns, default page sizes, or the soft-delete filtering behavior available via parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-sentence description is extremely concise at seven words, with no redundancy or filler. Every word contributes essential information (action, resource, scope). However, the brevity comes at the cost of omitting important operational context like pagination, keeping it from being maximally useful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a list operation with three optional filtering/pagination parameters and no output schema, the description meets minimum viability by identifying the resource and action. However, it lacks completeness regarding the tool's full capabilities—specifically pagination behavior and soft-delete querying—which are critical for effective agent utilization of list-style endpoints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for all three parameters (limit, offset, includeDeleted), the structured schema carries the full semantic load. The description text adds no parameter-specific context, syntax guidance, or usage examples, meeting the baseline expectation for high-coverage schemas without adding supplementary value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a clear verb ('List') and resource ('signal templates'), with organizational scope ('for your organization'). The phrase 'List all' implicitly distinguishes this bulk retrieval operation from the sibling 'signal_templates-get' (single retrieval), though it does not explicitly clarify this distinction or mention the soft-delete filtering capability.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'signal_templates-get' for single-template retrieval. It omits any mention of pagination strategy (offset/limit parameters) or when to enable 'includeDeleted' for soft-deleted templates, leaving usage context entirely to the agent's inference.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
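The assessment flags that the description never explains when to enable includeDeleted. A minimal sketch of how that flag likely interacts with soft-deleted rows; the data shape and `deleted` field are assumptions:

```python
# Assumed interaction between the includeDeleted parameter and
# soft-deleted templates on signal_templates-list.
templates = [
    {"id": "tpl-1", "deleted": False},
    {"id": "tpl-2", "deleted": True},  # soft-deleted via signal_templates-delete
]

def list_templates(include_deleted=False):
    return [t for t in templates if include_deleted or not t["deleted"]]

print(len(list_templates()), len(list_templates(include_deleted=True)))  # 1 2
```

One clause in the description ('pass includeDeleted to see soft-deleted templates') would make this behavior explicit instead of inferred.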
signal_templates-update (Grade A)
Update a signal template — creates a new version while preserving the template ID; omitted fields retain their previous values
| Name | Required | Description | Default |
|---|---|---|---|
| templateId | Yes | The unique identifier of the template to update | |
| __requestBody | Yes | Request body |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations indicate this is a non-read-only operation (readOnlyHint: false), the description adds valuable behavioral context not in annotations: specifically the versioning model ('creates a new version') and PATCH-like partial update semantics ('omitted fields retain previous values'). It does not mention idempotency or rate limits, but the versioning hint helps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, perfectly concise sentence with zero waste. It front-loads the action and immediately follows with the critical behavioral distinctions (versioning, partial updates). Every phrase earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex update operation with nested objects and no output schema, the description adequately covers the key behavioral quirks (versioning, partial updates). It leverages the comprehensive schema and annotations well, though it could explicitly note the non-idempotent nature given the idempotentHint: false annotation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds meaning beyond the schema by explaining the partial-update semantics for the __requestBody parameter ('omitted fields retain their previous values'), which clarifies how to populate the nested object—something the raw schema properties don't explicitly state.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the verb (Update) and resource (signal template), and crucially distinguishes this from signal_templates-create by clarifying it 'creates a new version while preserving the template ID'—making the scope and sibling differentiation crystal clear.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear behavioral context that implies usage (partial updates via 'omitted fields retain their previous values' and versioning semantics), but does not explicitly name sibling alternatives like signal_templates-create or state when NOT to use the tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
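The 'omitted fields retain their previous values' semantics praised above, combined with 'creates a new version while preserving the template ID', amounts to a merge plus a version bump. A sketch under those stated semantics; the field names are illustrative:

```python
# Partial-update semantics per the signal_templates-update description:
# omitted fields carry over, the templateId is preserved, and a new
# version is produced. Field names beyond that are assumptions.
def apply_update(current: dict, patch: dict) -> dict:
    new_version = {**current, **patch}         # omitted fields carry over
    new_version["version"] = current["version"] + 1
    return new_version                          # same templateId, new version

v1 = {"templateId": "tpl-1", "version": 1,
      "name": "Funding signal", "question": "Raised recently?"}
v2 = apply_update(v1, {"question": "Raised funding in the last 12 months?"})
print(v2["name"], v2["version"], v2["templateId"])
```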
subscription_actions-create (Grade C)
Create a subscription action
| Name | Required | Description | Default |
|---|---|---|---|
| __requestBody | Yes | Request body | |
| subscriptionId | Yes | The subscription to add the action to |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations indicate this is a non-destructive, non-idempotent write operation, the description adds no behavioral context about what happens upon invocation (e.g., that it configures webhook endpoints to receive notifications, or what the openWorldHint implies for this specific domain).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Though brief at four words, the description represents under-specification rather than efficient conciseness: the single sentence provides no actionable information beyond the tool name itself.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool configuring webhook destinations on subscriptions (evidenced by nested webhook objects and URI formats), the description fails to mention webhooks, notification behavior, or the relationship between actions and subscriptions. Inadequate for this complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for subscriptionId and webhook configuration. The description itself adds no parameter-specific guidance, meeting the baseline expectation when the schema is self-documenting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Create a subscription action' is tautological, merely restating the tool name in sentence form. It fails to specify what a subscription action actually does (configure webhooks) or distinguish this tool from siblings like subscription_actions-update or subscription_actions-pause.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives (e.g., when to create a new action vs. updating an existing one), nor any prerequisites such as requiring an existing subscription ID.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
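The assessment infers from the schema's 'nested webhook objects and URI formats' that this tool configures webhook destinations. A hypothetical request-body sketch under that inference; the exact field names (`type`, `webhook.url`) are assumptions, not the vendor's documented schema:

```python
# Hypothetical body for subscription_actions-create, assuming the
# nested-webhook shape the evaluation infers from the input schema.
import urllib.parse

action_body = {
    "type": "webhook",
    "webhook": {"url": "https://example.com/hooks/signals"},
}

def is_valid_webhook_url(body: dict) -> bool:
    # Basic sanity check matching a URI-format constraint.
    parsed = urllib.parse.urlparse(body["webhook"]["url"])
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

print(is_valid_webhook_url(action_body))  # True
```

If the description named the webhook payload outright, an agent would not need to reverse-engineer it from the schema this way.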
subscription_actions-delete (Grade C) · Destructive · Idempotent
Delete a subscription action
| Name | Required | Description | Default |
|---|---|---|---|
| actionId | Yes | The action ID to delete | |
| subscriptionId | Yes | The subscription that owns the action |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare destructiveHint=true, idempotentHint=true, and openWorldHint=true, the description adds no behavioral context beyond the bare verb 'Delete'. It does not clarify what 'openWorld' implies for this operation, whether deletion is recoverable, or what side effects occur on the parent subscription.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief (4 words) and front-loaded, but verges on under-specification rather than efficient conciseness. While no words are wasted, the extreme brevity fails to meet the structural needs of a destructive operation with sibling alternatives.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of sibling lifecycle tools (pause/unpause/update) and the destructive nature of the operation, the description is incomplete. It lacks explanation of the subscription/action hierarchy, recovery options, or return behavior (though no output schema exists, the operation's effects should be described).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('The action ID to delete', 'The subscription that owns the action'), the schema fully documents parameters. The description adds no parameter semantics, but the baseline score of 3 applies when schema coverage is high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Delete a subscription action' is tautological, merely converting the tool name 'subscription_actions-delete' into sentence form without adding specificity. It fails to distinguish from sibling tools like subscription_actions-pause or subscription_actions-unpause, leaving ambiguity about when permanent deletion is preferred over pausing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives (e.g., pause/unpause), nor any mention of prerequisites such as subscription state or ownership verification. The description offers no 'when-to-use' or 'when-not-to-use' context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
subscription_actions-get (Grade C) · Read-only · Idempotent
Get a subscription action by ID
| Name | Required | Description | Default |
|---|---|---|---|
| actionId | Yes | The action ID | |
| subscriptionId | Yes | The subscription that owns the action |
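As a hedged sketch of how an agent might invoke this tool, the following builds a JSON-RPC 2.0 envelope following the MCP `tools/call` convention. The tool name and argument keys come from the table above; the `sub_123`/`act_456` identifier values are placeholders.

```python
import json

def build_tool_call(name, arguments, request_id=1):
    """Assemble a JSON-RPC 2.0 envelope for an MCP tools/call request."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Both identifiers are required: the subscription owns the action,
# so the actionId alone is not enough to address the record.
payload = build_tool_call(
    "subscription_actions-get",
    {"subscriptionId": "sub_123", "actionId": "act_456"},
)
print(json.dumps(payload, indent=2))
```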
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds no context about what a 'subscription action' represents, what fields are returned, or any business logic constraints beyond the basic identifier.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single six-word sentence with no redundancy. However, given the absence of an output schema, the description may be excessively minimal rather than appropriately concise, as it provides no hint about return values or payload structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, yet the description does not explain what data is returned. It lacks domain context about what constitutes a subscription action and omits any mention of the parent-child relationship between subscriptions and actions that the parameter schema implies.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage ('The action ID', 'The subscription that owns the action'), establishing baseline adequacy. The description mentions 'by ID' implying the actionId parameter but does not add meaning for subscriptionId or explain the hierarchical relationship between the two identifiers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific verb ('Get') and resource ('subscription action'), with 'by ID' indicating single-record retrieval. However, it does not differentiate from sibling operations like subscription_actions-create or subscription_actions-update.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this retrieval tool versus subscription_actions-list or other alternatives. No mention of prerequisites for obtaining valid subscriptionId or actionId values.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
subscription_actions-list (C) · Read-only · Idempotent
List all actions for a subscription
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of actions to return | |
| offset | No | Number of actions to skip for pagination | |
| subscriptionId | Yes | The subscription whose actions to list |
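Since the schema exposes `limit` and `offset` but the description leaves pagination behavior undocumented, the sketch below assumes conventional offset-based paging. The tool name and argument keys are from the table; the default page size of 50 and the stop condition ("fewer results than `limit`") are assumptions, not documented behavior.

```python
def build_list_page(subscription_id, limit=50, offset=0):
    """Build one paginated subscription_actions-list call.

    limit and offset are optional per the schema; subscriptionId is required.
    """
    return {
        "jsonrpc": "2.0",
        "id": offset // limit + 1,
        "method": "tools/call",
        "params": {
            "name": "subscription_actions-list",
            "arguments": {
                "subscriptionId": subscription_id,
                "limit": limit,
                "offset": offset,
            },
        },
    }

# A client would presumably advance offset by limit until a page
# comes back with fewer than `limit` results.
first_page = build_list_page("sub_123")
second_page = build_list_page("sub_123", offset=50)
```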
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds minimal behavioral context beyond annotations—does not clarify pagination behavior (despite limit/offset parameters), sorting order, or what constitutes an 'action' in this domain.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at six words. No wasted sentences or redundancy. However, the brevity borders on underspecification—slightly more detail about the scope of 'actions' would improve utility without sacrificing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is the minimum viable for a standard CRUD list operation. The input schema is fully documented, compensating for the terse description. No output schema exists, but the description does not attempt to document return values or pagination cursor behavior, leaving a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, documenting limit, offset, and subscriptionId. The description implies the subscriptionId parameter ('for a subscription') but adds no syntax details, format examples, or semantic meaning beyond the schema definitions. Baseline score appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('List') and resource ('actions for a subscription'). Distinguishes from sibling tools like subscription_actions-get, -create, and -delete by specifying the list operation. However, lacks domain context about what 'actions' represent (e.g., scheduled tasks, webhooks).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus subscription_actions-get (which likely retrieves a single action) or other sibling tools. No mention of prerequisites like needing a valid subscriptionId first.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
subscription_actions-pause (C)
Pause a subscription action
| Name | Required | Description | Default |
|---|---|---|---|
| actionId | Yes | The action ID to pause | |
| subscriptionId | Yes | The subscription that owns the action |
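Because pause and its unpause sibling take identical required arguments and differ only in tool name, a client-side sketch can treat them as one parameterized call. The tool names and argument keys are from the listing; the ID values are placeholders.

```python
def build_state_change(tool, subscription_id, action_id):
    """Build a tools/call envelope for the pause/unpause pair.

    Both tools take the same two required arguments; only the tool
    name selects the direction of the state change.
    """
    if tool not in ("subscription_actions-pause", "subscription_actions-unpause"):
        raise ValueError(f"unsupported tool: {tool}")
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": tool,
            "arguments": {"subscriptionId": subscription_id, "actionId": action_id},
        },
    }

pause = build_state_change("subscription_actions-pause", "sub_123", "act_456")
resume = build_state_change("subscription_actions-unpause", "sub_123", "act_456")
```

Whether pausing an already-paused action errors or is a no-op is not documented, so a defensive client would check the action's state via subscription_actions-get first.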
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-idempotent write operation (readOnly=false, idempotent=false), but the description adds no context about what happens during pausing (immediate halt vs. graceful stop), whether the action can be resumed, or side effects. It relies entirely on annotations for behavioral safety profile.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four words and structurally compact. The problem is under-specification rather than inappropriate sizing: the sentence is front-loaded but carries too little content to earn its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the domain complexity (managing subscription lifecycle with hierarchical action ownership), the description is incomplete. It omits the relationship to the unpause sibling tool, doesn't explain the openWorldHint implications, and leaves the pause behavior undefined despite the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with both parameters documented ('The action ID to pause', 'The subscription that owns the action'). The description adds no additional semantic value beyond the schema, so it meets the baseline of 3 for well-documented schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Pause a subscription action' is essentially a tautology that restates the tool name. While it identifies the verb (pause) and resource (subscription action), it fails to distinguish from siblings like subscription_actions-unpause or market_signals-pause_subscription, and doesn't explain what 'pausing' means in this domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives such as subscription_actions-delete (permanent removal) or subscription_actions-unpause (resumption). No prerequisites or conditions mentioned, despite the hierarchical relationship between subscriptionId and actionId implied by the schema.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
subscription_actions-unpause (C)
Unpause a subscription action
| Name | Required | Description | Default |
|---|---|---|---|
| actionId | Yes | The action ID to unpause | |
| subscriptionId | Yes | The subscription that owns the action |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations indicate this is a non-read-only, non-destructive, non-idempotent operation, the description adds no context about what unpausing entails (immediate execution vs. rescheduling), side effects, or failure modes when targeting an already-unpaused action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
At four words, the description is technically concise, but it is under-specified rather than efficiently front-loaded. The single sentence merely echoes the tool name without conveying actionable information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a state-mutation tool with no output schema, the description is inadequate. It lacks explanation of the mutation's effects, success indicators, or interaction with the broader subscription lifecycle despite the presence of multiple related sibling tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input parameters are fully documented in the JSON schema (subscriptionId and actionId). The description adds no additional semantic information, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Unpause a subscription action' is a tautology that restates the tool name (subscription_actions-unpause). It fails to distinguish this tool from its sibling subscription_actions-pause or clarify what constitutes a 'subscription action' in this domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus subscription_actions-pause or other subscription management tools. No mention of prerequisites (e.g., whether the action must be in a paused state) or expected workflow sequences.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
subscription_actions-update (C) · Idempotent
Update a subscription action
| Name | Required | Description | Default |
|---|---|---|---|
| actionId | Yes | The action ID to update | |
| __requestBody | Yes | Request body | |
| subscriptionId | Yes | The subscription that owns the action |
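The shape of `__requestBody` is not published in this listing; the review notes above suggest it carries webhook destination configuration. The `webhook`/`url` field names below are therefore a hypothetical illustration, not the documented schema.

```python
# Hypothetical update call: the __requestBody structure (webhook.url)
# is assumed from the review's mention of webhook destinations and is
# NOT confirmed by the published schema.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "subscription_actions-update",
        "arguments": {
            "subscriptionId": "sub_123",
            "actionId": "act_456",
            "__requestBody": {"webhook": {"url": "https://example.com/hooks/saber"}},
        },
    },
}
```

Since the annotations mark the tool idempotent, repeating this call with the same body should leave the action in the same configured state.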
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations correctly indicate idempotent, non-destructive write behavior, the description adds no context about what gets updated (specifically webhook destination URLs) or the operational impact. It fails to leverage the structured hints to explain that this modifies configuration data rather than state flags.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief at four words, avoiding bloat, but suffers from under-specification rather than effective conciseness. It is front-loaded but fails to earn its place by providing actionable information beyond the tool name itself.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the nested webhook configuration object and the presence of numerous sibling lifecycle tools, the description is inadequate. It omits critical context that this specifically updates webhook destinations and does not address the lack of output schema or explain the openWorldHint implications for extensible request bodies.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents the subscriptionId, actionId, and __requestBody parameters including the nested webhook structure. The description adds no parameter-specific guidance, but the high schema coverage establishes a baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Update a subscription action' is a tautology that restates the tool name with minimal modification. It fails to specify what aspects of a subscription action are updated (webhook configurations) and does not distinguish this modification tool from sibling state-change tools like subscription_actions-pause or subscription_actions-unpause.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided regarding when to use this tool versus siblings such as pause/unpause (which change active state) or create/delete (which manage lifecycle). There is no mention of prerequisites, required permissions, or conditions where this update operation would be inappropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
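Before publishing the file, a quick local sanity check can catch malformed JSON or a missing maintainer entry. The field names match the example above; the email is the placeholder from that example.

```python
import json

# Parse and sanity-check a glama.json payload locally before serving it
# from /.well-known/glama.json on the server's domain.
doc = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")
assert doc["$schema"].startswith("https://glama.ai/"), "unexpected schema URL"
assert doc["maintainers"], "at least one maintainer is required"
assert all("email" in m for m in doc["maintainers"]), "each maintainer needs an email"
```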
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!