Glama

Utilify

Server Details

AI agents compare and sign up for Texas utility plans: electricity, internet, gas, water, trash.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: utilify-io/utilify-mcp
GitHub Stars: 0

Tool Descriptions: B

Average 3.5/5 across 8 of 8 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, such as search_utility_providers for finding options and initiate_signup for enrollment. get_provider_details and compare_providers could be confused, since both involve provider information, though their descriptions clarify that one returns details for a single provider and the other a comparison across several.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with snake_case, such as capture_solar_interest and check_signup_status, making them predictable and easy to understand across the set.

Tool Count: 5/5

With 8 tools, the set is well-scoped for a utility provider management server, covering key actions like search, comparison, signup, and status checks without being overwhelming or too sparse.

Completeness: 4/5

The tools cover core utility management workflows, including search, comparison, signup, and status tracking, with minor gaps such as no explicit tool for canceling or modifying existing utility services, but agents can likely work around this.

Available Tools

8 tools
capture_solar_interest (Grade: A)

Record a homeowner's interest in rooftop solar for follow-up with a licensed Texas solar installer. Returns a lead ID and confirms next steps. Only use this when the user has explicitly opted in and has confirmed they own (or will own) the home. Utilify routes leads to installer partners (SunPower, Sunrun, Palmetto, and independent TX installers) and may earn a referral fee.

Parameters (JSON Schema)

Name | Required | Description
email | No | Homeowner email. Either email or phone is required so the installer can reach out.
phone | No | Homeowner phone. Either email or phone is required so the installer can reach out.
address | Yes | Full service address including city, state, and ZIP code
last_name | No | Homeowner last name
first_name | No | Homeowner first name
session_id | No | Optional agent session ID for attribution tracking
move_in_date | No | ISO 8601 move-in date, if applicable
interest_level | No | How close to decision. 'curious' = may consider later; 'researching' = actively comparing; 'ready' = wants quotes now.
estimated_monthly_bill | No | Current or expected monthly electricity bill in USD
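As a concrete illustration, a call's arguments might look like the following. All values, including the address and contact details, are hypothetical and only show the shapes the schema expects:

```json
{
  "address": "123 Oak Ln, Austin, TX 78704",
  "email": "jane@example.com",
  "first_name": "Jane",
  "last_name": "Doe",
  "interest_level": "researching",
  "estimated_monthly_bill": 180
}
```

Note that although neither field is individually required, at least one of email or phone must be supplied so the installer has a contact channel.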
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond the openWorldHint annotation. It discloses that the tool 'returns a lead ID and confirms next steps,' describes the referral model ('Utilify routes leads to installer partners... and may earn a referral fee'), and specifies the geographic scope ('Texas solar installer'). However, it doesn't mention rate limits, authentication requirements, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with four sentences that each serve a distinct purpose: stating the tool's function, describing what it returns, specifying usage conditions, and explaining the business model. There's no redundant information, and the most critical guidance ('Only use this when...') appears early.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no output schema and only openWorldHint annotation, the description provides good contextual completeness. It explains what the tool returns (lead ID and next steps), the business context (referral routing), and usage constraints. However, it doesn't detail error cases or response formats, leaving some gaps for the agent to infer.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all 9 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation but doesn't provide additional semantic context about how parameters interact or affect outcomes.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Record a homeowner's interest in rooftop solar') and resource ('for follow-up with a licensed Texas solar installer'), distinguishing it from sibling tools like 'check_signup_status' or 'compare_providers' which serve different functions. It explicitly mentions the referral fee aspect, which further clarifies its business purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: 'Only use this when the user has explicitly opted in and has confirmed they own (or will own) the home.' This clearly defines when to use the tool and implicitly excludes scenarios where these conditions aren't met, offering strong guidance for the AI agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_signup_status (Grade: C)

Check the current status of a utility signup previously initiated through Utilify.

Parameters (JSON Schema)

Name | Required | Description
signup_id | Yes | The signup ID returned by initiate_signup
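A minimal illustrative call; the ID format shown is a hypothetical placeholder, since the real value is whatever initiate_signup returned:

```json
{
  "signup_id": "signup_123abc"
}
```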
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool checks status, implying a read-only operation, but doesn't disclose key traits like authentication needs, rate limits, error handling, or what the status response includes. For a tool with no annotations, this is insufficient to inform safe and effective use.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without redundancy. It's appropriately sized for a simple tool and front-loaded with essential information, making it easy to parse quickly. Every word earns its place, with no wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, no output schema, no annotations), the description is incomplete. It lacks details on what the status check returns, potential outcomes, error conditions, or integration with sibling tools. For a tool that likely returns structured status data, this omission leaves the agent without enough context to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no parameter semantics beyond what the input schema provides. With 100% schema description coverage, the schema already documents the single required parameter 'signup_id' and its purpose. The description doesn't elaborate on format, validation, or examples, so it meets the baseline for high schema coverage but adds no extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Check the current status of a utility signup previously initiated through Utilify.' It specifies the verb ('check'), resource ('utility signup'), and scope ('previously initiated'), though it doesn't explicitly differentiate from sibling tools like 'initiate_signup' beyond the temporal aspect. This makes it clear but not fully sibling-distinctive.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance: it implies usage for checking status after initiation, but offers no explicit when-to-use rules, exclusions, or alternatives. For example, it doesn't clarify if this should be used instead of other tools for status queries or what prerequisites exist beyond having a signup ID. This leaves significant gaps in usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_providers (Grade: B)

Compare two or more utility providers side by side. Returns structured comparison across price, contract terms, features, and ratings.

Parameters (JSON Schema)

Name | Required | Description
address | Yes | Full street address for plan availability
provider_slugs | Yes | Provider slugs to compare (2-5)
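An illustrative arguments payload; the slugs shown are hypothetical placeholders rather than confirmed identifiers from this server, and in practice would come from search_utility_providers:

```json
{
  "address": "123 Oak Ln, Austin, TX 78704",
  "provider_slugs": ["provider-a", "provider-b"]
}
```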
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions the tool returns structured comparison data but doesn't disclose behavioral traits like whether it's read-only (implied by 'compare'), error conditions, rate limits, authentication needs, or data freshness. For a tool with no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with zero waste: the first states the purpose and scope, the second specifies the comparison dimensions. It's appropriately sized and front-loaded, making every sentence earn its place without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 2 parameters with full schema coverage and no output schema, the description is moderately complete. It covers the tool's purpose and output structure but lacks details on behavioral context (e.g., errors, limits) and doesn't fully compensate for missing annotations. For a comparison tool with no output schema, more guidance on return format would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('provider_slugs' and 'address') adequately. The description adds minimal value beyond the schema, mentioning 'two or more' providers (implied by schema's minItems=2) and 'side by side' comparison, but no additional syntax or format details. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Compare two or more utility providers side by side' specifies the verb (compare) and resource (utility providers). It distinguishes from siblings like 'get_provider_details' (single provider) and 'search_utility_providers' (searching rather than comparison), though not explicitly named. The scope 'side by side' and comparison dimensions (price, contract terms, features, ratings) add specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context: comparing multiple providers (2-5 per schema) for a given address. However, it doesn't explicitly state when to use this vs. alternatives like 'get_provider_details' for single providers or 'search_utility_providers' for discovery. No exclusions or prerequisites are mentioned, leaving gaps in guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_move_checklist (Grade: A)

Generate a personalized utility setup checklist based on move-in address and date. Tracks what has been set up vs. what still needs attention. Pass tenancy='rent' or tenancy='own' for tenant/owner-specific advisories (e.g., landlord-handled water/trash for renters, solar-interest capture for buyers).

Parameters (JSON Schema)

Name | Required | Description
address | Yes | Full move-in address including city, state, and ZIP code
tenancy | No | Whether the user is renting or buying their new place. Shapes the checklist and unlocks homeowner cross-sells.
move_date | No | ISO 8601 date for the planned move-in
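An illustrative call for a renter moving in on a given date (all values hypothetical):

```json
{
  "address": "123 Oak Ln, Austin, TX 78704",
  "move_date": "2025-08-01",
  "tenancy": "rent"
}
```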
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses behavioral traits such as generating a checklist and tracking setup status, but does not mention potential limitations like data sources, update frequency, or error handling. It adds some context but lacks depth for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized, with three concise sentences front-loaded with the main purpose. Every sentence earns its place by stating the tool's function, its tracking capability, and the tenancy-specific advisories without unnecessary detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is adequate but incomplete. It explains the purpose and tracking feature but does not cover return values, error cases, or integration with sibling tools, leaving gaps for an AI agent to infer behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters ('address', 'tenancy', and 'move_date') in detail. The description adds some value by implying these parameters drive personalization and by enumerating the tenancy values, but provides no syntax or format details beyond what the schema specifies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('generate', 'tracks') and resource ('personalized utility setup checklist'), distinguishing it from sibling tools like 'check_signup_status' or 'compare_providers' which focus on different aspects of utility management.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through 'move-in address and date' but does not explicitly state when to use this tool versus alternatives like 'search_utility_providers' or 'initiate_signup'. It provides basic context but lacks explicit guidance on exclusions or comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_promotions (Grade: C)

Get current promotions, deals, coupons, and affiliate offers for utility providers at a given address.

Parameters (JSON Schema)

Name | Required | Description
address | Yes | Full street address including city, state, and ZIP code
utility_types | No | Filter promotions by utility type
provider_slugs | No | Filter promotions by specific provider slugs
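An illustrative call filtering to two utility types. The utility_types values shown are assumptions inferred from the server's tagline; the accepted enum is defined by the schema:

```json
{
  "address": "123 Oak Ln, Austin, TX 78704",
  "utility_types": ["electricity", "internet"]
}
```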
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool retrieves promotions but lacks details on permissions required, rate limits, data freshness, or response format. This leaves significant gaps in understanding how the tool behaves in practice.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the core purpose without unnecessary words. It's front-loaded with the main action and resources, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description is insufficient. It doesn't explain what the return data looks like (e.g., list format, fields included), error conditions, or behavioral constraints like authentication needs or rate limits, leaving the agent with incomplete operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds marginal value by implying the address is used to locate utility providers, but doesn't provide additional context beyond what's in the schema, such as format examples or usage tips.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and target resources ('current promotions, deals, coupons, and affiliate offers for utility providers'), making the purpose immediately understandable. However, it doesn't explicitly differentiate this tool from sibling tools like 'compare_providers' or 'search_utility_providers' that might also involve promotional information, which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance by specifying 'at a given address,' but offers no explicit advice on when to use this tool versus alternatives like 'compare_providers' or 'search_utility_providers.' There's no mention of prerequisites, exclusions, or optimal scenarios for its use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_provider_details (Grade: B)

Get detailed information about a specific utility provider including available plans, pricing, contract terms, and signup requirements.

Parameters (JSON Schema)

Name | Required | Description
zip_code | No | ZIP code to check plan availability and pricing
provider_slug | Yes | The provider slug identifier
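An illustrative call; the slug is a hypothetical placeholder, which would in practice be obtained from search_utility_providers:

```json
{
  "provider_slug": "provider-a",
  "zip_code": "78704"
}
```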
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It describes a read operation ('Get'), but doesn't disclose behavioral traits such as authentication needs, rate limits, error conditions, or what happens if the provider_slug is invalid. This is a significant gap for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose and lists key information types. It avoids unnecessary words, though it could be slightly more structured (e.g., separating core function from details).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete. It doesn't explain return values, error handling, or behavioral constraints, which are crucial for a tool that retrieves detailed information. The description alone is inadequate for safe and effective use by an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters. The description adds context by implying that zip_code affects plan availability and pricing, but doesn't provide additional syntax or format details beyond what the schema provides. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('detailed information about a specific utility provider'), specifying what information is retrieved (plans, pricing, terms, requirements). It distinguishes from siblings like 'search_utility_providers' (which likely lists providers) by focusing on details for one provider, though it doesn't explicitly name alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when detailed provider information is needed, but doesn't explicitly state when to use this tool versus alternatives like 'compare_providers' or 'search_utility_providers'. No exclusions or prerequisites are mentioned, leaving some ambiguity about context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

initiate_signup (Grade: A, Destructive)

Start the signup/enrollment process with a utility provider. Returns a signup URL, phone number, or begins API enrollment. The user must confirm before calling this tool.

Parameters (JSON Schema)

Name | Required | Description
address | Yes | Full service address including city, state, and ZIP code
plan_id | Yes | The specific plan to enroll in
session_id | No | Optional session ID for tracking
provider_id | Yes | The provider to sign up with
move_in_date | Yes | ISO 8601 date for desired service start
customer_info | Yes | Customer contact information
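An illustrative payload. All identifiers and contact details are hypothetical, and the internal structure of customer_info is an assumption, since the schema describes it only as 'Customer contact information':

```json
{
  "provider_id": "provider-a",
  "plan_id": "plan-12mo-fixed",
  "address": "123 Oak Ln, Austin, TX 78704",
  "move_in_date": "2025-08-01",
  "customer_info": {
    "first_name": "Jane",
    "last_name": "Doe",
    "email": "jane@example.com",
    "phone": "512-555-0100"
  }
}
```

Because this tool is flagged destructive, an agent should obtain explicit user confirmation before sending a payload like this.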
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond the destructiveHint annotation. It discloses that the tool 'Returns a signup URL, phone number, or begins API enrollment,' clarifying possible outcomes. This compensates for the lack of output schema. The annotation indicates destructiveHint=true, and the description aligns by describing an initiation action without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with three sentences: the first states the purpose, the second the possible return values, and the third adds a usage prerequisite. There's minimal waste, though it could be slightly more structured (e.g., separating return types from purpose).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, destructiveHint annotation, no output schema), the description is reasonably complete. It clarifies the tool's purpose, return values, and a prerequisite, addressing gaps from missing output schema. However, it could improve by detailing error cases or integration specifics for a signup process.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 6 parameters. The description adds no parameter-specific information beyond what's in the schema. It mentions the tool's purpose but doesn't explain parameter meanings or relationships, resulting in a baseline score of 3 as the schema carries the burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Start the signup/enrollment process with a utility provider.' It specifies the verb ('start') and resource ('signup/enrollment process'), distinguishing it from siblings like check_signup_status or compare_providers. However, it doesn't explicitly differentiate from all siblings (e.g., get_move_checklist might be related but not clearly contrasted).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage guidance: 'The user must confirm before calling this tool.' This implies a prerequisite but doesn't specify when to use this tool versus alternatives like check_signup_status or get_provider_details. It lacks explicit when/when-not scenarios or named alternatives, leaving usage context partially implied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_utility_providers (Grade: A)

Search for available utility providers (electricity, gas, internet, water, trash, security) at a specific address. Returns providers with basic info, a classified plan type (fixed / free_nights / solar_buyback / 100_renewable / etc.), and whether the cheapest plan is rental-friendly. Pass tenancy='rent' to prefer short-contract plans; pass tenancy='own' to surface solar-buyback options and a solar-interest capture offer.

Parameters (JSON Schema)

Name | Required | Description
address | Yes | Full street address including city, state, and ZIP code
tenancy | No | Whether the user is renting or buying. Changes plan preferences and enables homeowner-only cross-sells (solar).
utility_types | No | Filter by utility types. If omitted, returns all available types.
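An illustrative call for a homeowner looking only at electricity plans (all values hypothetical):

```json
{
  "address": "123 Oak Ln, Austin, TX 78704",
  "tenancy": "own",
  "utility_types": ["electricity"]
}
```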
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses the tool's behavior by specifying what it returns (providers with basic info, a classified plan type, and the rental-friendliness of the cheapest plan), which helps set expectations. However, it lacks details on error handling, rate limits, authentication needs, or whether results are cached or real-time, leaving gaps for a search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, adds output details in the second, and closes with tenancy-specific guidance. Every sentence earns its place by specifying the resource, action, and output without redundancy, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 3 parameters, 100% schema coverage, and no output schema, the description adequately covers the purpose and output format. However, without annotations or an output schema, it lacks details on response structure (e.g., pagination, error cases) and behavioral constraints, leaving some gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds value by clarifying the utility types in scope and how tenancy changes the results, complementing the schema's note that omitting 'utility_types' returns all types. This extra context justifies a score above the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search for available utility providers') and resource ('at a specific address'), listing the exact utility types covered (electricity, gas, internet, water, trash, security). It distinguishes from siblings by focusing on address-based provider discovery rather than status checks, comparisons, or signup processes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through 'at a specific address' and the utility types list, suggesting this is for finding providers for a location. However, it doesn't explicitly state when to use this versus alternatives like 'compare_providers' or 'get_provider_details', nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
