Utilify
Server Details
AI agents compare and sign up for Texas utility plans (electricity, internet, gas, water, trash) at any ZIP code via MCP. No authentication required. (More US states coming.)
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.5/5 across 8 of 8 tools scored. Lowest: 2.9/5.
Each tool has a clearly distinct purpose with no overlap: capture_solar_interest records leads, check_signup_status monitors enrollments, compare_providers offers side-by-side analysis, get_move_checklist generates setup tasks, get_promotions finds deals, get_provider_details gives plan specifics, initiate_signup starts enrollments, and search_utility_providers lists available options. The descriptions reinforce unique functions, eliminating ambiguity.
All tool names follow a consistent verb_noun pattern (e.g., capture_solar_interest, check_signup_status, compare_providers), using snake_case throughout. The verbs are descriptive and aligned with each tool's action, making the set predictable and easy to navigate.
With 8 tools, the count is well-scoped for a utility provider management server, covering key operations like search, comparison, signup, and support tasks. Each tool serves a specific role without redundancy, fitting typical domain needs without being too sparse or bloated.
The tool set provides comprehensive coverage for utility provider interactions, including search, comparison, signup initiation, status checks, promotions, and move-in support. A minor gap exists in lacking tools for updating or canceling signups, but agents can work around this using existing tools like check_signup_status for monitoring.
Available Tools
8 tools

capture_solar_interest (Grade: A)
Record a homeowner's interest in rooftop solar for follow-up with a licensed Texas solar installer. Returns a lead ID and confirms next steps. Only use this when the user has explicitly opted in and has confirmed they own (or will own) the home. Utilify routes leads to installer partners (SunPower, Sunrun, Palmetto, and independent TX installers) and may earn a referral fee.
| Name | Required | Description | Default |
|---|---|---|---|
| email | No | Homeowner email. Either email or phone is required so the installer can reach out. | |
| phone | No | Homeowner phone. Either email or phone is required so the installer can reach out. | |
| address | Yes | Full service address including city, state, and ZIP code | |
| last_name | No | Homeowner last name | |
| first_name | No | Homeowner first name | |
| session_id | No | Optional agent session ID for attribution tracking | |
| move_in_date | No | ISO 8601 move-in date, if applicable | |
| interest_level | No | How close to decision. 'curious' = may consider later; 'researching' = actively comparing; 'ready' = wants quotes now. | |
| estimated_monthly_bill | No | Current or expected monthly electricity bill in USD |
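To make the call shape concrete, here is a minimal sketch using the TypeScript MCP SDK over Streamable HTTP. The endpoint URL and every argument value are placeholders, not real data:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint -- the listing above does not expose the real URL.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));
const client = new Client({ name: "utilify-example", version: "1.0.0" });
await client.connect(transport);

// All values are illustrative; capture a lead only after explicit user opt-in.
const result = await client.callTool({
  name: "capture_solar_interest",
  arguments: {
    address: "123 Example St, Austin, TX 78701", // required
    email: "homeowner@example.com",              // email or phone required
    interest_level: "researching",
    estimated_monthly_bill: 180,
  },
});
console.log(result.content);
```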
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond the 'openWorldHint' annotation. It discloses that the tool returns a lead ID and confirms next steps, describes the referral fee mechanism, and specifies that leads are routed to installer partners. While it doesn't mention rate limits or authentication needs, it provides meaningful operational details that the annotation alone doesn't cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences that each serve distinct purposes: the first states the core function and output, the second provides critical usage constraints and business context. There's no wasted language, and important information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a lead capture tool with comprehensive schema coverage but no output schema, the description provides good context about what happens after invocation (lead routing, referral fees) and clear usage boundaries. It could be more complete by explicitly mentioning response format details or error conditions, but it covers the essential operational context well given the available structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all 9 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema properties. The baseline score of 3 reflects adequate coverage through the schema alone, with no additional semantic value from the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Record a homeowner's interest in rooftop solar') and resource ('for follow-up with a licensed Texas solar installer'), distinguishing it from sibling tools like 'check_signup_status' or 'compare_providers' which serve different functions. It explicitly mentions the referral fee mechanism, further clarifying its unique purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage conditions: 'Only use this when the user has explicitly opted in and has confirmed they own (or will own) the home.' It also mentions the business context (routing leads to specific installers), which helps differentiate it from alternatives like 'initiate_signup' or 'get_promotions' that might handle different aspects of the solar process.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_signup_status (Grade: B)
Check the current status of a utility signup previously initiated through Utilify.
| Name | Required | Description | Default |
|---|---|---|---|
| signup_id | Yes | The signup ID returned by initiate_signup |
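A minimal sketch of the corresponding call, again with a placeholder endpoint; the signup ID shown is invented and would come from a prior initiate_signup response:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp")); // placeholder URL
const client = new Client({ name: "utilify-example", version: "1.0.0" });
await client.connect(transport);

// "su_12345" is a made-up ID; use the value returned by initiate_signup.
const result = await client.callTool({
  name: "check_signup_status",
  arguments: { signup_id: "su_12345" },
});
console.log(result.content);
```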
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions checking status but doesn't describe what the status includes (e.g., pending, completed, failed), whether it's a read-only operation, authentication needs, rate limits, or error handling. This leaves significant gaps for a tool that likely interacts with user data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary details. Every word earns its place, making it highly concise and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is adequate as a minimum viable explanation. However, it lacks details on what the status check returns or behavioral traits, which could be important for an AI agent to use it effectively in workflows.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the parameter 'signup_id' documented as 'The signup ID returned by initiate_signup.' The description doesn't add further meaning beyond this, such as format examples or constraints, so it meets the baseline of 3 where the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Check the current status of a utility signup previously initiated through Utilify.' It specifies the verb ('check') and resource ('utility signup status'), though it doesn't explicitly differentiate from sibling tools like 'initiate_signup' or 'get_move_checklist' beyond mentioning the signup was 'previously initiated.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by stating it checks 'a utility signup previously initiated,' suggesting it should be used after 'initiate_signup.' However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'get_move_checklist' or 'compare_providers,' nor does it mention exclusions or prerequisites beyond the signup ID.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_providers (Grade: C)
Compare two or more utility providers side by side. Returns structured comparison across price, contract terms, features, and ratings.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Full street address for plan availability | |
| provider_slugs | Yes | Provider slugs to compare (2-5) |
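As a sketch, a comparison call might look like the following; the URL, address, and slugs are all invented placeholders (real slugs would come from search_utility_providers):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp")); // placeholder URL
const client = new Client({ name: "utilify-example", version: "1.0.0" });
await client.connect(transport);

// Slugs are invented examples; the schema allows 2-5 of them.
const result = await client.callTool({
  name: "compare_providers",
  arguments: {
    address: "456 Sample Ave, Houston, TX 77002",
    provider_slugs: ["example-energy", "sample-power"],
  },
});
console.log(result.content);
```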
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns a 'structured comparison' but lacks details on format, pagination, error handling, or rate limits. For a tool with no annotations and no output schema, this is insufficient to inform the agent about expected behavior beyond the basic operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action ('compare two or more utility providers side by side') and specifies the return content. Every word contributes to understanding the tool's purpose without redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete. It doesn't cover behavioral aspects like response format, error cases, or dependencies, leaving gaps for the agent. While the purpose is clear, the overall context for safe and effective use is inadequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters (address and provider_slugs). The description adds minimal value beyond the schema by implying the comparison uses the address for plan availability, but it doesn't explain parameter interactions or provide additional context. This meets the baseline of 3 when schema coverage is high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: comparing utility providers across specific dimensions (price, contract terms, features, ratings). It uses the verb 'compare' with the resource 'utility providers' and specifies the scope ('side by side'), but it doesn't explicitly differentiate from sibling tools like 'search_utility_providers' or 'get_provider_details', which prevents a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'search_utility_providers' or 'get_provider_details'. It mentions comparing 'two or more' providers, but this is implied by the parameter constraints (minItems: 2) rather than explicit usage advice. No exclusions or prerequisites are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_move_checklist (Grade: A)
Generate a personalized utility setup checklist based on move-in address and date. Tracks what has been set up vs. what still needs attention. Pass tenancy='rent' or tenancy='own' for tenant/owner-specific advisories (e.g., landlord-handled water/trash for renters, solar-interest capture for buyers).
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Full move-in address including city, state, and ZIP code | |
| tenancy | No | Whether the user is renting or buying their new place. Shapes the checklist and unlocks homeowner cross-sells. | |
| move_date | No | ISO 8601 date for the planned move-in |
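A hedged example call; the URL, address, and date are placeholders:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp")); // placeholder URL
const client = new Client({ name: "utilify-example", version: "1.0.0" });
await client.connect(transport);

const result = await client.callTool({
  name: "get_move_checklist",
  arguments: {
    address: "789 Placeholder Ln, Dallas, TX 75201",
    tenancy: "rent",         // or "own" for homeowner-specific advisories
    move_date: "2025-08-01", // ISO 8601
  },
});
console.log(result.content);
```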
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions tracking setup status, which adds some context, but does not cover critical aspects such as whether this tool requires authentication, how the checklist is generated (e.g., based on local utility data), rate limits, or error handling. The description is insufficient for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose and followed by additional functionality. Every sentence earns its place by adding value without redundancy, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete for a tool that generates personalized checklists. It lacks details on what the output includes (e.g., list items, status tracking format), how personalization works, or any behavioral traits. While concise, it does not compensate for the missing structured data, leaving gaps in understanding the tool's full behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all three parameters ('address', 'tenancy', and 'move_date'). Apart from restating the tenancy values, the description adds no meaning beyond what the schema provides, such as explaining how the address influences checklist personalization or the significance of the move date. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Generate a personalized utility setup checklist') and resource ('based on move-in address and date'), with additional functionality ('Tracks what has been set up vs. what still needs attention'). It distinctly differentiates from sibling tools like 'check_signup_status' or 'search_utility_providers' by focusing on checklist generation rather than status checking or provider searching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('based on move-in address and date') but does not explicitly state when to use this tool versus alternatives like 'initiate_signup' or 'compare_providers'. It suggests a use case for move-in planning but lacks guidance on prerequisites, exclusions, or specific scenarios where this tool is preferred over others.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_promotions (Grade: A)
Get current promotions, deals, coupons, and affiliate offers for utility providers at a given address.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Full street address including city, state, and ZIP code | |
| utility_types | No | Filter promotions by utility type | |
| provider_slugs | No | Filter promotions by specific provider slugs |
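An illustrative call; the URL and address are placeholders, and the utility_types values are assumed since the listing does not enumerate the accepted filter values:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp")); // placeholder URL
const client = new Client({ name: "utilify-example", version: "1.0.0" });
await client.connect(transport);

const result = await client.callTool({
  name: "get_promotions",
  arguments: {
    address: "321 Demo Blvd, San Antonio, TX 78205",
    utility_types: ["electricity", "internet"], // assumed filter values; omit to get all types
  },
});
console.log(result.content);
```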
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states what the tool does but does not disclose critical behavioral traits such as whether it requires authentication, rate limits, data freshness, or what the output format looks like (e.g., list of promotions with details). For a tool with no annotations, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Get current promotions...') and includes essential context ('for utility providers at a given address'). There is no wasted verbiage, and every word contributes to understanding the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no annotations and no output schema, the description is incomplete for a tool that likely returns complex promotion data. It adequately covers the purpose and input context but lacks details on behavioral aspects (e.g., authentication, output structure) that are crucial for an AI agent to use it correctly. It meets minimum viability but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description mentions 'at a given address,' which aligns with the required 'address' parameter, but adds no additional semantic context beyond what the schema provides (e.g., format examples or filtering logic). Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get') and resource ('current promotions, deals, coupons, and affiliate offers for utility providers'), and distinguishes itself from siblings like 'search_utility_providers' (which likely lists providers) or 'compare_providers' (which likely compares them). It specifies the scope is for 'utility providers at a given address,' making the purpose explicit and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when promotions are needed for a specific address, but it does not explicitly state when to use this tool versus alternatives like 'search_utility_providers' or 'compare_providers.' It provides some context (address-based) but lacks clear exclusions or named alternatives, leaving room for ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_provider_details (Grade: B)
Get detailed information about a specific utility provider including available plans, pricing, contract terms, and signup requirements.
| Name | Required | Description | Default |
|---|---|---|---|
| zip_code | No | ZIP code to check plan availability and pricing | |
| provider_slug | Yes | The provider slug identifier |
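A sketch of a details lookup; the slug and ZIP code are invented examples (real slugs would come from search_utility_providers):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp")); // placeholder URL
const client = new Client({ name: "utilify-example", version: "1.0.0" });
await client.connect(transport);

const result = await client.callTool({
  name: "get_provider_details",
  arguments: {
    provider_slug: "example-energy", // invented slug
    zip_code: "78701",               // optional; scopes plan availability and pricing
  },
});
console.log(result.content);
```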
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states it's a read operation ('Get'), but doesn't mention potential limitations like rate limits, authentication needs, error conditions, or what happens if the provider_slug is invalid. For a tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose and lists key information retrieved. Every word earns its place with no redundancy or waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description provides basic purpose but lacks details on behavioral traits and return values. It's adequate as a minimum viable description for a read tool, but doesn't fully compensate for the missing structured data, leaving room for improvement in transparency.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters. The description adds no additional parameter semantics beyond what's in the schema (e.g., it doesn't explain format of provider_slug or when zip_code is needed). Baseline 3 is appropriate when the schema handles parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and resource 'detailed information about a specific utility provider', specifying what information is retrieved (plans, pricing, contract terms, signup requirements). It doesn't explicitly differentiate from sibling tools like 'search_utility_providers' or 'compare_providers', which keeps it from a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'search_utility_providers' (for broader searches) or 'compare_providers' (for comparisons). The description implies usage for a specific provider but doesn't clarify exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
initiate_signup (Grade: A, Destructive)
Start the signup/enrollment process with a utility provider. Returns a signup URL, phone number, or begins API enrollment. The user must confirm before calling this tool.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Full service address including city, state, and ZIP code | |
| plan_id | Yes | The specific plan to enroll in | |
| session_id | No | Optional session ID for tracking | |
| provider_id | Yes | The provider to sign up with | |
| move_in_date | Yes | ISO 8601 date for desired service start | |
| customer_info | Yes | Customer contact information |
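Because this tool is flagged destructive, an agent should obtain explicit user confirmation before invoking it. The sketch below uses placeholder values throughout, and the shape of customer_info is an assumption since the listing documents it only as "Customer contact information":

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp")); // placeholder URL
const client = new Client({ name: "utilify-example", version: "1.0.0" });
await client.connect(transport);

// Call only after the user has explicitly confirmed the enrollment.
const result = await client.callTool({
  name: "initiate_signup",
  arguments: {
    address: "123 Example St, Austin, TX 78701",
    provider_id: "example-energy", // invented ID
    plan_id: "fixed-12mo",         // invented plan ID
    move_in_date: "2025-08-01",    // ISO 8601
    // customer_info fields are assumed; the listing does not specify the shape.
    customer_info: { first_name: "Pat", last_name: "Example", email: "pat@example.com" },
  },
});
console.log(result.content);
```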
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond the 'destructiveHint: true' annotation. It specifies the possible return types ('Returns a signup URL, phone number, or begins API enrollment'), which helps the agent understand the outcome. It also mentions the user confirmation requirement, adding operational context. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with just two sentences. The first sentence states the purpose and return values, while the second provides a critical usage guideline. Every word serves a purpose with no redundancy or fluff, making it front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema, the description provides some context (return types, confirmation requirement) but lacks details on error handling, authentication needs, or what 'begins API enrollment' entails operationally. Given the complexity (6 parameters, nested objects) and the destructive annotation, more behavioral transparency would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description doesn't add any parameter-specific semantics beyond what's in the schema. It mentions the tool's purpose but doesn't clarify how parameters like 'provider_id' or 'plan_id' relate to that purpose beyond what the schema descriptions provide.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Start the signup/enrollment process with a utility provider.' It specifies the action (start signup) and resource (utility provider enrollment). However, it doesn't explicitly differentiate from sibling tools like 'check_signup_status' or 'get_provider_details' beyond the 'start' action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides one usage guideline: 'The user must confirm before calling this tool.' This implies a prerequisite but doesn't specify when to use this tool versus alternatives like 'compare_providers' or 'search_utility_providers' for initial research. The guidance is limited to a single constraint.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_utility_providers (Grade: A)
Search for available utility providers (electricity, gas, internet, water, trash, security) at a specific address. Returns providers with basic info, a classified plan type (fixed / free_nights / solar_buyback / 100_renewable / etc.), and whether the cheapest plan is rental-friendly. Pass tenancy='rent' to prefer short-contract plans; pass tenancy='own' to surface solar-buyback options and a solar-interest capture offer.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Full street address including city, state, and ZIP code | |
| tenancy | No | Whether the user is renting or buying. Changes plan preferences and enables homeowner-only cross-sells (solar). | |
| utility_types | No | Filter by utility types. If omitted, returns all available types. |
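A sketch of a search call; the URL and address are placeholders, and the utility_types values are assumed:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp")); // placeholder URL
const client = new Client({ name: "utilify-example", version: "1.0.0" });
await client.connect(transport);

const result = await client.callTool({
  name: "search_utility_providers",
  arguments: {
    address: "123 Example St, Austin, TX 78701",
    tenancy: "rent",                            // or "own" to surface solar-buyback options
    utility_types: ["electricity", "internet"], // assumed values; omit to get all available types
  },
});
console.log(result.content);
```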
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It discloses the tool's read-only nature (implied by 'Search' and 'Returns'), but lacks details on behavioral traits like rate limits, authentication needs, error handling, or pagination. The description does not contradict annotations (none exist).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by a concise second sentence explaining the return value. Both sentences earn their place by adding clarity without redundancy, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description adequately covers the tool's purpose and basic behavior. However, for a search tool with three parameters, it lacks details on output format (beyond the basic info, plan classification, and rental-friendliness it names), error scenarios, or data freshness, leaving gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds marginal value by mentioning the address requirement and utility types, but does not provide additional semantics beyond what the schema specifies (e.g., format examples or edge cases).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search for available utility providers') and the resource ('at a specific address'), listing all utility types covered. It distinguishes itself from siblings like 'get_provider_details' or 'compare_providers' by focusing on address-based availability lookup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'at a specific address' and listing utility types, but does not explicitly state when to use this tool versus alternatives like 'compare_providers' or 'get_provider_details'. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.