mcp-server
Server Details
Data center intelligence: 20,000+ facilities, M&A deals, site scoring, and market analytics.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 2.3/5 across 20 of 20 tools scored. Lowest: 1.1/5.
Each tool targets a distinct area of data center intelligence (e.g., site analysis, energy, market, infrastructure, transactions), with clear boundaries and no overlapping purposes.
All tool names follow a consistent 'verb_noun' pattern using lowercase and underscores, with verbs like get_, list_, search_, analyze_, compare_ applied uniformly.
20 tools is on the higher side but justified by the broad scope of data center intelligence, covering site, energy, infrastructure, market, and more without feeling redundant.
The set covers most major aspects of data center intelligence (site, market, energy, infrastructure, transactions, news), though minor gaps like environmental impact beyond water could exist.
Available Tools
20 tools
analyze_site (Grade: C)
Evaluate location for data center suitability.
| Name | Required | Description | Default |
|---|---|---|---|
| lat | No | | |
| lon | No | | |
| state | No | | |
| capacity_mw | No | | |
| include_grid | No | | |
| include_risk | No | | |
| include_fiber | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose behavioral traits such as whether the tool performs a read-only analysis, requires specific permissions, or has side effects. The brief description fails to add behavioral context beyond the implied analysis.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely short (one sentence) but lacks substance. For a tool with 7 parameters it is under-specified: the brevity reflects missing detail rather than genuine conciseness, which would distill exactly the essential information this description omits.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 parameters, no output schema, no annotations), the description is severely incomplete. It does not explain the analysis results, return format, or how to interpret the output, leaving critical gaps for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 7 parameters with 0% description coverage, and the tool description adds no meaning to any parameter. Parameters like 'lat,' 'lon,' 'state,' 'capacity_mw,' and the boolean flags are left unexplained, forcing the agent to rely solely on parameter names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Evaluate' and the resource 'location for data center suitability,' making the tool's purpose understandable. It does not explicitly differentiate itself from sibling analytical tools like 'compare_sites,' though its focus on suitability evaluation does set it apart from the many retrieval-oriented 'get_*' tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as 'get_grid_data' or 'get_fiber_intel.' No exclusions or prerequisites are mentioned, leaving the agent to infer usage from context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
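To make these gaps concrete, below is a minimal sketch of the MCP `tools/call` request an agent might assemble for this tool. The JSON-RPC envelope follows the MCP specification; every argument value is an assumption, since the definition documents no formats, units, or defaults.

```python
import json

# Hypothetical tools/call payload for analyze_site. Parameter names come
# from the schema above; all values are guesses (units and formats are
# undocumented).
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_site",
        "arguments": {
            "lat": 39.0438,        # assumed: decimal degrees
            "lon": -77.4874,       # assumed: decimal degrees
            "capacity_mw": 100,    # assumed: planned load in megawatts
            "include_grid": True,  # assumed: toggles report sections
            "include_risk": True,
            "include_fiber": False,
        },
    },
}

# A real client would send this to the server's Streamable HTTP endpoint
# after completing the MCP initialize handshake.
print(json.dumps(payload, indent=2))
```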
compare_sites (Grade: C)
Compare 2-4 locations side-by-side.
| Name | Required | Description | Default |
|---|---|---|---|
| locations | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, and the description does not disclose any behavioral traits (e.g., mutability, authorization needs). The tool's output format and side effects are entirely unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no wasted words, but it is overly minimal. While concise, it omits essential details, making it less effective.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and only one undocumented parameter, the description fails to provide enough context for effective use. The agent lacks information on input format, output structure, or comparison criteria.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter 'locations' lacks a schema description (0% coverage), and the description adds no explanation of its expected format (e.g., addresses, IDs, or a list). The agent cannot infer how to invoke the tool correctly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (compare) and the resource (locations), with a specific range (2-4). This distinguishes it from sibling tools like 'analyze_site' which likely handles single sites.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool over alternatives. For instance, it is unclear whether it should be used instead of fetching individual 'get_*' data for manual comparison.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
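The ambiguity called out above is easy to illustrate: with 'locations' undocumented, an agent can only guess its shape. Both payloads below are assumptions; nothing in the definition confirms either.

```python
# Two plausible but unconfirmed shapes for the undocumented
# 'locations' parameter of compare_sites.
guess_a = {"locations": ["Ashburn, VA", "Dallas, TX"]}  # place names?
guess_b = {
    "locations": [
        {"lat": 39.04, "lon": -77.49},  # or coordinate objects?
        {"lat": 32.78, "lon": -96.80},
    ]
}
```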
get_agent_registry (Grade: C)
AI platforms connected to DC Hub.
No parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must disclose behavior. It states only what the tool returns ('AI platforms connected to DC Hub') without mentioning side effects, permissions, rate limits, or whether it is read-only. The behavioral impact is largely unaddressed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, consisting of a single noun phrase. While it is front-loaded and avoids verbosity, it is a fragment rather than a complete sentence. It communicates the core purpose but would be improved by a verb.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (no parameters, no output schema), the description covers the basic purpose. However, it lacks details about the data returned (e.g., names, IDs, connection details) or any prerequisites. It is minimally complete but leaves room for ambiguity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so the input schema provides no meaning. The description adds value by specifying the output context (AI platforms connected to DC Hub), which is all the information needed for parameter semantics. It compensates well for the sparse schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'AI platforms connected to DC Hub' gives a general idea but is a noun phrase rather than a clear verb+resource statement. It lacks a verb like 'list' or 'retrieve', making it somewhat vague. It is distinguished from siblings by the 'agent registry' name but does not elaborate on the nature of the AI platforms.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool vs. alternatives. With many sibling 'get_*' tools that may return different types of data, the description offers no context for selection or exclusion, leaving the agent to infer usage from the name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_backup_status (Grade: B)
Database backup status and data integrity.
No parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description does not disclose behavioral traits such as being read-only, requiring authentication, or having side effects. Since annotations are absent, the description carries the full burden but only states the nominal subject matter. No safety or mutation information is provided.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely short, consisting of a noun phrase. While there is no wasted text, it is overly terse and not a complete sentence. It is concise but lacks the structure of a clear statement of functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool, the description outlines the general topic but does not specify the format, scope, or time frame of the output. Without an output schema, more detail on return values would improve completeness. It is adequate but not thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, and the schema coverage is 100%. The description adds meaning by specifying that the output concerns backup status and data integrity, which is beyond the empty schema. Baseline for no parameters is 4, and the description justifies this score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly indicates the tool provides database backup status and data integrity information. It uses a specific subject matter but lacks a verb, making it slightly less direct. Among siblings, it is distinct in focus but does not differentiate its output from other 'get_*' tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. There is no mention of prerequisites, context, or when not to use it. The description does not help an agent decide between this and sibling tools like 'get_facility' or 'get_grid_intelligence'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_dchub_recommendation (Grade: D)
Pre-formatted DC Hub recommendation.
| Name | Required | Description | Default |
|---|---|---|---|
| context | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, and the description fails to disclose any behavioral traits such as side effects, required permissions, or return value characteristics. The agent has no insight into tool behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely short but at the expense of informativeness. While it is concise, it is under-specified and does not earn its place because it provides no useful information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 optional param, no output schema, no annotations), the description should at minimum explain the recommendation's source or format. It fails to meet even basic completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The sole parameter 'context' is not described in the schema (0% coverage) and the description does not clarify its purpose, type, or format. The description adds no value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Pre-formatted DC Hub recommendation' is a tautology that merely restates the tool name without specifying the verb or resource. It does not clarify what action the tool performs or what a 'DC Hub' refers to, providing no meaningful purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no indication of when to use this tool versus alternatives like 'analyze_site' or 'get_facility'. No context or exclusion criteria are provided, leaving the agent without guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_energy_prices (Grade: D)
Energy pricing: retail rates, gas, grid status.
| Name | Required | Description | Default |
|---|---|---|---|
| iso | No | | |
| state | No | | |
| data_type | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, and description does not disclose behavioral traits such as read-only, authentication requirements, rate limits, or data freshness. The description is too minimal to be useful.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very short but fails to convey essential information. This is under-specification, not conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters with no schema descriptions, no output schema, and no annotations, the description is radically incomplete. It does not help the agent understand input-output behavior or how to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% with no parameter descriptions. The description adds no explanation for the 'iso', 'state', or 'data_type' fields, leaving the agent without guidance on valid values or formatting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description lists topics (retail rates, gas, grid status) but lacks a specific verb or resource, making it unclear exactly what the tool does. It does not differentiate from siblings like get_grid_data or get_market_intel.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives. With many sibling tools providing similar data, this is a critical omission.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
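As a concrete illustration, an agent calling this tool has to invent every value. The arguments below are assumptions only; no list of valid ISO codes or data types is published.

```python
# Hypothetical arguments for get_energy_prices; every value is a guess.
arguments = {
    "iso": "PJM",           # assumed: an ISO/RTO code
    "state": "VA",          # assumed: two-letter US state code
    "data_type": "retail",  # assumed: maybe one of retail/gas/grid
}
```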
get_facility (Grade: C)
Get detailed info about a specific facility.
| Name | Required | Description | Default |
|---|---|---|---|
| facility_id | No | | |
| include_power | No | | |
| include_nearby | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, and the description does not disclose behavioral traits like read-only nature, performance implications, or authentication requirements. It simply states 'get detailed info' without elaborating on side effects or constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely short, which is efficient but lacks substance. It does not use structural elements like sections or bullet points to aid readability, and the brevity results in under-specification rather than conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given three parameters and no output schema, the description is too sparse. It does not explain what 'detailed info' entails, how the parameters affect the response, or what the return value looks like, leaving significant gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has three parameters (facility_id, include_power, include_nearby) but the description provides no explanation for any of them. With 0% schema description coverage, the description fails to clarify their meaning or usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Get detailed info about a specific facility,' which clearly identifies the verb (get) and resource (facility). It is distinct from sibling tools like 'search_facilities' which imply broader, non-specific retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool compared to alternatives such as 'analyze_site' or 'get_grid_intelligence'. There is no mention of prerequisites, context, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
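For illustration, here is a hypothetical call. The ID format is the key unknown: the schema never says whether facility_id is numeric, a slug, or a UUID.

```python
# Hypothetical arguments for get_facility; the ID format is assumed.
arguments = {
    "facility_id": "iad-01",  # assumed: could equally be numeric or a UUID
    "include_power": True,    # assumed: appends power details
    "include_nearby": False,  # assumed: omits nearby facilities
}
```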
get_fiber_intel (Grade: D)
Dark fiber routes, carrier networks, connectivity.
| Name | Required | Description | Default |
|---|---|---|---|
| carrier | No | | |
| route_type | No | | |
| include_sources | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description fails to disclose any behavioral traits such as side effects, idempotency, authorization needs, or rate limits. The description gives zero behavioral insight.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very short, but it is underspecified rather than concise. It lacks sufficient detail to be useful, so it does not achieve effective conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has three parameters, no output schema, and no annotations, the description is grossly incomplete. It provides no meaningful information for an AI agent to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description does not explain any of the three parameters (carrier, route_type, include_sources). The agent cannot infer what values to provide or their effects.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Dark fiber routes, carrier networks, connectivity.' lists topics but lacks a clear verb or action. It does not specify what the tool does (e.g., retrieve, list, search) and does not distinguish it from sibling tools like get_facility or get_grid_data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs. alternatives. The description does not mention context, prerequisites, or scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_grid_data (Grade: C)
Real-time electricity grid data for US ISOs.
| Name | Required | Description | Default |
|---|---|---|---|
| iso | No | | |
| metric | No | | |
| period | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must fully disclose behavioral traits. It only states the tool returns real-time data, but does not mention authentication, rate limits, data freshness, error handling, or any side effects. This is insufficient for safe and correct invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, which is concise, but it is under-specified to the point of being unhelpful. It front-loads 'Real-time electricity grid data' but fails to include essential details, making it incomplete rather than efficiently compact.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given three undocumented parameters, no annotations, no output schema, and many sibling tools, the description is critically incomplete. It does not explain parameter usage, distinguish from similar tools, or describe the response format. The agent cannot reliably use this tool based on the description alone.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has three parameters (iso, metric, period) with 0% description coverage. The description adds no meaning about valid values, formats, or ranges for these parameters. An AI agent has no idea what to provide for iso (e.g., 'PJM', 'CAISO') or metric (e.g., 'load', 'generation').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Real-time electricity grid data for US ISOs' indicates the tool returns grid data for US ISOs, but it lacks specificity on what metrics or actions are involved. It is somewhat vague and does not clearly differentiate from siblings like get_energy_prices or get_grid_intelligence.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No usage guidance is provided. There is no indication of when to use this tool versus alternatives, no prerequisites, and no exclusions. The description alone does not help an agent decide when this tool is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
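Using the candidate values the assessment itself floats, a call might look like the sketch below; none of these values are confirmed by the definition.

```python
# Hypothetical arguments for get_grid_data; all values are unconfirmed.
arguments = {
    "iso": "CAISO",    # assumed ISO code, e.g. 'PJM' or 'CAISO'
    "metric": "load",  # assumed metric, e.g. 'load' or 'generation'
    "period": "24h",   # assumed period syntax
}
```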
get_grid_intelligence (Grade: C)
Grid intelligence brief for a US ISO region.
| Name | Required | Description | Default |
|---|---|---|---|
| region_id | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits, but it only says 'brief' without confirming read-only nature, authorization needs, rate limits, or output format. The agent gets no safety or side-effect information.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short sentence, which is concise but lacks structure. It fronts the key purpose, but its brevity sacrifices necessary detail, making it borderline under-specified rather than efficiently concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with one parameter and no output schema, the description is incomplete: it does not explain return value (what is in the brief), nor does it cover parameter semantics. The agent lacks sufficient context to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter 'region_id' with no description coverage (0%). The description does not explain what values are valid (e.g., ISO codes like 'PJM'), expected format, or how to specify the region. The agent cannot infer correct input.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides a 'grid intelligence brief' for a US ISO region, indicating a specific verb (retrieve) and resource (intelligence brief). However, it does not differentiate from sibling tools like 'get_intelligence_index' or 'get_grid_data', missing an opportunity to clarify uniqueness.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_grid_data' or 'get_market_intel'. No context about prerequisites or exclusions is given, leaving the agent to infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_infrastructure (Grade: C)
Nearby substations, transmission lines, gas pipelines, power plants.
| Name | Required | Description | Default |
|---|---|---|---|
| lat | No | | |
| lon | No | | |
| layer | No | | |
| limit | No | | |
| radius_km | No | | |
| min_voltage_kv | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden but fails to disclose behavioral traits. It does not indicate read-only status, response format, side effects, authentication needs, or rate limits. Simply stating what is returned is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (a single phrase), but it lacks structure and essential information. Conciseness here sacrifices completeness, making it insufficient for effective tool use.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters, no output schema, and no annotations, the description is critically incomplete. The agent cannot determine how to use the tool, what the output looks like, or what constraints exist. A complete description would require much more detail.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not explain any parameters. Schema coverage is 0%, and the agent cannot infer meaning for 'layer', 'limit', or 'min_voltage_kv' from the description. Even 'lat', 'lon', and 'radius_km' are only implied by the word 'nearby' and are never explained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description lists specific resource types (substations, transmission lines, gas pipelines, power plants), making the tool's purpose clear. It differentiates from sibling tools like get_pipeline or get_grid_data by focusing on nearby infrastructure. However, it lacks a verb, which slightly reduces clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives (e.g., get_pipeline for a specific pipeline). The description does not specify context or conditions for use, leaving the agent without decision support.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
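For illustration, the sketch below fills in the six parameters with guesses; only the word 'nearby' hints that this is a point-radius query, and the 'layer' vocabulary is entirely assumed.

```python
# Hypothetical arguments for get_infrastructure; all values are guesses.
arguments = {
    "lat": 33.75,            # assumed: decimal degrees
    "lon": -84.39,
    "layer": "substations",  # assumed layer name
    "radius_km": 25,         # assumed search radius
    "min_voltage_kv": 115,   # assumed transmission-voltage filter
    "limit": 10,
}
```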
get_intelligence_index (Grade: B)
Real-time composite market health score.
No parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It states 'real-time' but does not explain update frequency, data sources, aggregation method, or any side effects. The tool appears read-only but lacks explicit confirmation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no superfluous words. Front-loads the key purpose and quality ('real-time composite'). Highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description is minimally adequate. It tells what the tool returns but omits details like score range, interpretation, or how it relates to other intelligence tools. Could be more informative.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage. The description adds meaning beyond the schema by explaining the value is a 'composite market health score' computed in real-time, which is useful context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Real-time composite market health score' clearly specifies the tool returns a single score representing market health. 'Composite' and 'market health' help differentiate from siblings like get_market_intel or get_grid_intelligence, though it doesn't explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives (e.g., get_market_intel). No mention of prerequisites or usage context, leaving the agent to infer applicability.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_market_intel (Grade: C)
Get market intelligence: supply/demand, pricing, vacancy.
| Name | Required | Description | Default |
|---|---|---|---|
| market | No | | |
| metric | No | | |
| period | No | | |
| compare_to | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must disclose behavioral traits. It only says 'Get market intelligence' without mentioning authentication, rate limits, whether it is read-only, or the nature of the data (historical/forecast). Minimal behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise (8 words) but under-specified. It provides a vague purpose without earning its place through additional useful information. Adequate in length but lacking substance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters, no annotations, and no output schema, the description is incomplete. It fails to explain parameters, return values, or any context needed for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 4 parameters with 0% description coverage. The description does not add any meaning to the parameters (market, metric, period, compare_to). No compensation for the lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool gets market intelligence and lists specific aspects (supply/demand, pricing, vacancy). It is clear about the resource but does not differentiate from sibling tools like get_energy_prices or get_grid_data, which may overlap.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. No context provided about prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_news (Grade: C)
Curated data center industry news from 40+ sources.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | No | | |
| source | No | | |
| date_to | No | | |
| category | No | | |
| date_from | No | | |
| min_relevance | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. The description only mentions 'curated' and '40+ sources' but fails to disclose important behavioral traits such as pagination limits, date range constraints, or source granularity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but not overly concise—it lacks structure and detail. A single sentence suffices for some tools, but here it fails to provide necessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 parameters and no annotations or output schema, the description is severely incomplete. It does not help the agent understand how to effectively call the tool or interpret results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds no meaning to the 7 input parameters. With 0% schema description coverage, the agent has no clues about what 'query', 'source', 'category', etc., represent or how they behave.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides curated data center industry news from multiple sources, differentiating it from other get_* tools focused on facilities, energy, etc. However, it could be more specific about the curation scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. Lacks context for filtering or querying, leaving the agent to infer usage from the tool name and parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
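A hypothetical call shows how much must be guessed: the date format (ISO 8601?) and the min_relevance scale (0-1 or 0-100?) are both undocumented.

```python
# Hypothetical arguments for get_news; formats and scales are assumed.
arguments = {
    "query": "hyperscale lease",
    "date_from": "2024-01-01",  # assumed: ISO 8601 dates
    "date_to": "2024-06-30",
    "limit": 5,
    "min_relevance": 0.7,       # assumed: a 0-1 scale
}
```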
get_pipeline (Grade: C)
Track 540+ projects, 369 GW construction pipeline.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| offset | No | | |
| status | No | | |
| country | No | | |
| operator | No | | |
| min_capacity_mw | No | | |
| expected_completion_before | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, and the description only says 'Track,' which implies a read operation but is not explicit. It does not disclose any other behavioral traits such as idempotency, rate limits, or effects on data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, making it concise, but it sacrifices clarity. It is front-loaded but lacks substance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 parameters, no output schema, and no annotations, the description is extremely incomplete. It fails to explain the return value, filtering, pagination, or any other essential context for an AI agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 7 parameters with 0% description coverage, and the tool description adds no meaning to these parameters. The agent receives no guidance on what each parameter does, making it difficult to use correctly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Track 540+ projects, 369 GW construction pipeline,' which gives a general idea of the resource but does not clearly specify that the tool retrieves a list or report. It is vague and does not differentiate from sibling tools like get_facility or search_facilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description does not mention any context, prerequisites, or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
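As a sketch of the guesswork involved, the arguments below assume an enum-style status, ISO country codes, and limit/offset pagination; none of this is confirmed by the definition.

```python
# Hypothetical arguments for get_pipeline; enum values and date format
# are assumptions.
arguments = {
    "status": "under_construction",              # assumed enum value
    "country": "US",                             # assumed ISO country code
    "min_capacity_mw": 50,
    "expected_completion_before": "2026-12-31",  # assumed date format
    "limit": 20,
    "offset": 0,                                 # assumed pagination pair
}
```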
get_renewable_energy (Grade: D)
Renewable energy: solar, wind, combined capacity.
| Name | Required | Description | Default |
|---|---|---|---|
| lat | No | | |
| lon | No | | |
| state | No | | |
| energy_type | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, and the description does not disclose behavioral traits such as rate limits, authentication needs, or data freshness. It only implies a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise, but at the cost of clarity. The description lacks structure and does not front-load key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters, no output schema, and no annotations, the description is grossly inadequate. It does not explain what the tool returns or how to use it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description does not explain any of the four parameters (lat, lon, state, energy_type). The mention of 'solar, wind, combined capacity' hints at energy_type but is not explicit.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Renewable energy: solar, wind, combined capacity' is vague and fails to specify a clear verb+resource. It does not distinguish the tool from siblings like 'get_grid_data' or 'get_energy_prices'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives. The description does not mention any preconditions or context for use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_tax_incentives (Grade: B)
Data center tax incentives by US state.
| Name | Required | Description | Default |
|---|---|---|---|
| state | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full behavioral burden. It implies a read operation but does not explicitly state side effects, authentication needs, rate limits, or what happens if no state is provided.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence. It could be slightly more structured (e.g., 'Returns a list of tax incentives for a given US state'), but it is not verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-parameter tool, the description covers the basic purpose, but lacks details on output format, error handling, and what happens when no state is provided. The absence of output schema shifts burden to description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The parameter 'state' has no schema description (0% coverage). The description adds that it is a US state, but does not specify format (e.g., abbreviation vs full name) or whether it is required (schema shows not required).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns data center tax incentives keyed by US state, using the verb 'get' implied by the name. It distinguishes itself from sibling tools like get_energy_prices (energy) or get_grid_data (grid).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives (e.g., get_market_intel, get_infrastructure). There are no usage conditions, prerequisites, or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_water_risk (Grade: C)
Water stress and drought risk for a location.
| Name | Required | Description | Default |
|---|---|---|---|
| lat | No | | |
| lon | No | | |
| state | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It only states the output concept without mentioning read-only nature, authentication needs, rate limits, or return format. Minimal transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise (one sentence), which is appropriate for a simple tool. However, it could be slightly more structured or elaborate. No wasted words, but brevity reduces clarity in other dimensions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters, no output schema, and no annotations, the description is incomplete. It lacks information about required parameters, default behavior, or what the response looks like. More context is needed for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% and the description adds no information about parameters. It does not explain what lat, lon, or state mean, how they are used, or any constraints. The agent must rely solely on parameter names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides water stress and drought risk for a location, which differentiates it from siblings like get_energy_prices or get_grid_data. However, it doesn't specify how the location is provided (lat/lon/state), relying on the schema. High-level purpose is clear but could be more explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives. There is no mention of contexts, prerequisites, or exclusions. The agent must infer usage from the name and schema alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_transactions (Grade: C)
M&A transactions — $324B+ tracked.
| Name | Required | Description | Default |
|---|---|---|---|
| buyer | No | | |
| limit | No | | |
| offset | No | | |
| region | No | | |
| seller | No | | |
| date_to | No | | |
| date_from | No | | |
| deal_type | No | | |
| max_value_usd | No | | |
| min_value_usd | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavior but only mentions tracked value. It does not reveal that the tool likely performs read-only listing, includes pagination (limit/offset), or any authorization or rate-limit requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-sentence description is extremely concise, front-loading the domain. However, it sacrifices necessary detail; still, it is efficient in length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (10 parameters, no annotations, no output schema), the description is insufficient. It omits information about filtering behavior, pagination, output format, and any business context beyond the total tracked value.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, and the description adds no explanation for any of the 10 parameters. Parameter names (e.g., buyer, region, date_from) are somewhat self-explanatory, but no extra meaning or constraints are provided beyond basic types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'M&A transactions — $324B+ tracked.' indicates the domain but does not explicitly state the listing action or how it differs from sibling tools like 'analyze_site' or 'get_market_intel'. The tool name implies listing, but the description adds little specificity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, nor are any exclusion conditions or best practices mentioned. The description offers no context for appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
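For illustration, the filter below assumes deal_type is an enum and that value bounds are absolute US dollars rather than millions; the definition confirms neither.

```python
# Hypothetical arguments for list_transactions; units and enums assumed.
arguments = {
    "deal_type": "acquisition",    # assumed enum value
    "min_value_usd": 500_000_000,  # assumed: absolute USD, not millions
    "date_from": "2023-01-01",     # assumed: ISO 8601
    "region": "North America",     # assumed region label
    "limit": 25,
    "offset": 0,
}
```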
search_facilities (Grade: C)
Search 20,000+ global data center facilities.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | | |
| tier | No | | |
| limit | No | | |
| query | No | | |
| state | No | | |
| offset | No | | |
| country | No | | |
| operator | No | | |
| max_capacity_mw | No | | |
| min_capacity_mw | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description must disclose behavior. It only says 'Search' without indicating whether it is read-only, what data is returned, pagination, or any constraints. Minimal behavioral context is added.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, so it is concise. However, it is too terse to be useful; additional key details (e.g., supported filters, result set) could be added without excessive length. Conciseness is not an asset when critical information is missing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 10 parameters, no output schema, and many sibling tools, the description is severely incomplete. It fails to explain result structure, parameter usage, or differentiation, making it insufficient for the agent to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description provides no information about any of the 10 parameters. Without details, the agent cannot understand how to use query, city, tier, or other fields effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it searches data center facilities, which is clear. However, it does not distinguish from sibling tools like get_facility (which likely returns a single facility) or analyze_site. The verb 'Search' is appropriate but lacks specificity on what is searched or returned.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus siblings such as get_facility, analyze_site, or compare_sites. The agent is left to infer usage context from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
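A hypothetical search makes the open questions visible: whether 'query' combines with the structured filters or replaces them, and whether 'tier' means an Uptime Institute tier, are both guesses.

```python
# Hypothetical arguments for search_facilities; filter semantics assumed.
arguments = {
    "query": "hyperscale",  # assumed: free text combined with filters
    "country": "US",
    "state": "VA",
    "tier": 3,              # assumed: Uptime Institute tier
    "min_capacity_mw": 20,
    "limit": 10,
    "offset": 0,
}
```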
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
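Before waiting on verification, you can sanity-check that the file is being served correctly. A minimal sketch, assuming the third-party `requests` library is installed and with "example.com" as a placeholder for your domain:

```python
import requests  # assumed available: pip install requests

# Fetch your own well-known file and confirm the maintainer email
# matches the one on your Glama account. "example.com" is a placeholder.
resp = requests.get("https://example.com/.well-known/glama.json", timeout=10)
resp.raise_for_status()
doc = resp.json()
print(doc["maintainers"][0]["email"])  # should print your Glama account email
```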
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.