sustainability-auditor
Server Details
Website carbon footprint auditor. CO2/page, grade A–F, green hosting check, and recommendations.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score 4.2/5, with 4 of 4 tools scored.
Each tool has a clearly distinct purpose with no overlap: audit_website performs a new audit, get_benchmark_stats provides aggregated statistics, get_domain_history shows historical trends, and register_api_key handles API registration. The descriptions clearly differentiate their functions, eliminating any ambiguity.
All tool names follow a consistent verb_noun pattern (audit_website, get_benchmark_stats, get_domain_history, register_api_key), using snake_case throughout. The verbs (audit, get, register) are appropriate and predictable, making the set easy to navigate.
With 4 tools, this server is well-scoped for its sustainability auditing domain. Each tool serves a distinct and necessary function—auditing, benchmarking, historical analysis, and API setup—without being overly sparse or bloated, fitting typical use cases efficiently.
The tool set covers core workflows: performing audits, accessing benchmarks, and reviewing history, with API registration for access. The only minor gap is the absence of update or delete operations for audits or keys, but agents can work around this since the domain centers on read-only analysis and one-time setup.
Available Tools
4 tools

audit_website
Audit a website for its digital carbon footprint.
Returns sustainability score (A-F), CO2 grams per page view,
green hosting status, page weight, and recommendations.
Results cached 24h. New audits take ~45-60 seconds.
Data source: ClimateUX (climateux.net).

| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
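Since this is an MCP server, an agent reaches the tool through a standard `tools/call` JSON-RPC request. A minimal sketch of building that payload, assuming only the `url` argument listed above (the request wrapper is generic MCP, not specific to this server):

```python
import json

def build_audit_call(url: str, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call payload for audit_website.

    The single required argument is the URL of the page to audit.
    """
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "audit_website",
            "arguments": {"url": url},
        },
    }
    return json.dumps(payload)

# New audits take ~45-60 s and results are cached for 24 h,
# so agents should expect a slow first call per URL.
request_body = build_audit_call("https://example.com")
```

The same wrapper applies to the other three tools; only `name` and `arguments` change.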
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it returns specific metrics (score, CO2, hosting status, etc.), mentions caching ('Results cached 24h'), performance ('~45-60 seconds'), and data source ('ClimateUX'). This covers operational aspects like timing and data freshness beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose. Each sentence adds value: the first states what it does, the second lists outputs, and the third covers behavioral details (caching, timing, source). There is no wasted text, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (involving external data and performance considerations) and the presence of an output schema (which handles return values), the description is complete. It covers purpose, outputs, caching, timing, and data source, addressing key contextual needs without redundancy. No annotations are provided, but the description fills the gap adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter with 0% description coverage, so the description must compensate. It implies the parameter is a website URL by context ('Audit a website'), but does not explicitly define 'url' or add format details (e.g., requiring HTTP/HTTPS). Since there's only one parameter, the baseline is high, but some semantic clarification is missing.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Audit') and resource ('a website'), specifying what it measures ('digital carbon footprint'). It distinguishes itself from sibling tools like 'get_benchmark_stats' or 'get_domain_history' by focusing on comprehensive sustainability assessment rather than historical data or statistics retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through the mention of 'digital carbon footprint' and 'sustainability score', suggesting it's for environmental impact analysis. However, it does not explicitly state when to use this tool versus alternatives like 'get_benchmark_stats' (which might provide related metrics) or provide exclusions (e.g., not for non-web resources).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_benchmark_stats
Get aggregate benchmark statistics from ClimateUX's database of 500+ audited websites.
Includes average CO2 per page view, average sustainability score, green hosting rate.
No API key required. Data source: ClimateUX (climateux.net).

No parameters.
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds context by specifying the data source ('ClimateUX'), scope ('500+ audited websites'), and key metrics included ('average CO2 per page view, average sustainability score, green hosting rate'), though it lacks details on rate limits, response format, or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by specific metrics and operational details. Every sentence adds essential information (e.g., data source, no API key requirement) with zero waste, making it highly efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no annotations, but with an output schema), the description is mostly complete. It covers purpose, data source, metrics, and access requirements, though it could benefit from mentioning the output format or data update frequency to fully compensate for the lack of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately adds value by explaining what data is retrieved (e.g., specific metrics like CO2 per page view) without redundant parameter details, earning a baseline 4 for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get aggregate benchmark statistics') and resource ('from ClimateUX's database of 500+ audited websites'), distinguishing it from sibling tools like 'audit_website' or 'get_domain_history' by focusing on aggregated statistics rather than individual audits or historical data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage ('No API key required') and implies this is for accessing pre-aggregated benchmark data, but does not explicitly state when to use this tool versus alternatives like 'get_domain_history' or 'audit_website', nor does it mention any exclusions or prerequisites beyond the data source.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_domain_history
Get historical carbon audit data and trend for a domain.
Returns audits ordered newest first, plus trend: improving / stable / declining.
Data source: ClimateUX (climateux.net).

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| domain | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
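The trend field classifies a domain's direction over time, but the tool only documents the three labels. A hypothetical sketch of such a classifier, assuming audits are compared by CO2 per page view, newest first, against a relative-change threshold (both the comparison and the 5% threshold are illustrative assumptions, not ClimateUX's actual algorithm):

```python
def classify_trend(co2_history, threshold=0.05):
    """Classify a domain's carbon trend from its audit history.

    co2_history: CO2 grams per page view, ordered newest first
    (matching the tool's output ordering). A relative change
    beyond `threshold` sets the trend; otherwise it is stable.
    """
    if len(co2_history) < 2:
        return "stable"  # one audit is not a trend
    newest, oldest = co2_history[0], co2_history[-1]
    if oldest == 0:
        return "stable"  # avoid division by zero
    change = (newest - oldest) / oldest
    if change <= -threshold:
        return "improving"   # emissions fell over time
    if change >= threshold:
        return "declining"   # emissions rose over time
    return "stable"
```

An agent consuming the tool's output should rely on the returned `trend` label rather than recomputing it; this sketch only shows one plausible interpretation of the three categories.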
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it returns ordered audits (newest first) and a trend classification, and specifies the data source. However, it lacks details on permissions, rate limits, error handling, or pagination (despite a 'limit' parameter), which are important for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by return details and data source. All three sentences add value: the first states the action, the second specifies output ordering and trend, and the third identifies the source. No wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no annotations, but has output schema), the description is reasonably complete. It covers purpose, output behavior, and data source. The output schema likely details return values, so the description need not explain them. However, it could better address parameter usage and behavioral constraints like error cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It explains that 'domain' is for retrieving historical data, adding context beyond the schema's generic 'Domain' title. However, it does not clarify the 'limit' parameter's purpose or default behavior. With 2 parameters and partial semantic enhancement, this scores above baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get historical carbon audit data and trend'), identifies the resource ('for a domain'), and distinguishes from siblings by focusing on historical data rather than current audits (audit_website), benchmarks (get_benchmark_stats), or configuration (register_api_key). It includes the data source, which further clarifies scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for historical data retrieval but does not explicitly state when to use this tool versus alternatives like audit_website (which likely provides current audits) or get_benchmark_stats. No exclusions or prerequisites are mentioned, leaving some ambiguity about tool selection in context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_api_key
Register for a free ClimateUX API key (10 audits/month, no credit card). Idempotent — same email always returns the same key.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | | |
| email | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
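The documented idempotency (same email always returns the same key) can be illustrated with a toy server-side sketch. The key prefix and derivation here are invented for illustration; only the same-input-same-output property mirrors the tool's contract:

```python
import hashlib

_registry = {}  # email -> previously issued key

def register_api_key(email, name=None):
    """Return an API key for the email, creating one on first call.

    Repeated calls with the same email return the same key, which is
    what makes the operation safe to retry. The optional name does
    not affect the key. Key format is hypothetical.
    """
    key = _registry.get(email)
    if key is None:
        # Deterministic derivation stands in for "issue once, reuse".
        key = "cux_" + hashlib.sha256(email.encode()).hexdigest()[:24]
        _registry[email] = key
    return key
```

For an agent, idempotency means a lost response can be recovered simply by calling the tool again with the same email.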
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well: it discloses the free tier limitations (10 audits/month, no credit card), idempotent behavior (same email returns same key), and implies this is a registration/write operation. It doesn't mention rate limits, authentication needs, or error conditions, but provides solid behavioral context for a registration tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (two sentences) with zero waste. The first sentence states purpose and key constraints, the second explains idempotency. Every word earns its place, and information is front-loaded appropriately for a simple registration tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a registration tool with 2 parameters, 0% schema coverage, no annotations, but with an output schema (which handles return values), the description provides good context. It covers purpose, limitations, and idempotency. For a simple registration endpoint, this is reasonably complete, though it could benefit from mentioning what the API key enables or error cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions 'email' as the key parameter for registration and idempotency, which aligns with the required 'email' parameter in the schema. However, it doesn't explain the optional 'name' parameter or provide format/validation details for either parameter. The description adds some meaning but doesn't fully compensate for the schema coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Register for a free ClimateUX API key' with specific details about the key's limitations (10 audits/month, no credit card). It distinguishes from siblings like 'audit_website' or 'get_benchmark_stats' by focusing on account registration rather than data retrieval. However, it doesn't explicitly contrast with siblings beyond the different domain of operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context: 'Register for a free ClimateUX API key' suggests this should be used when a user needs an API key for the service. It mentions idempotency which helps understand when repeated calls are safe. However, there's no explicit guidance on when to use this versus alternatives (though siblings are unrelated) or any prerequisites beyond providing an email.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
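Before publishing, the claim file can be sanity-checked locally for the two things the instructions above call for: the `$schema` value and at least one maintainer email. A small validation sketch (the checks are inferred from the structure shown here, not from a published spec):

```python
import json

def validate_claim_file(text: str) -> list:
    """Return a list of problems with a /.well-known/glama.json body."""
    errors = []
    try:
        data = json.loads(text)
    except ValueError:
        return ["not valid JSON"]
    if data.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        errors.append("missing or wrong $schema")
    maintainers = data.get("maintainers")
    # At least one maintainer entry must carry a plausible email.
    if not isinstance(maintainers, list) or not any(
        isinstance(m, dict) and "@" in str(m.get("email", ""))
        for m in maintainers
    ):
        errors.append("maintainers must include at least one email")
    return errors
```

An empty list means the file is structurally ready to serve; Glama's own verifier additionally matches the email against your account.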
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.