
Clara — Personal Clean Air Planner

Server Details

UK personal air quality advice and daily exposure assessment. Pairs with Hermes for live data.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server
Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.2/5 across 2 of 2 tools scored.

Server Coherence: A

Disambiguation: 5/5

The two tools have clearly distinct purposes: contextual_advice provides personalized air quality advice based on context, while exposure_assessment estimates daily pollution exposure using time-weighted modelling. No overlap in functionality.

Naming Consistency: 5/5

Both tool names follow a consistent noun_noun pattern with underscores: contextual_advice and exposure_assessment. The naming is predictable and clear.

Tool Count: 3/5

With only two tools, the server feels somewhat sparse for a 'Personal Clean Air Planner'. While the tools are well-scoped, additional tools (e.g., for historical data or indoor air quality tips) could enhance utility without bloating.

Completeness: 4/5

The tool set covers personalized advice and exposure assessment, which are core to the server's purpose. However, it lacks a tool for fetching raw air quality data independently (relying on another server), and could benefit from a tool to manage user preferences or settings.

Available Tools

2 tools
contextual_advice: A

Personalised air quality advice for a UK location and a specific user context.

Use this tool whenever the user asks an air-quality question that depends on who they are or what they're about to do: e.g. asthma, pregnancy, school-age child, gas cooker at home, tube commute, outdoor exercise. It composes location-specific pollution with the user's personal context to produce evidence-based advice — far more targeted than a generic "high pollution day" handout.

Composable with Hermes: pass pm25/no2 from Hermes get_current_aq for advice based on live readings rather than annual average estimates.

Returns structured advice with a plain-English summary, health context, and local intervention information. Present the 'summary' to users first.

Args:
- postcode: UK postcode (e.g. "SE17 1RL"). Provides coords + LAEI pollution.
- latitude: Latitude for coordinate-based lookup.
- longitude: Longitude for coordinate-based lookup.
- pm25: PM2.5 concentration in ug/m3. Overrides location-based estimate.
- no2: NO2 concentration in ug/m3. Overrides location-based estimate.
- setting: Context — residential, school, workplace, outdoor_exercise, commute.
- has_gas_cooker: Whether the person has a gas cooker (affects indoor advice).
- commute_mode: If setting is commute — walk, cycle, bus, car, train, tube.
- has_indoor_sources: Indoor pollution sources (smoking, woodstove).
- audience: Target audience — general, children, elderly, respiratory, pregnant.

Parameters (JSON Schema):
- no2 (optional)
- pm25 (optional)
- setting (optional; default: residential)
- audience (optional; default: general)
- latitude (optional)
- postcode (optional)
- longitude (optional)
- commute_mode (optional)
- has_gas_cooker (optional)
- has_indoor_sources (optional)
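As a hypothetical illustration of the parameter rules above (names mirror the Args list; the values and the `validate_advice_args` helper are invented for this sketch, not part of the server):

```python
# Hypothetical contextual_advice payload. Parameter names follow the Args
# list above; the values and the pre-flight validator are illustrative only.
advice_args = {
    "postcode": "SE17 1RL",
    "setting": "commute",
    "commute_mode": "tube",
    "audience": "respiratory",
    "pm25": 12.4,  # e.g. a live reading forwarded from Hermes get_current_aq
}

def validate_advice_args(args: dict) -> list[str]:
    """Client-side checks implied by the description: a location is needed
    (postcode or latitude+longitude), and a commute setting is most useful
    when commute_mode is supplied."""
    problems = []
    has_coords = "latitude" in args and "longitude" in args
    if "postcode" not in args and not has_coords:
        problems.append("provide a postcode or a latitude/longitude pair")
    if args.get("setting") == "commute" and "commute_mode" not in args:
        problems.append("commute setting without commute_mode")
    return problems

print(validate_advice_args(advice_args))  # prints []
```

The overrides behave as the Args list describes: if `pm25`/`no2` are present, they take precedence over the location-derived annual estimates.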
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool returns structured advice with a summary, health context, and intervention info. However, it omits behavioral details such as error handling (e.g., invalid postcode), dependencies (requires either postcode or lat/lng), and any potential side effects or limitations (e.g., UK-only).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear paragraphs: purpose, usage guidelines, composability, return value, and parameter list. It is front-loaded with the main purpose. While it is somewhat lengthy, each sentence adds value, and the structure aids readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of annotations and output schema, the description is fairly complete: it explains the tool's purpose, when to use, all parameters, and the structure of the response. It also mentions composability with another tool. Minor gaps include missing error behavior and explicit differentiation from the sibling tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates by explaining each parameter: e.g., 'pm25: PM2.5 concentration in ug/m3. Overrides location-based estimate.' It also lists allowed values for 'setting', 'audience', and 'commute_mode'. This adds significant meaning beyond the raw schema, though it does not explain defaults or constraints like mutual exclusivity of location parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool's function: providing personalized air quality advice for a UK location and user context. It specifies numerous concrete scenarios (asthma, pregnancy, school-age child, etc.) and contrasts with generic handouts, effectively distinguishing it from the sibling tool 'exposure_assessment'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool (when the question depends on who the user is or what they are doing) and gives examples. It also mentions composability with Hermes for live readings, providing integration context. However, it does not explicitly state when not to use it or how it differs from 'exposure_assessment' beyond the purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

exposure_assessment: A

Estimate personal daily air pollution exposure across home, work, and commute.

Uses time-weighted modelling across environments: home (with indoor source adjustments), work, and commute (with route-based pollution and transport mode factors). Based on annual average pollution estimates, not live readings.
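A minimal sketch of what "time-weighted modelling across environments" typically means (the function and numbers are illustrative assumptions, not Clara's actual model):

```python
# Illustrative time-weighted exposure: average each environment's
# concentration weighted by the hours spent there.
def time_weighted_exposure(segments):
    """segments: list of (pm25_ugm3, hours) tuples covering the day."""
    total_hours = sum(h for _, h in segments)
    return sum(c * h for c, h in segments) / total_hours

day = [
    (8.0, 14),   # home, with an indoor-source adjustment already applied
    (10.0, 8),   # workplace
    (35.0, 2),   # tube commute: elevated particulate levels
]
print(round(time_weighted_exposure(day), 2))  # prints 10.92
```

The transport-mode and indoor-source factors the description mentions would, under this reading, adjust the per-segment concentrations before the weighting step.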

Args:
- home_postcode: Home location UK postcode (e.g. "SE17 1RL"). Required.
- work_postcode: Work/school location postcode. Omit if not commuting.
- transport_mode: Commute mode — walk, cycle, bus, car, train, tube.
- work_frequency: How often you commute — most_days, some_days, less_often, never.
- commute_hour: Hour of commute (0-23) for time-of-day pollution adjustment.
- cooker_type: Home cooker type — gas, electric, induction, unknown.
- smoking_at_home: Whether anyone smokes indoors (major PM2.5 source).
- tube_line: London Underground line for tube commuters (e.g. "victoria").

Parameters (JSON Schema):
- tube_line (optional)
- cooker_type (optional; default: unknown)
- commute_hour (optional)
- home_postcode (required)
- work_postcode (optional)
- transport_mode (optional; default: walk)
- work_frequency (optional; default: most_days)
- smoking_at_home (optional)
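A hypothetical payload showing how the schema's required field and defaults interact (names follow the Args list; the values and the `with_defaults` helper are invented for this sketch):

```python
# Hypothetical exposure_assessment payload; values are illustrative only.
exposure_args = {
    "home_postcode": "SE17 1RL",  # the only required field
    "work_postcode": "EC1A 1BB",
    "transport_mode": "tube",
    "tube_line": "victoria",      # only meaningful when transport_mode is "tube"
    "commute_hour": 8,
    "cooker_type": "gas",
    "smoking_at_home": False,
}

# Defaults from the parameter table, applied when a field is omitted.
DEFAULTS = {
    "transport_mode": "walk",
    "work_frequency": "most_days",
    "cooker_type": "unknown",
}

def with_defaults(args: dict) -> dict:
    if "home_postcode" not in args:
        raise ValueError("home_postcode is required")
    return {**DEFAULTS, **args}

print(with_defaults({"home_postcode": "SE17 1RL"})["transport_mode"])  # prints walk
```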
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses the data source (annual averages, not live) and mentions smoking as a major PM2.5 source. It does not detail output format or authentication needs, but provides adequate transparency for a read-only estimation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is concise: a clear overview sentence followed by a bullet list of parameters with explanations. Every sentence is informative, no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description adequately explains the model and inputs. It does not describe the return format (e.g., numeric value, unit) or error conditions, but overall is complete for an estimation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description provides detailed explanations for all 8 parameters, including example values, defaults, and context (e.g., home postcode required, smoking as major PM2.5 source). This adds significant value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool estimates personal daily air pollution exposure across home, work, and commute using time-weighted modelling. The verb 'estimate' and resource 'personal daily air pollution exposure' are specific and distinct from the sibling tool 'contextual_advice'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains the tool is based on annual average pollution estimates, not live readings, implying it is for long-term exposure assessment. However, it does not explicitly state when to use this tool rather than its alternatives, nor does it call out cases where it should not be used.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
