Bluewatch
Server Details
Search live UK firefighter recruitment across 73 fire services, with on-call station data.
- Status
- Healthy
- Last Tested
- Transport
- Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 7 of 7 tools scored.
Each tool has a clearly distinct purpose: get vs list for guides, services, and opportunities; search_opportunities and nearby_stations are unique. No overlapping functionality.
All tool names follow a consistent verb_noun pattern with underscores (e.g., get_guide, list_services, nearby_stations), all lowercase.
7 tools is appropriate for a recruitment data API covering guides, services, opportunities, and stations. Not too few or too many.
Core read operations are covered; missing a list_stations tool to browse stations by service, but nearby_stations and get_service mitigate this. No dead ends for the intended use case.
Available Tools
7 tools
get_guide (A)
Get the full body of a Bluewatch guide by slug. Bodies are MDX/Markdown; pass them through to your reader.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | The guide slug. Use list_guides to discover slugs. | |
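The transport listed above is Streamable HTTP, which carries JSON-RPC 2.0 messages, so a call to get_guide is a `tools/call` request. A minimal sketch of what a client might send — the helper name and the slug value are illustrative, not part of this API:

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialise a hypothetical MCP tools/call request as JSON-RPC 2.0."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# "application-process" is a plausible slug; real slugs come from list_guides.
payload = build_tool_call("get_guide", {"slug": "application-process"})
```

The returned body is MDX/Markdown, so the response content should be passed through to the reader unmodified, as the description advises.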
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries burden. It implies read-only operation and specifies output format, but lacks details on auth, rate limits, or side effects. Adequate for a simple get.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences with no wasted words. Front-loaded with key action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers most aspects: purpose, calling method, output format. Lacks some usage guidance relative to siblings, but for a simple tool with good schema, it's nearly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so schema already documents slug. Description adds no extra param info beyond reinforcing purpose. Baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb 'get', resource 'guide', and method 'by slug'. Distinguishes from siblings like list_guides and get_opportunity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Tells agent to pass the MDX/Markdown output through to reader, but does not explicitly contrast with siblings or provide when/when-not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_opportunity (A)
Get full detail for a single live opportunity by slug (e.g. "wholetime-firefighter-trainee" or "on-call-firefighter-axminster"). Opportunity slugs are globally unique; use search_opportunities to discover them.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | The opportunity slug. Globally unique. Use search_opportunities to discover slugs. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral transparency. The verb 'Get' suggests a non-destructive read operation, which is reasonable. However, it does not explicitly state read-only behavior or any side effects, leaving some ambiguity. Given the simplicity, it is minimally adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loads the core action, and every sentence contributes essential information. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema, the description covers the essential use case: retrieving a specific opportunity by slug, with guidance on slug discovery. It does not describe the output format; given the tool's simplicity and its siblings, this is acceptable, though an example return value would strengthen it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds value by providing concrete examples of slug formats (e.g., 'wholetime-firefighter-trainee') and reinforcing that slugs are globally unique, which aids the agent in understanding parameter usage beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get full detail for a single live opportunity by slug', specifying the verb, resource, and identifier. It provides concrete examples of slugs and explicitly differentiates from search_opportunities, making the tool's purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description advises using search_opportunities to discover slugs, giving clear context for when to use this tool. It implies the tool is for retrieving details of a specific opportunity, distinct from searching. However, it does not explicitly contrast with other get tools like get_guide or get_service, though those are different resource types.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_service (A)
Get full metadata for a single fire service by slug (e.g. "london-fire-brigade"). Returns the service record plus its current live opportunities.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | The service slug. Use list_services to discover slugs. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return value ('service record plus live opportunities') and implies read-only via 'Get', but does not mention authentication, rate limits, side effects, or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with action and resource, no fluff. Every sentence adds value: first defines purpose, second specifies outputs.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with one parameter and no output schema, the description fully informs the agent of input, output, and how to discover slugs. No gaps given complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters with a clear description for 'slug'. The description adds an example and a hint to use 'list_services', but this is incremental. With full schema coverage, baseline is 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the verb 'Get', resource 'full metadata for a single fire service', and identifier 'by slug' with an example. It distinguishes from sibling 'list_services' by specifying 'single' and states additional return value 'current live opportunities'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance. The description implies using this after 'list_services' to get details on a specific service, but does not contrast with other siblings like 'search_opportunities' or 'nearby_stations'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_guides (A)
List all editorial guides Bluewatch publishes (application process, fitness test, on-call vs wholetime, etc.). Returns slugs you can pass to get_guide.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must cover behavior. Lists guides with no parameters, but lacks details on pagination, permissions, or side effects. Adequate for a simple read-only list.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with purpose and usage. No superfluous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a no-parameter list tool without output schema, description covers purpose, content type, and relationship to sibling tool (get_guide). Fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters (0 params, 100% schema coverage). Description adds context by specifying what is listed (editorial guides) and return value (slugs), beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States 'List all editorial guides Bluewatch publishes' and mentions 'Returns slugs you can pass to get_guide', clearly distinguishing from siblings like get_guide.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implicitly indicates usage: list guides to obtain slugs for get_guide. Clear context but no explicit when-not-to-use or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_services (A)
List UK fire and rescue services and other firefighter employers Bluewatch tracks. Filter by category or region. Returns slugs you can pass to get_service or search_opportunities.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results to return (1–50). | 20 |
| offset | No | Result offset. | 0 |
| region | No | UK region slug. | |
| service_category | No | Service category. One of: local_authority_frs, airport_arff, defence_fire, industrial, private_contractor, overseas_transfer, other. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description indicates read-only list operation and what is returned. Could explicitly state it is non-destructive, but 'List' implies safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no fluff. First states purpose, second adds value about return value. Efficient and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 4 parameters well-described in schema and no output schema, description explains how output connects to other tools. Lacks details on default behavior (e.g., all categories if not specified) but still adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with parameter descriptions. Tool description adds grouping ('filter by category or region') and explains output usage (slugs for other tools).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it lists UK fire services with filtering options. Distinguishes from siblings like 'get_service' (specific service) and 'search_opportunities' (opportunities).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States when to use (to list services) and mentions return slugs for use with other tools. Lacks explicit exclusion or when-not-to-use context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nearby_stations (A)
Find UK fire stations near a postcode, ranked by drive time via OSRM. Returns each station with its service, drive duration and distance, on-call signal, and recruitment status. Use this to answer "which stations are within 5 minutes of postcode X" — the canonical question for on-call eligibility, since on-call crew must reach the station within 5 minutes of being alerted.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max stations to return (1–25). | 10 |
| postcode | Yes | UK postcode, e.g. "SW1A 1AA" or "M1 1AE". Whitespace and case are normalised. | |
| max_drive_seconds | No | Cap drive duration in seconds. On-call services typically require 300 (5 minutes). | 420 (7 minutes) |
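The postcode row says whitespace and case are normalised server-side, and the description centres on the 5-minute on-call threshold. Both behaviours can be sketched client-side; the function names are illustrative, not part of the API, and the normalisation shown is an assumption about how the server likely treats input:

```python
def normalise_postcode(raw: str) -> str:
    """Uppercase and collapse a UK postcode to the conventional form.

    Assumes the last three characters are the inward code, which holds
    for all standard UK postcodes.
    """
    compact = raw.replace(" ", "").upper()
    return f"{compact[:-3]} {compact[-3:]}"

def on_call_eligible(stations: list[dict], max_seconds: int = 300) -> list[dict]:
    """Keep stations reachable within the on-call threshold (default 5 min).

    Each station dict is assumed to carry a drive_seconds field, mirroring
    the drive-duration data the tool returns.
    """
    return [s for s in stations if s["drive_seconds"] <= max_seconds]
```

Passing max_drive_seconds=300 in the call itself achieves the same filtering server-side, which is cheaper than post-filtering the default 420-second results.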
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It mentions ranking via OSRM and return fields, but does not disclose read-only status, rate limits, or other behavioral traits. Adequate but could be more explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is brief, front-loaded with purpose, then return fields, then usage guidance. Every sentence adds value, no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter tool with no output schema, description explains return fields and typical usage. Complete for its complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions. Description adds value by explaining the 5-minute rule and default durations, enhancing schema information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Find', resource 'UK fire stations', ranking by drive time via OSRM, and lists return fields. It distinguishes from sibling tools which are about guides, opportunities, and services.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states the use case: 'which stations are within 5 minutes of postcode X' for on-call eligibility. Does not mention alternatives or exclusions, but siblings are non-competing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_opportunities (A)
Search live UK firefighter recruitment opportunities. Filter by role type, service category, region, or service slug. Results are live data refreshed every 6 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results to return (1–50). | 20 |
| since | No | ISO 8601 timestamp. Only return opportunities first seen since this point. | |
| offset | No | Result offset for pagination. | 0 |
| region | No | UK region slug, e.g. "london", "north-west", "scotland", "wales", "northern-ireland". | |
| role_type | No | Operational role type. One of: wholetime, on_call, transferee, control, specialist, officer, cadet, volunteer, operational_hq. | |
| service_slug | No | Restrict to a single service (e.g. "london-fire-brigade"). Use list_services to discover slugs. | |
| service_category | No | Service category. One of: local_authority_frs, airport_arff, defence_fire, industrial, private_contractor, overseas_transfer, other. | |
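With limit capped at 50 and offset-based pagination, collecting every match takes a loop. A hypothetical sketch, where call_tool is a stand-in for whatever MCP client invocation you use (it is assumed to return the page as a list):

```python
def fetch_all_opportunities(call_tool, filters: dict, page_size: int = 50):
    """Yield every matching opportunity, paging with limit/offset.

    Stops when a page comes back shorter than page_size, which signals
    the last page under offset-based pagination.
    """
    offset = 0
    while True:
        page = call_tool("search_opportunities",
                         {**filters, "limit": page_size, "offset": offset})
        yield from page
        if len(page) < page_size:
            break
        offset += page_size
```

Since results refresh every 6 hours, the `since` parameter is the cheaper option for incremental polling: pass the timestamp of your last sweep instead of re-paging everything.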
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description provides some behavioral context: data is 'live' but 'refreshed every 6 hours.' This adds value beyond the schema but does not disclose mutation implications or whether the operation is read-only, which is acceptable for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first states purpose, second adds filter options and data freshness. No unnecessary words, well front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers core functionality and data freshness. It does not describe the return format (list of opportunities), but given the tool name and no output schema, it is sufficient for an agent to infer the output. Slightly incomplete without mentioning pagination implications.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers 100% of parameters with descriptions. The description repeats filter fields (role type, etc.) but adds no new semantics beyond what the schema already provides, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Search live UK firefighter recruitment opportunities,' which is a specific verb-resource pair. It distinguishes from sibling tools like get_opportunity (single) and list_services (services list) by focusing on searching opportunities with filters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists filter dimensions (role type, service category, region, service slug) and mentions data freshness, indicating when to use (searching/filtering opportunities). It does not explicitly state when not to use or name alternatives, but the purpose is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!