
cancer-support-hub

Server Details

Search 585+ free cancer support resources across Washington State in 7 languages

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 4 of 4 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: get_resource retrieves a single resource by ID, list_categories provides category metadata, list_counties lists geographic data, and search_resources performs filtered searches. The descriptions explicitly differentiate their functions, eliminating any ambiguity.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case (e.g., get_resource, list_categories, search_resources). This predictable naming convention makes the tool set easy to navigate and understand at a glance.
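The verb_noun snake_case convention can be checked mechanically. A minimal sketch, using the tool names listed on this page; the regex is an assumption about what counts as verb_noun, not an MCP requirement:

```python
import re

# Tool names taken from this server's listing.
TOOLS = ["get_resource", "list_categories", "list_counties", "search_resources"]

# snake_case verb_noun: a lowercase verb, an underscore, then one or more
# lowercase segments (assumed pattern for illustration).
VERB_NOUN = re.compile(r"^[a-z]+(_[a-z]+)+$")

for name in TOOLS:
    assert VERB_NOUN.match(name), f"{name} breaks the verb_noun convention"

print("all tool names follow verb_noun snake_case")
```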

Tool Count: 5/5

With 4 tools, this server is well-scoped for its purpose of providing cancer support resource information in Washington State. Each tool serves a specific, necessary function (retrieval, listing, and searching), and the count avoids bloat while covering essential operations.

Completeness: 4/5

The tool set covers core operations for exploring and accessing cancer support resources, including listing categories/counties, searching with filters, and getting detailed resource data. A minor gap exists in lacking update or management tools, but this is reasonable for a read-only informational server focused on resource discovery.

Available Tools

4 tools
get_resource: A

Get full details for a single cancer support resource by ID, including locations, hours, and match criteria.

Parameters (JSON Schema)

- id (required): Resource ID
- language (optional): Language for returned text. Default: en
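
Since the server speaks Streamable HTTP, a get_resource invocation travels as a JSON-RPC 2.0 tools/call request. A minimal sketch of the payload; "res-123" is a made-up placeholder ID, not a resource known to exist on this server:

```python
import json

# JSON-RPC 2.0 envelope used by MCP for tool invocation.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_resource",
        # "res-123" is a hypothetical resource ID for illustration.
        "arguments": {"id": "res-123", "language": "en"},
    },
}

print(json.dumps(request, indent=2))
```
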
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the tool as a read operation ('Get full details'), which is clear, but lacks additional behavioral context such as error handling (e.g., what happens if the ID is invalid), authentication needs, or rate limits. The description does not contradict any annotations, but it could be more informative.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose, resource, and key details (locations, hours, match criteria). It is front-loaded with the main action and includes no unnecessary information, making it highly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple retrieval with 2 parameters), no annotations, and no output schema, the description is adequate but has gaps. It covers the purpose and scope well but lacks details on behavior (e.g., error cases) and output format, which would be helpful for an agent to use it correctly without structured output information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('id' and 'language') with descriptions and enum values. The description adds no specific parameter details beyond what the schema provides, such as format examples for 'id' or usage notes for 'language', resulting in a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get full details'), the resource ('a single cancer support resource'), and the scope ('by ID, including locations, hours, and match criteria'). It distinguishes from siblings like 'list_categories', 'list_counties', and 'search_resources' by focusing on detailed retrieval of a single item rather than listing or searching multiple items.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'by ID', suggesting this tool is for retrieving details when a specific resource ID is known. However, it does not explicitly state when to use alternatives like 'search_resources' (e.g., when ID is unknown) or provide exclusions, which keeps it from a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_categories: A

List all 11 support categories with names, descriptions, and resource counts. Use this to understand what types of resources are available before searching.

Parameters (JSON Schema)

- language (optional): Language for category names. Default: en
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool lists categories with specific attributes, which is helpful, but lacks details on potential limitations like rate limits, authentication needs, or error handling. The description doesn't contradict any annotations, but it could provide more behavioral context given the absence of annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence and follows with a usage guideline, making it efficient and easy to parse. Both sentences earn their place by providing essential information without redundancy, resulting in a highly concise and well-structured text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 optional parameter, no output schema, no annotations), the description is mostly complete. It explains the purpose, usage, and output details, and already notes the fixed count of 11 categories. It could still mention behavioral traits such as result ordering, but overall it's sufficient for a simple list operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'language' parameter well-documented in the schema. The description doesn't add any parameter-specific information beyond what the schema provides, such as explaining the impact of language choice on category names. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate with extra semantic details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List all 11 support categories'), the resource ('support categories'), and the output details ('with names, descriptions, and resource counts'). It distinguishes from sibling tools like 'get_resource', 'list_counties', and 'search_resources' by focusing on categories rather than individual resources, counties, or search functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Use this to understand what types of resources are available before searching'), which implies it's for initial exploration. However, it doesn't explicitly state when not to use it or name alternatives among sibling tools, such as using 'search_resources' for specific queries instead.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_counties: A

List all 39 Washington State counties. Use this to find the correct county name for filtering search results.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It implies this is a read-only operation (listing) but doesn't specify behavioral details like response format, ordering, or potential limitations. The description adds some context about the fixed dataset size (39 counties) but lacks comprehensive behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each serve a distinct purpose: stating what the tool does and when to use it. There is no wasted language or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema, no annotations), the description provides adequate context about purpose and usage. However, without annotations or output schema, it could benefit from more behavioral details like response format, though this is less critical for a straightforward list tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has no parameters, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, earning the baseline score of 4 for zero-parameter tools.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('all 39 Washington State counties'), making the purpose specific and unambiguous. It distinguishes from siblings by focusing on a specific geographic dataset rather than general resources or categories.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'to find the correct county name for filtering search results.' This provides clear context and distinguishes it from sibling tools that likely serve different purposes like general searching or listing categories.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_resources: A

Search cancer support resources in Washington State. Filter by category (housing, food, mental-health, financial, legal, care, transport, children, rights, programs, education), county, tier, or free-text query. Returns matching resources with name, description, phone, URL, eligibility, and cost.

Parameters (JSON Schema)

- query (optional): Free-text search across name, description, tags, keywords
- category (optional): Filter by support category
- county (optional): Washington State county name (e.g., King, Spokane, Pierce)
- tier (optional): cancer = cancer-specific programs, community = general programs that serve cancer patients
- limit (optional): Max results (1-50). Default: 10
- language (optional): Language for returned text. Default: en
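
Combining these filters, a search_resources call might be assembled as in the sketch below. The helper function and argument values are illustrative assumptions, not part of the server's API; the client-side clamp on limit mirrors the documented 1-50 range:

```python
import json

def build_search_arguments(query=None, category=None, county=None,
                           tier=None, limit=10, language="en"):
    """Assemble arguments for a search_resources call, dropping unset
    filters and clamping limit to the documented 1-50 range."""
    limit = max(1, min(50, limit))
    args = {"limit": limit, "language": language}
    for key, value in (("query", query), ("category", category),
                       ("county", county), ("tier", tier)):
        if value is not None:
            args[key] = value
    return args

# Example: food resources in King County from cancer-specific programs.
args = build_search_arguments(query="food", category="food",
                              county="King", tier="cancer", limit=5)
print(json.dumps(args, sort_keys=True))
```
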
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adequately describes the search functionality and return format ('Returns matching resources with name, description, phone, URL, eligibility, and cost'), but doesn't mention important behavioral aspects like pagination, rate limits, authentication requirements, error conditions, or whether this is a read-only operation. The description covers basic functionality but lacks operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly sized and front-loaded with the core purpose in the first clause. Every sentence earns its place: the first establishes scope and action, the second lists filter options, and the third describes return format. There's zero waste or redundancy, making it highly efficient for agent comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (6 parameters, search functionality) and no output schema, the description does well by specifying the return format fields. However, with no annotations and no output schema, it should ideally mention more behavioral aspects like whether this is a read-only operation, typical response times, or error handling. It's mostly complete but could benefit from more operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description adds minimal value beyond the schema by mentioning the same filter categories and free-text search capability. It doesn't provide additional syntax, format details, or usage examples beyond what's already in the parameter descriptions. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search cancer support resources in Washington State'), identifies the resource type ('resources'), and distinguishes it from siblings by focusing on filtered searching rather than getting single resources or listing categories/counties. It provides a comprehensive scope that differentiates it from get_resource (single item retrieval) and list_categories/list_counties (metadata listing).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool ('Filter by category, county, tier, or free-text query'), but doesn't explicitly state when NOT to use it or name specific alternatives. It implies usage for filtered searching rather than simple listing or single-item retrieval, but lacks explicit comparison to sibling tools like get_resource for single items or list_categories for metadata-only queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
