Nursing Home Database
Server Details
MCP server for US nursing facility search and ownership lookup (NursingHomeDatabase).
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | jmtroller/nhd-mcp-public-documentation |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 3/5 across all 6 tools scored.
Each tool has a clearly distinct purpose with no ambiguity. The tools target specific resources (facilities, owners, data freshness) and actions (get, search), making it easy for an agent to select the right tool for each task without confusion.
All tools follow a consistent verb_noun pattern, with 'get_' and 'search_' prefixes clearly indicating the action. The naming is uniform and predictable, enhancing readability and usability across the tool set.
With 6 tools, the server is well-scoped for a nursing home database. Each tool serves a clear purpose, covering key operations like retrieving and searching facilities and owners, without being overly sparse or bloated.
The tool set provides strong coverage for querying and searching facilities and owners, including data freshness checks. A minor gap exists in update/delete operations, but this is reasonable for a database-focused server where agents can work around it for typical use cases.
Available Tools
6 tools

get_data_freshness (Grade: B)
Get latest source dataset dates and counts.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It identifies that dates and counts are returned, but omits the data source scope, update frequency, date format, or whether the operation is cached versus real-time.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is efficiently front-loaded with no filler words. Every term ('latest', 'source dataset', 'dates', 'counts') contributes meaningfully to understanding the tool's scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool, the description is minimally adequate. However, given the absence of an output schema, it should ideally describe the return structure (e.g., whether it returns a list of datasets with timestamps or a summary object).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, which per the rubric establishes a baseline score of 4. The description correctly implies no filtering is needed ('Get latest...' without qualification), matching the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and identifies the resource ('source dataset dates and counts'), clearly distinguishing this metadata tool from its facility/owner-centric siblings. However, it could specify which datasets (facility vs owner) are being monitored.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to invoke this tool versus the data retrieval siblings (get_facility, search_facilities, etc.). It does not indicate whether this should be called before querying to verify freshness, or what constitutes 'stale' data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
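For a concrete sense of how an agent would invoke this tool, the sketch below uses the official MCP TypeScript SDK over Streamable HTTP. It is illustrative only: the listing does not publish the server URL, so the endpoint is a placeholder, and since the tool exposes no output schema the result is simply printed.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

async function main() {
  // Placeholder endpoint: the listing above does not publish the server URL.
  const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));
  const client = new Client({ name: "nhd-example-client", version: "0.1.0" });
  await client.connect(transport);

  // get_data_freshness takes no arguments; its response shape is undocumented,
  // so the raw content blocks are logged as-is.
  const freshness = await client.callTool({ name: "get_data_freshness", arguments: {} });
  console.log(freshness.content);

  await client.close();
}

main().catch(console.error);
```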
get_facility (Grade: B)
Get one facility by provnum or web slug.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the lookup methods but fails to disclose error behavior (e.g., if facility not found), return structure, or whether this is a safe read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at eight words, front-loaded with the action ('Get one facility'), and contains zero redundancy. Every word earns its place by conveying the resource, cardinality, and input format.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description is insufficiently complete. It fails to describe what facility data is returned, error conditions, or the domain meaning of 'provnum', leaving significant gaps for an agent attempting to use this tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description provides crucial semantic context by indicating the 'id' parameter accepts either a 'provnum' or 'web slug'. This significantly compensates for the undescribed schema, though it doesn't explain what a 'provnum' actually represents.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific action ('Get') and resource ('facility'), and distinguishes this from sibling search_facilities by specifying 'one facility' and the lookup mechanism ('by provnum or web slug'). However, it could explicitly contrast with the search tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying the lookup keys ('provnum or web slug'), suggesting this tool is for direct identifier lookups rather than broad searches. However, it lacks explicit when-to-use guidance or explicit naming of alternatives like search_facilities.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
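Reusing the client from the sketch under get_data_freshness, a direct lookup might look like the following. The id value is hypothetical; per the description the same field also accepts a web slug, and the reading of 'provnum' as a CMS provider number is an assumption.

```typescript
// Hypothetical identifier: the tool accepts either a provnum or a web slug in "id".
const facility = await client.callTool({
  name: "get_facility",
  arguments: { id: "015009" },
});
console.log(facility.content);
```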
get_facility_ownership (Grade: C)
Get ownership records for one facility.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'Get' implies a read-only operation, the description fails to specify error behaviors (e.g., facility not found), authentication requirements, or whether this returns current or historical ownership data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no redundancy or wasted words. The core action is front-loaded. However, extreme brevity comes at the cost of omitting necessary context, preventing a perfect score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks essential context for a tool with no output schema or annotations: it does not describe what constitutes an 'ownership record' (fields, structure), error scenarios, or the relationship between this tool and its siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for the 'id' parameter, the description inadequately compensates by failing to explicitly state that 'id' refers to the facility identifier, its expected format, or how to obtain it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States a specific verb ('Get') and resource ('ownership records for one facility'), clearly distinguishing from sibling tools like get_facility (facility details) and get_owner (owner details). However, it does not explicitly clarify the relationship between these tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives like get_owner or search_facilities, nor does it mention prerequisites such as needing a valid facility ID from prior searches.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
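A sketch of the likely chaining, again reusing the earlier client: the facility identifier returned by get_facility or search_facilities is assumed to be accepted here unchanged, which the description does not confirm.

```typescript
// Assumes "id" is the same facility identifier used by get_facility; not confirmed by the description.
const ownership = await client.callTool({
  name: "get_facility_ownership",
  arguments: { id: "015009" },
});
console.log(ownership.content);
```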
get_owner (Grade: C)
Get owner portfolio by web_owner slug.
| Name | Required | Description | Default |
|---|---|---|---|
| web_owner | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It fails to disclose what 'portfolio' includes, error handling (e.g., invalid slug), data freshness, or authorization requirements. It only minimally describes the lookup mechanism.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no wasted words. Front-loaded with verb and resource, immediately communicating the core operation without preamble.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one primitive parameter), the description still falls short by failing to describe the return structure (what constitutes a 'portfolio'), error states, or relationships to sibling tools. Without an output schema, this omission is significant.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates by clarifying that 'web_owner' is a 'slug' identifier. However, it does not describe the slug format, valid values, or provide examples, leaving significant semantic gaps for the single required parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States a clear action ('Get') and resource ('owner portfolio'), and implies specific lookup by identifier ('slug') versus the sibling 'search_owners' which likely performs broader queries. However, it does not explicitly clarify the difference between 'portfolio' and basic owner details or ownership records.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no explicit guidance on when to use this tool versus 'search_owners' (e.g., 'use when you have the exact slug') or 'get_facility_ownership'. No prerequisites or error conditions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
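With the same client, a portfolio lookup might be issued as below. The slug is a placeholder; its expected format is undocumented, and the natural source for a real value would be a prior search_owners result.

```typescript
// Placeholder slug: the expected web_owner format is not documented.
const owner = await client.callTool({
  name: "get_owner",
  arguments: { web_owner: "example-ownership-group" },
});
console.log(owner.content);
```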
search_facilities (Grade: C)
Search nursing facilities by name, geography, or distance from a free-form address.
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | | |
| zip | No | | |
| city | No | | |
| sort | No | | |
| limit | No | | |
| state | No | | |
| offset | No | | |
| address | No | | |
| radius_miles | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. Mentions search modalities but fails to describe return format, pagination behavior (despite offset/limit params), rate limits, or matching logic (partial vs exact). 'Free-form address' hints at geocoding behavior but doesn't confirm it.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence of 12 words with zero redundancy. Front-loaded with action verb. However, extreme brevity becomes a liability given the schema complexity (9 parameters with no descriptions), suggesting it should be longer to compensate.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a 9-parameter search tool with no output schema and no annotations. Missing critical context: return value structure, whether address requires radius_miles, sort field options, and default limits. Description covers approximately 20% of what an agent needs to invoke this effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema coverage, description must compensate. It conceptually maps 'name' to q, 'geography' to city/state/zip, and 'distance' to address/radius_miles, but doesn't explicitly document these mappings or explain the sort parameter options. Provides baseline context but insufficient detail for 9 undocumented parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Search') + specific resource ('nursing facilities') + search modalities ('by name, geography, or distance'). Distinguishes from sibling get_facility by indicating this is a broad search operation rather than specific retrieval, though it doesn't explicitly contrast the two.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this versus get_facility (for specific ID lookups) or search_owners. Doesn't mention that all 9 parameters are optional or provide strategy for combining filters (e.g., whether geography filters stack as AND or OR conditions).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
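A distance-search sketch using the same client. It assumes, without confirmation from the description, that a free-form address is paired with radius_miles and that limit/offset paginate the results.

```typescript
// Assumes address pairs with radius_miles and that limit/offset paginate results;
// neither behavior is documented by the server.
const nearby = await client.callTool({
  name: "search_facilities",
  arguments: {
    address: "350 Fifth Avenue, New York, NY",
    radius_miles: 10,
    limit: 20,
    offset: 0,
  },
});
console.log(nearby.content);
```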
search_owners (Grade: C)
Search owners by name.
| Name | Required | Description | Default |
|---|---|---|---|
| q | Yes | | |
| limit | No | | |
| offset | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure but only implies read-only safety through the word 'Search'. It fails to specify search semantics (substring vs prefix, case sensitivity), pagination behavior, or what constitutes an 'owner' in this domain.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While the single sentence is brief and front-loaded, it is inappropriately terse given the complete absence of schema descriptions and annotations. The brevity wastes opportunity to clarify behavioral details and parameter usage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given three undocumented parameters, no annotations, no output schema, and available sibling tools, the description is inadequate. It should explain the search pattern matching rules, pagination usage, and return value structure (list vs single object).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. It implicitly maps 'by name' to the 'q' parameter but completely omits documentation for 'limit' and 'offset' pagination parameters, leaving their purpose and usage patterns unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Search') and target resource ('owners') with the specific field ('by name'). However, it does not explicitly differentiate from sibling tool 'get_owner' (which likely retrieves by ID), though the verb choice implies different use cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like 'get_owner'. The description does not indicate whether this supports partial name matching, exact matches, or when to prefer searching over direct retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
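Finally, a name-search sketch with the same client. The matching semantics (substring versus exact, case sensitivity) are undocumented, so the query string is purely illustrative.

```typescript
// Matching semantics are undocumented; the query string is illustrative only.
const owners = await client.callTool({
  name: "search_owners",
  arguments: { q: "care", limit: 20, offset: 0 },
});
console.log(owners.content);
```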
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.