medical-terminologies-mcp
Server Details
Unified access to global medical terminologies: ICD-11, SNOMED CT, LOINC, RxNorm, MeSH, ATC, CID-10
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: SidneyBissoli/medical-terminologies-mcp
- GitHub Stars: 3
- Server Listing: Medical Terminologies MCP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 28 of 28 tools scored. Lowest: 3.7/5.
Every tool targets a distinct combination of medical terminology and operation (e.g., atc_classify vs atc_lookup, icd11_search vs icd11_hierarchy), with clear descriptions that prevent ambiguity. No two tools appear to serve the same purpose.
All tool names follow a consistent pattern of `terminology_verb` or `terminology_noun` (e.g., atc_classify, icd11_chapters, loinc_search, mesh_descriptor, rxnorm_concept), using lowercase and underscores throughout. No mixing of conventions.
With 28 tools spanning six major terminologies plus mappings, the count is slightly above the typical 25-tool threshold but well-justified by the breadth of coverage. Each terminology has a focused set of operations (search, lookup, hierarchy, etc.), so no tool feels redundant.
The tool surface covers essential operations for all represented terminologies (ATC, CID-10, ICD-11, LOINC, MeSH, RxNorm) including search, lookup, hierarchy, and specialized features. Minor gaps include the absence of direct SNOMED CT tools and English ICD-10 support, but the existing tools handle core terminology needs.
Available Tools
28 tools

atc_classify (Read-only, Idempotent)
Look up the WHO ATC (Anatomical Therapeutic Chemical) classification(s) for a drug by name.
Use this tool to:
Find the ATC code for a medication (e.g., "metformin" → A10BA02)
Identify the therapeutic and pharmacological class hierarchy
Cross-reference drugs with their international ATC codes
Returns one entry per ATC code the drug belongs to. A single-ingredient drug typically maps to one substance-level code; combination products map to multiple. ATC codes are international (WHO Collaborating Centre); this tool retrieves them via NLM RxClass.
| Name | Required | Description | Default |
|---|---|---|---|
| drug_name | Yes | Drug name to classify (brand or generic, e.g., "metformin") | |
Output Schema
| Name | Required | Description |
|---|---|---|
| matches | Yes | |
| drug_name | Yes | |
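As a sketch of how an agent would invoke this tool over a standard MCP JSON-RPC transport (the client wiring and the exact response payload are not part of this listing, so treat the shape of `arguments` as read from the parameter table above), a `tools/call` request for atc_classify might look like:

```python
import json

# Hypothetical MCP tools/call request for atc_classify.
# "metformin" is the example drug name from the tool description.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "atc_classify",
        "arguments": {"drug_name": "metformin"},
    },
}

# Serialized form as it would travel over the Streamable HTTP transport.
payload = json.dumps(request)
```

A single-ingredient drug like metformin should come back with one substance-level entry (A10BA02 per the description); combination products would return one entry per ATC code.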
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description adds that results come from NLM RxClass, that single-ingredient drugs map to one code while combination products map to multiple, and that codes are international. This adds useful behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise yet complete, using a clear introductory sentence followed by bullet points for usage guidance and important notes. Every sentence adds information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (not shown but known to exist), the description adequately covers the tool's purpose, usage, behavior, and parameter. It explains the return format and distinguishes between single and combination products, making it complete for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage for the single parameter drug_name, but the description adds value by explaining that brand or generic names are acceptable and providing an example. This goes beyond the schema's minimal description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool looks up WHO ATC classifications for a drug by name. It provides a concrete example (metformin → A10BA02) and explains the return format (one entry per ATC code). This distinguishes it from sibling tools like atc_lookup or atc_members.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use this tool to:' and lists three specific use cases, providing clear context. However, it does not mention when not to use this tool or explicitly compare with sibling tools, so a 4 is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
atc_lookup (Read-only, Idempotent)
Look up an ATC code at level 1-4 to get its name and hierarchy level.
Use this tool to:
Resolve an ATC code (e.g., "A10BA") to its class name ("Biguanides")
Confirm a code exists in the current ATC index
Identify the level (anatomical / therapeutic / pharmacological / chemical)
Accepts codes 1-5 characters long: "A" (anatomical), "A10" (therapeutic), "A10B" (pharmacological), "A10BA" (chemical). Substance-level codes (7 chars, e.g., "A10BA02") are not exposed by this endpoint — use atc_classify with the drug name to retrieve the substance code.
| Name | Required | Description | Default |
|---|---|---|---|
| atc_code | Yes | ATC code at level 1-4 (1-5 chars). Substance-level codes (7 chars, e.g., A10BA02) are not exposed by this endpoint — use atc_classify with the drug name instead. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| found | Yes | |
| details | Yes | |
| atc_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint all safe. Description adds value by specifying accepted code lengths (1-5 chars) and the four hierarchy levels, which helps the agent understand constraints beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with a clear structure: a one-sentence action statement followed by a bullet list of use cases and a specification of allowed formats. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (1 parameter, output schema exists), the description covers all necessary context: what the tool does, accepted inputs, expected behavior, and alternative tools for unsupported inputs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds meaningful context by explaining code levels, providing examples, and explicitly stating that substance-level codes are not supported, which goes beyond the schema's pattern description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it looks up an ATC code at levels 1-4 to get its name and hierarchy level, with specific examples (e.g., 'A10BA' resolves to 'Biguanides'). It distinguishes from sibling tools like atc_classify, which handles substance-level codes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists when to use: resolving codes, confirming existence, identifying level. Also clearly states what not to do: substance-level codes (7 chars) should use atc_classify with drug name, providing an alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
atc_members (Read-only, Idempotent)
List the drugs (substances) that belong to an ATC class.
Use this tool to:
Enumerate all members of a therapeutic class (e.g., "A10BA" → metformin, phenformin)
Build a list of drugs sharing a pharmacological mechanism
Explore an ATC subtree at any level
Each member includes its substance-level (7-char) ATC code via source_atc_code, useful for disambiguation when the queried class is at level 1-4. RxNorm's catalog is US-centric; the ATC class names and codes themselves are international.
| Name | Required | Description | Default |
|---|---|---|---|
| atc_code | Yes | ATC code at any level. Higher levels (1-4) return all member substances; level 5 returns the single substance. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| members | Yes | |
| atc_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description adds behavioral context beyond annotations: mentions source_atc_code for disambiguation at higher levels, RxNorm US-centricity, and the fact that level 5 returns a single substance. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short paragraphs with clear structure: core purpose, use cases, and additional context. No redundant sentences. Essential information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, read-only, idempotent) and presence of output schema, the description fully covers what an agent needs: purpose, use cases, level behavior, and data source limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Single parameter atc_code is fully covered by schema with pattern and description. Description adds information about level behavior (higher levels return multiple substances, level 5 returns one) that enriches understanding beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'list' and resource 'drugs/substances that belong to an ATC class'. It distinguishes from siblings like atc_lookup and atc_classify by focusing on membership enumeration. Provides specific examples and behavior at different levels.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit use cases are listed (enumerate members, build lists, explore subtree) and behavior by level is explained. However, no direct guidance on when not to use this tool compared to siblings, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cid10_chapter (Read-only, Idempotent)
Get one CID-10 chapter and its constituent groups (e.g., "Chapter IX → I00-I02 Febre reumática aguda, I05-I09 Doenças reumáticas crônicas do coração, ...").
Use this tool to:
Drill from a chapter into its groups
Build hierarchical browsers
Find which group contains a code range
Provide a chapter number (1-22).
| Name | Required | Description | Default |
|---|---|---|---|
| num | Yes | Chapter number (1-22). CID-10 V2008 has 22 chapters. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| num | Yes | |
| found | Yes | |
| groups | Yes | |
| chapter | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and idempotentHint, so the description adds value by specifying the output structure (groups with code ranges). It does not contradict annotations and provides behavioral context beyond what annotations alone offer.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences: first describes output, second lists use cases, third specifies input format. Front-loaded with the most critical information, no redundant words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists (handling return values), the description covers input parameter and use cases adequately. It could be slightly more detailed about the hierarchical nature, but overall sufficient for a simple lookup tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The sole parameter 'num' is fully described in the input schema (range 1-22, CID-10 V2008). The description merely restates 'Provide a chapter number (1-22)' without adding new semantics. With 100% schema coverage, baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states it retrieves a CID-10 chapter and its constituent groups, with an illustrative example of nested groups. This clearly distinguishes it from sibling tools like cid10_chapters (which lists all chapters) and cid10_lookup (which likely works with specific codes).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists three specific use cases: drilling from chapter to groups, building hierarchical browsers, and finding which group contains a code range. It also specifies that the input is a chapter number (1-22). However, it does not explicitly exclude alternative tools or provide when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cid10_chapters (Read-only, Idempotent)
List the 22 chapters of CID-10 with their code ranges and Portuguese titles.
Use this tool to:
See the top-level structure of CID-10 (chapters I-XXII, e.g., "I. Algumas doenças infecciosas e parasitárias", "IX. Doenças do aparelho circulatório")
Map a code to its chapter by code range (e.g., I00-I99 → chapter IX)
Build a navigable table of contents for downstream tooling
Returns 22 entries — CID-10 V2008 has not been updated since 2008.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| chapters | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint false. The description adds that it returns exactly 22 entries and that the data is static (CID-10 V2008 not updated since 2008), which is useful context beyond annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—three sentences plus bullet points. Every sentence adds value: what it lists, use cases, and a note on data freshness. No fluff or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless list tool with an output schema (indicated by context), the description covers the purpose, use cases, output content, and data version. The absence of output schema details is acceptable since the schema itself would provide that. The description is complete for its role.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, and the input schema is empty. The description does not need to explain parameters. The baseline for 0 parameters is 4, and the description adds value by specifying the output contents (code ranges, Portuguese titles).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists all 22 CID-10 chapters with code ranges and titles, and provides three specific use cases (view structure, map codes, build TOC). It distinguishes itself from siblings like cid10_chapter (singular) and cid10_lookup by focusing on the full list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists when to use the tool with bullet points (see structure, map code, build TOC). Does not explicitly state when not to use it, but the context implies that for detailed chapter data, one would use cid10_chapter. A clear exclusion would improve the score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cid10_lookup (Read-only, Idempotent)
Look up a specific CID-10 code and return its Portuguese name.
Use this tool to:
Resolve a code to its Brazilian description ("I21" → "Infarto agudo do miocárdio")
Confirm a 3-char category or 4-char subcategory exists in CID-10
Retrieve gender / cause-of-death restriction flags when applicable
Accepts both dotted ("A00.1") and undotted ("A001") forms; returns the canonical display.
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | CID-10 code (e.g., "A00", "A00.1", "A001", "I21"). Dotted and undotted forms both accepted. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| hit | Yes | |
| code | Yes | |
| found | Yes | |
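The dotted/undotted equivalence the description promises ("A00.1" and "A001" both accepted, canonical display returned) can be sketched as a normalization helper. This mirrors the stated behavior but is a hypothetical client-side function, not the server's implementation:

```python
def normalize_cid10(code: str) -> str:
    """Normalize a CID-10 code to a canonical dotted display form.

    Accepts dotted ("A00.1") and undotted ("A001") input, as cid10_lookup does:
    3-char categories are returned as-is; 4-char subcategories get a dot after
    the category.
    """
    c = code.strip().upper().replace(".", "")
    return c if len(c) <= 3 else f"{c[:3]}.{c[3:]}"
```

Normalizing before calling the tool also makes client-side caching keys consistent across both input forms.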
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses additional behaviors beyond annotations: accepts dotted/undotted forms ('A00.1'/'A001'), returns canonical display. Annotations (readOnlyHint, destructiveHint=false, etc.) are consistent and complemented.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Very concise with three bullet points and an input format line. No redundancy; every sentence provides unique, actionable information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single parameter, present output schema, and low complexity, the description fully covers what the tool does, when to use it, and what it returns (Portuguese name, restriction flags). No gaps identified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already describes the 'code' parameter well (pattern, examples). Description adds value with concrete examples of valid codes and clarifies dotted/undotted acceptance, enhancing schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Look up a specific CID-10 code and return its Portuguese name' with concrete examples like 'I21 → Infarto agudo do miocárdio'. It distinguishes from siblings (e.g., cid10_search, cid10_chapter) by focusing on code resolution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists three use cases ('Resolve a code', 'Confirm a...code exists', 'Retrieve...restriction flags'). Does not explicitly state when not to use or name alternatives, but sibling context provides implicit guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cid10_search (Read-only, Idempotent)
Search the Brazilian CID-10 (Classificação Estatística Internacional de Doenças, 10ª Revisão) by Portuguese text.
Use this tool to:
Find CID-10 codes for Brazilian SUS / ANVISA contexts ("infarto", "diabetes", "tuberculose")
Look up the official Portuguese (CBCD/USP) translation of a clinical term
Locate codes for billing, epidemiology, and clinical documentation in Brazil
Returns matches from CID-10 categories (3-char) and/or subcategories (4-char). Search is diacritic-insensitive: typing "infeccoes" matches "infecções". This tool searches the Brazilian Portuguese CID-10 V2008 — for the international ICD-11 (current WHO revision, in English by default), use icd11_search.
| Name | Required | Description | Default |
|---|---|---|---|
| level | No | Restrict search to 3-char categories, 4-char subcategories, or both. Default: all | all |
| query | Yes | Search term in Portuguese (e.g., "diabetes", "infarto", "tuberculose") | |
| max_results | No | Maximum number of results (1-100). Default: 25 | 25 |
Output Schema
| Name | Required | Description |
|---|---|---|
| hits | Yes | |
| level | Yes | |
| query | Yes | |
| shown_count | Yes | |
| total_count | Yes | |
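The diacritic-insensitive matching described above ("infeccoes" matches "infecções") is a standard Unicode-folding technique; a minimal sketch of how such a comparison could work (an assumption about the approach, not the server's actual code):

```python
import unicodedata


def strip_diacritics(text: str) -> str:
    """Fold accented characters to their base letters: NFD-decompose, then drop combining marks."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))


def matches(query: str, title: str) -> bool:
    """Diacritic- and case-insensitive substring match, as cid10_search behaves."""
    return strip_diacritics(query).lower() in strip_diacritics(title).lower()
```

This lets agents pass plain-ASCII Portuguese queries without worrying about accents (ç, ã, é, and so on).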
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, idempotentHint, destructiveHint=false. The description adds valuable details: diacritic-insensitive search, returns matches from categories/subcategories, and specifies version V2008.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise and well-organized with bullet points. Each sentence adds value; no redundant or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists and all parameters are documented, the description fully covers usage context, version, language, differentiation from siblings, and behavioral traits.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for each parameter. The description adds context beyond schema: diacritic-insensitive search and explicit version mention, which helps agent understand search behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches Brazilian CID-10 by Portuguese text and lists specific use cases (SUS/ANVISA, translation, billing, epidemiology). It distinguishes itself from the sibling tool icd11_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (Brazilian CID-10 for SUS/ANVISA, diacritic-insensitive, V2008) and when not to use (international ICD-11, direct to icd11_search).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_equivalent (Read-only, Idempotent)
Search for equivalent terms across multiple medical terminologies.
Use this tool to:
Find the same concept in different coding systems
Compare how terminologies represent a concept
Support terminology mapping and data integration
Searches across: ICD-11, SNOMED CT, LOINC, RxNorm, and MeSH. Set target_terminologies to limit which are searched, or set source_terminology to exclude one (e.g. when you already have a code from that terminology and want equivalents elsewhere). The two combine: source is subtracted from targets.
| Name | Required | Description | Default |
|---|---|---|---|
| term | Yes | Medical term to search (e.g., "diabetes", "aspirin") | |
| source_terminology | No | If set, this terminology is excluded from the search. Use this when the term came from this terminology and you want equivalents in the others. Combines with target_terminologies by subtraction (source is removed from the target list). | |
| target_terminologies | No | Limit the search to these terminologies. If omitted, all five are searched. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| term | Yes | |
| results | Yes | |
| source_terminology | Yes | |
| searched_terminologies | Yes | |
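The subtraction rule the description spells out (start from target_terminologies, defaulting to all five, then remove source_terminology) can be sketched as set arithmetic. The lowercase terminology identifiers below are assumed for illustration; the tool's actual enum values may differ:

```python
# Assumed identifiers for the five searchable terminologies.
ALL_TERMINOLOGIES = {"icd11", "snomed", "loinc", "rxnorm", "mesh"}


def terminologies_to_search(source=None, targets=None):
    """Combine the two parameters as find_equivalent describes:
    start from targets (default: all five), then subtract the source."""
    selected = set(targets) if targets else set(ALL_TERMINOLOGIES)
    if source:
        selected.discard(source)
    return selected
```

For example, an agent holding an RxNorm code would pass `source="rxnorm"` and search the remaining four systems for equivalents.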
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description adds clarity on how source_terminology and target_terminologies combine (subtraction). No contradictions. It does not mention rate limits or error handling, but given the annotations and output schema, it is sufficiently transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with clear bullet points front-loading the main use. Every sentence serves a purpose, and there is no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of output schema and detailed annotations, the description covers all necessary aspects: purpose, terminologies involved, parameter behavior, and usage context. It is complete for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed parameter descriptions, so baseline is 3. The description adds value by explaining how source_terminology and target_terminologies interact (subtraction), which goes beyond the schema's individual descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for equivalent terms across multiple medical terminologies, listing specific ones (ICD-11, SNOMED CT, etc.) and use cases like mapping and integration. This distinguishes it from sibling tools that focus on single terminology lookups.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains how to use the tool with parameters (limiting target terminologies or excluding a source) and provides an example. It implies when to use this tool (for cross-terminology mapping) vs sibling tools (single terminology lookups), but does not explicitly state when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
icd11_chapters (Read-only, Idempotent)
List all ICD-11 chapters (top-level categories).
Use this tool to:
Get an overview of ICD-11 structure
Find which chapter covers a body system or condition type
Navigate to specific disease categories
ICD-11 has 28 chapters covering all areas of medicine.
| Name | Required | Description | Default |
|---|---|---|---|
| language | No | Language code (default: en) | en |
Output Schema
| Name | Required | Description |
|---|---|---|
| chapters | Yes | |
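The parameter surface above can be exercised with a plain MCP `tools/call` envelope. A minimal sketch in Python, assuming only the JSON-RPC request shape that MCP defines; transport and session handling over Streamable HTTP are omitted, and the `request_id` is arbitrary:

```python
import json

def tools_call_request(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 `tools/call` request as MCP defines it."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# List ICD-11 chapters in Portuguese; omit `language` to get the default "en".
request = tools_call_request("icd11_chapters", {"language": "pt"})
print(request)
```

The same envelope works for every tool on this page; only the `name` and `arguments` change.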
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description adds minimal behavioral context beyond the name and annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise with three short sentences plus a bullet list. It is front-loaded and avoids unnecessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with one optional parameter and an existing output schema, the description is adequate. It explains what the tool does and when to use it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single optional 'language' parameter with enum and default. Description does not add parameter-level detail, but baseline 3 is appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'List all ICD-11 chapters (top-level categories)'. It clearly differentiates from siblings like icd11_hierarchy and icd11_lookup by focusing on top-level categories only.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear use cases: overview of ICD-11 structure, finding a chapter, and navigating to disease categories. It does not explicitly mention when not to use or alternatives, but the context is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
icd11_hierarchy (A, Read-only, Idempotent)
Navigate the ICD-11 hierarchy to find parent or child entities.
Use this tool to:
Find broader categories (parents) of a condition
Find specific subtypes (children) of a condition
Understand the classification structure
Direction 'parents' returns ancestor categories, 'children' returns subcategories.
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | ICD-11 code to get hierarchy for | |
| direction | Yes | Direction: "parents" for ancestors, "children" for subtypes | |
Output Schema
| Name | Required | Description |
|---|---|---|
| code | Yes | |
| entities | Yes | |
| direction | Yes | |
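Chaining calls lets a client climb from a leaf code to its chapter. A hedged sketch, assuming each call returns the immediate parent level; `call_tool` stands in for any MCP client, and the stubbed tree (codes `BA00-BA0Z` and `11`) is illustrative, not the real ICD-11 hierarchy:

```python
from typing import Callable, Dict, List

def ancestor_chain(code: str, call_tool: Callable[[str, dict], dict]) -> List[str]:
    """Climb the hierarchy via repeated icd11_hierarchy calls with
    direction='parents', assuming each call yields the immediate parent level."""
    chain: List[str] = []
    current = code
    while True:
        result = call_tool("icd11_hierarchy",
                           {"code": current, "direction": "parents"})
        parents = result.get("entities", [])
        if not parents:
            return chain
        current = parents[0]["code"]  # follow the primary parent
        chain.append(current)

# Illustrative stub tree (NOT the real ICD-11 structure).
FAKE_PARENTS: Dict[str, List[dict]] = {
    "BA00": [{"code": "BA00-BA0Z"}],
    "BA00-BA0Z": [{"code": "11"}],
    "11": [],
}

def fake_call(name: str, args: dict) -> dict:
    return {"code": args["code"], "direction": "parents",
            "entities": FAKE_PARENTS[args["code"]]}

print(ancestor_chain("BA00", fake_call))  # ['BA00-BA0Z', '11']
```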
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and openWorldHint, covering safety and behavior. The description adds context about returning ancestors or subcategories, but does not provide additional behavioral detail beyond what annotations convey.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, using a few clear sentences and bullet points. It is front-loaded with the main purpose and avoids unnecessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple two-parameter schema and existence of an output schema, the description adequately covers the tool's functionality. It explains the tool's purpose, direction options, and use cases comprehensively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. The description adds further clarity by explaining the 'direction' values as 'ancestors' vs 'subcategories', enhancing understanding beyond the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool navigates the ICD-11 hierarchy to find parent or child entities, with specific use cases. It distinguishes from sibling tools like icd11_lookup and icd11_search by focusing on hierarchical relationships.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear use cases for when to use this tool, but does not explicitly compare to alternatives or state when not to use it. The context of sibling tool names implies differentiation, but explicit guidance is lacking.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
icd11_lookup (A, Read-only, Idempotent)
Get detailed information about a specific ICD-11 entity by code or URI.
Use this tool to:
Get the full definition of a disease
Retrieve coding notes and exclusions
Get the official title and synonyms
Provide either an ICD-11 code (e.g., "BA00") or a full foundation URI.
| Name | Required | Description | Default |
|---|---|---|---|
| uri | No | Full ICD-11 foundation URI | |
| code | No | ICD-11 code (e.g., "BA00", "1A00") | |
| language | No | Language code (default: en) | en |
Output Schema
| Name | Required | Description |
|---|---|---|
| uri | Yes | |
| code | Yes | |
| title | Yes | |
| block_id | Yes | |
| class_kind | Yes | |
| code_range | Yes | |
| definition | Yes | |
| exclusions | Yes | |
| inclusions | Yes | |
| browser_url | Yes | |
| coding_note | Yes | |
| index_terms | Yes | |
| long_definition | Yes | |
| diagnostic_criteria | Yes | |
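The description says to provide either a code or a URI. A small client-side guard can enforce that before the call; this sketch assumes supplying both (or neither) should be rejected locally — the server's own handling of that case is not documented here:

```python
from typing import Optional

def lookup_arguments(code: Optional[str] = None, uri: Optional[str] = None,
                     language: str = "en") -> dict:
    """Build icd11_lookup arguments, requiring exactly one of code/uri.
    (Client-side guard; an assumption, not documented server behavior.)"""
    if (code is None) == (uri is None):
        raise ValueError("Provide exactly one of `code` or `uri`")
    args = {"language": language}
    if code is not None:
        args["code"] = code
    else:
        args["uri"] = uri
    return args

print(lookup_arguments(code="BA00"))  # {'language': 'en', 'code': 'BA00'}
```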
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, and non-destructive behavior. The description adds value by detailing the type of information returned (definition, notes, exclusions, title, synonyms), which goes beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is succinct with a clear structure: a one-sentence purpose, followed by bullet points of uses, and a final instruction. Every sentence is meaningful and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, the description adequately covers what the tool does and what parameters to use. It provides enough context for an agent to decide when to invoke this lookup tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description clarifies that either 'code' or 'uri' should be provided, not both, which adds semantic guidance beyond the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get detailed information about a specific ICD-11 entity by code or URI.' It lists the specific information retrieved (definition, coding notes, exclusions, title, synonyms), distinguishing it from sibling tools like search or hierarchy.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides concrete use cases (e.g., 'Get the full definition of a disease') and instructs to provide either a code or URI. However, it does not explicitly mention when not to use it or compare it to alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
icd11_postcoordination (A, Read-only, Idempotent)
Get postcoordination information for an ICD-11 code.
Use this tool to:
Find available axes for building composite codes
Check required vs optional postcoordination
Understand code extension possibilities
Postcoordination allows adding severity, laterality, anatomy, etc.
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | ICD-11 code to get postcoordination info for | |
Output Schema
| Name | Required | Description |
|---|---|---|
| axes | Yes | |
| code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, and idempotentHint true and destructiveHint false. The description adds little behavioral context beyond explaining what postcoordination is. There is no contradiction, but the bar is lower given the rich annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is short (3 sentences plus bullet points), front-loaded with the main purpose, and every sentence adds value. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (not shown but exists), the description adequately covers purpose, use cases, and parameter. It is sufficient for a simple tool with one parameter and clear annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single 'code' parameter with a clear description. The tool description does not add new semantics beyond the schema, so score is baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states 'Get postcoordination information for an ICD-11 code' and lists specific use cases (axes, required vs optional, extensions). This clearly distinguishes it from sibling tools like icd11_lookup or icd11_hierarchy by focusing on postcoordination.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description lists use cases but does not explicitly state when not to use this tool or name alternatives. The context is clear, but there is no exclusion guidance, which is a gap for a tool with many siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
icd11_search (A, Read-only, Idempotent)
Search for medical conditions, diseases, and health problems in ICD-11 (International Classification of Diseases, 11th Revision).
Use this tool to:
Find ICD-11 codes for diagnoses
Search for diseases by name or keyword
Look up conditions in multiple languages
Returns matching entities with codes, titles, and relevance scores.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search text (disease name, symptom, or keyword) | |
| language | No | Language code (default: en) | en |
| max_results | No | Maximum number of results (1-100). Default: 25 | 25 |
Output Schema
| Name | Required | Description |
|---|---|---|
| query | Yes | |
| entities | Yes | |
| total_count | Yes | |
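The `max_results` bound (1-100, default 25) is easy to enforce client-side. A minimal argument builder that clamps out-of-range values instead of erroring (a design choice in this sketch, not server behavior):

```python
def search_arguments(query: str, language: str = "en",
                     max_results: int = 25) -> dict:
    """Build icd11_search arguments; clamp max_results into the documented
    1-100 range rather than letting the server reject the call."""
    if not query.strip():
        raise ValueError("query must be a non-empty search term")
    return {
        "query": query,
        "language": language,
        "max_results": max(1, min(100, max_results)),
    }

print(search_arguments("type 2 diabetes", max_results=500))
```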
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint=false, so the safety profile is clear. The description adds that results include codes, titles, and relevance scores, which is helpful but not extensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with effective use of bullet points to highlight use cases. Every sentence adds value, and it is front-loaded with the key action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists, the description appropriately mentions the return type (codes, titles, relevance scores). No additional information is needed for correct invocation, and the complexity is moderate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds no extra information beyond what is already in the parameter descriptions, such as 'Search text (disease name, symptom, or keyword)'. It does not compensate for any gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for medical conditions, diseases, and health problems in ICD-11. It lists specific use cases like finding codes and searching by keyword, and distinguishes from sibling tools that focus on hierarchies or lookups.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage scenarios with bullet points (find codes, search diseases, lookup conditions). It does not explicitly mention when not to use it or alternative tools, but the context is sufficient for correct usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
loinc_answers (A, Read-only, Idempotent)
Get the list of valid answers for a LOINC questionnaire item.
Use this tool to:
Find valid response options for survey questions
Get answer codes for data entry validation
Look up standardized answer lists
Only applicable to LOINC codes that represent questions with defined answer sets.
| Name | Required | Description | Default |
|---|---|---|---|
| loinc_num | Yes | LOINC number (e.g., "2339-0") | |
Output Schema
| Name | Required | Description |
|---|---|---|
| answers | Yes | |
| loinc_num | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare read-only, idempotent, non-destructive behavior. The description adds that it returns a 'list of valid answers' but does not elaborate on pagination, error handling, or data format, providing only minimal additional behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with four bullet points that are front-loaded with the main purpose. While it could be slightly more compressed, it efficiently conveys essential information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter tool with an output schema, the description is sufficiently complete. It covers the core function and usage context, though it could mention edge cases like an empty answer list for codes without defined sets.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers the single parameter with pattern and description (100% coverage). The description mentions LOINC codes representing questions but does not add new semantics about the parameter format or additional constraints beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a list of valid answers for LOINC questionnaire items with specific verb and resource. It also differentiates from siblings like loinc_details and loinc_search by focusing on answer sets for questions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit use cases such as finding valid response options and getting answer codes for validation. It also notes applicability only to LOINC codes with defined answer sets, offering clear context, though it lacks explicit when-not-to-use or alternative suggestions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
loinc_details (A, Read-only, Idempotent)
Get detailed information about a specific LOINC code.
Use this tool to:
Get the full name and description of a LOINC code
Find the component, property, timing, and system
Check the scale type and method
Provide a LOINC number in format "XXXXX-X" (e.g., "2339-0" for Glucose).
| Name | Required | Description | Default |
|---|---|---|---|
| loinc_num | Yes | LOINC number (e.g., "2339-0") | |
Output Schema
| Name | Required | Description |
|---|---|---|
| class | Yes | |
| status | Yes | |
| system | Yes | |
| property | Yes | |
| component | Yes | |
| loinc_num | Yes | |
| scale_type | Yes | |
| short_name | Yes | |
| method_type | Yes | |
| time_aspect | Yes | |
| long_common_name | Yes | |
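The required "XXXXX-X" shape can be checked before spending a call. LOINC numbers end in a mod-10 (Luhn) check digit, so a sketch like the following catches both malformed and mistyped codes; the digit-length bounds in the regex are an assumption:

```python
import re

def is_plausible_loinc(loinc_num: str) -> bool:
    """Check the "digits-digit" shape and the mod-10 (Luhn) check digit of a
    LOINC number before spending a tool call. Length bounds are an assumption."""
    if not re.fullmatch(r"\d{1,7}-\d", loinc_num):
        return False
    payload, check = loinc_num.split("-")
    total = 0
    for i, ch in enumerate(reversed(payload)):
        digit = int(ch)
        if i % 2 == 0:           # double every second digit, rightmost first
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return (10 - total % 10) % 10 == int(check)

print(is_plausible_loinc("2339-0"))  # True (Glucose, from the example above)
```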
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint true and destructiveHint false, indicating safe reads. The description adds behavioral context by requiring a specific LOINC number format (XXXXX-X) and explaining what aspects are returned, which helps the agent avoid misuse.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences with bullet points. The purpose is stated in the first sentence, followed by specific use cases and a format example. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has a single parameter, complete input schema, output schema, and clear annotations, the description fully covers what the agent needs to know to invoke it correctly. It explains the required format and what data will be returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear pattern and description for the loinc_num parameter. The description provides an example (e.g., '2339-0') but does not add significant semantics beyond what the schema already conveys.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves detailed information about a specific LOINC code, lists specific attributes (name, component, property, timing, system, scale, method), and distinguishes from sibling tools like loinc_search and loinc_answers which have different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists three use cases (get full name/description, component/property/timing/system, scale/method) and provides a format example. However, it does not mention when not to use this tool or suggest alternatives when another tool would be more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
loinc_panels (A, Read-only, Idempotent)
Get the structure of a LOINC panel or form.
Use this tool to:
See all tests included in a panel (e.g., CBC, metabolic panel)
Get the structure of assessment forms
Find related observations grouped together
Returns the list of LOINC codes that make up the panel.
| Name | Required | Description | Default |
|---|---|---|---|
| loinc_num | Yes | LOINC number (e.g., "2339-0") | |
Output Schema
| Name | Required | Description |
|---|---|---|
| panel | Yes | |
| loinc_num | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, idempotentHint true, and destructiveHint false. The description adds that it returns a list of LOINC codes making up the panel, which is useful behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (5 sentences), front-loaded with purpose, followed by usage bullets, and ends with return value description. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema (1 parameter with full coverage), rich annotations, and presence of an output schema, the description adequately covers the tool's purpose, usage, and return value.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with a clear pattern and example for the single parameter. The tool description does not add additional semantic meaning for the parameter beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it gets the structure of a LOINC panel or form, with examples like CBC and metabolic panel, and distinguishes from sibling tools like loinc_search and loinc_details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists use cases (see tests in a panel, get structure of forms, find related observations), providing clear context for when to use it, though it lacks explicit when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
loinc_search (A, Read-only, Idempotent)
Search for laboratory tests, clinical observations, and measurements in LOINC (Logical Observation Identifiers Names and Codes).
Use this tool to:
Find LOINC codes for lab tests (e.g., "glucose", "hemoglobin")
Search for clinical measurements and vital signs
Look up diagnostic observations
Returns matching LOINC codes with names, components, and properties.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search term (test name, keyword, or partial LOINC code) | |
| max_results | No | Maximum number of results (1-100). Default: 25 | 25 |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | Yes | |
| query | Yes | |
| shown_count | Yes | |
| total_count | Yes | |
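Since loinc_search returns summaries and loinc_details returns the full record, a natural pattern is to chain them. A sketch where `call_tool` stands in for an MCP client; it assumes each entry in `items` carries a `loinc_num` field, which the output schema above does not spell out:

```python
from typing import Callable, Optional

def find_loinc_details(term: str,
                       call_tool: Callable[[str, dict], dict]) -> Optional[dict]:
    """Search-then-fetch: loinc_search for the top hit, then loinc_details
    for the full record. Returns None when the search comes up empty."""
    hits = call_tool("loinc_search", {"query": term, "max_results": 1})
    items = hits.get("items", [])
    if not items:
        return None
    return call_tool("loinc_details", {"loinc_num": items[0]["loinc_num"]})
```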
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint, covering safety and idempotency. The description adds the return structure (matching LOINC codes with names, components, properties), but does not disclose any additional behavioral traits like result ranking, pagination, or data freshness. With annotations handling the main behavioral traits, the description provides adequate but not rich context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences and a short bullet list. Every word earns its place. The main purpose is front-loaded in the first sentence, and the bullet list quickly conveys usage scenarios. No fluff or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of a full output schema (details unknown) and comprehensive annotations, the description is largely complete for a search tool. It states what is returned. However, it does not clarify limitations such as only returning summary information (as hinted by sibling loinc_details) or that results may vary due to openWorldHint. These are minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, so the schema already documents meanings for both parameters. The description does not add any new insights or examples beyond the schema. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches for LOINC codes related to lab tests, clinical observations, and measurements. The verb 'search' is specific and the resource (LOINC codes) is well-defined. However, it does not explicitly differentiate from sibling tools like loinc_details or loinc_answers, which could also return LOINC data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists three use cases (finding LOINC codes for lab tests, clinical measurements, diagnostic observations) in a clear bullet-point format. Provides good context on when to use the tool. However, it lacks explicit guidance on when not to use it or alternative tools for specific needs (e.g., retrieving full details).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
map_icd10_to_icd11 (A, Read-only, Idempotent)
This tool runs the ICD-10 code as a query string against the ICD-11 search index. The search matches the code against ICD-11 entity titles, definitions, and synonyms; it does not consult any curated ICD-10 → ICD-11 mapping. Results are search hits, not authoritative mappings.
For authoritative ICD-10 → ICD-11 mappings (clinical coding, billing, migration projects), consult the WHO transition tables at https://icd.who.int/browse11/Downloads/Download.
Use this tool for exploratory lookups: confirming a code exists in ICD-11 text, finding ICD-11 entities whose descriptions reference an ICD-10 code, or seeding a manual mapping review. Do not present the results as ICD-10 → ICD-11 equivalents to clinical or billing consumers.
Provide a code like "E11" (Type 2 diabetes) or "I21" (Acute MI).
| Name | Required | Description | Default |
|---|---|---|---|
| icd10_code | Yes | ICD-10 code to query in the ICD-11 search index (e.g., E11, I21.0, J18.9) | |
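Because the tool runs the code as a free-text query, a malformed code yields noisy search hits rather than an error. A cheap shape check (letter, two digits, optional subdivision) before calling; this validates format only, not membership in the real ICD-10 tabular list:

```python
import re

ICD10_CODE = re.compile(r"^[A-Z]\d{2}(\.[0-9A-Z]{1,4})?$")

def looks_like_icd10(code: str) -> bool:
    """Shape check only. Does not verify the code exists in the actual
    ICD-10 tabular list."""
    return bool(ICD10_CODE.match(code.strip().upper()))

print(looks_like_icd10("I21.0"))  # True
```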
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly, idempotent, and non-destructive behavior. The description adds valuable context beyond annotations by clarifying that results are search hits and not authoritative mappings, and that the tool does not consult any curated mapping. This enriches the agent's understanding of the tool's limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with three distinct paragraphs: functionality, authoritative alternative, and usage guidance. It is front-loaded with the core purpose. While slightly verbose, each sentence contributes meaningfully, and the structure aids comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (mapping between code systems), the description effectively covers what the tool does, its limitations, and appropriate use cases. With annotations covering safety and no output schema, the description provides sufficient context for an agent to understand the tool's role among siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides a detailed description and examples for the single parameter icd10_code (100% coverage). The description adds minimal additional value by restating the example codes and suggested format, and does not significantly expand beyond the schema, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool runs an ICD-10 code as a query against the ICD-11 search index, distinguishing it from curated mappings. It specifies the matching scope (titles, definitions, synonyms) and explicitly separates itself from authoritative mapping tools, which aligns well with its name and differentiates it from siblings like icd11_search.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use the tool (exploratory lookups, confirming code existence, seeding manual review) and when not to (do not present as equivalents for clinical/billing). It also directs users to the authoritative WHO transition tables for proper mappings, offering a clear alternative.
map_loinc_to_snomed (A) · Read-only · Idempotent
This tool looks up a LOINC code in NLM Clinical Tables and returns guidance on where to obtain a LOINC → SNOMED CT mapping. It does not perform the mapping.
Direct LOINC → SNOMED CT mappings are not freely available via API. The UMLS Metathesaurus contains the relationships but requires an individual UMLS Terminology Services license. The LOINC SNOMED CT Expression Association is published by the Regenstrief Institute as part of the LOINC release and requires an authenticated download from loinc.org under the LOINC license.
For programmatic LOINC → SNOMED mapping, use UMLS or the LOINC Expression Association files. For interactive lookup, use the SNOMED CT browser available to your organization or the Regenstrief RELMA desktop tool.
Provide a LOINC code like "2339-0" (Glucose) or "718-7" (Hemoglobin).
| Name | Required | Description | Default |
|---|---|---|---|
| loinc_code | Yes | LOINC code (e.g., 2339-0 for Glucose) |
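Since the tool only returns guidance, a cheap client-side format check can filter obviously malformed codes before spending a call. A sketch assuming the common `digits-checkdigit` LOINC surface form (the helper name is hypothetical, and this does not verify the Mod-10 check digit):

```python
import re

# LOINC codes look like "2339-0": a numeric part plus a single check digit.
LOINC_RE = re.compile(r"^\d{1,7}-\d$")

def is_loinc_shaped(code: str) -> bool:
    """Surface-format check only; does not validate the check digit."""
    return bool(LOINC_RE.match(code))

print(is_loinc_shaped("2339-0"))  # True
print(is_loinc_shaped("2339"))    # False: missing check digit
```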
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (readOnlyHint, idempotentHint), the description adds critical behavioral context: it does not perform the mapping, returns guidance only, and explains licensing limitations. No contradiction with annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose. Every sentence adds value, including limitations and alternatives, without redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (lookup of mapping guidance) and absence of output schema, the description fully informs the agent about what to expect, what not to expect, and how to use results. No gaps.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% with pattern and example. The description adds meaningful context by explaining the purpose of the parameter and providing concrete examples (e.g., '2339-0'), which helps the agent form valid inputs.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it looks up a LOINC code and returns guidance on obtaining a mapping, explicitly stating it does not perform the mapping. This distinguishes it from sibling tools like loinc_search or map_icd10_to_icd11.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use and when-not-to-use guidance, including alternatives such as UMLS, LOINC Expression Association files, SNOMED CT browser, and RELMA. This helps the agent decide correctly.
mesh_descriptor (A) · Read-only · Idempotent
Get detailed information about a MeSH descriptor by ID.
Use this tool to:
Get the full definition (scope note) of a MeSH term
View tree numbers showing hierarchy location
See related concepts and synonyms
Provide a MeSH Descriptor ID like "D015242" (Ofloxacin).
| Name | Required | Description | Default |
|---|---|---|---|
| mesh_id | Yes | MeSH Descriptor ID (e.g., D015242, D003920) |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| uri | Yes | |
| label | Yes | |
| concepts | Yes | |
| qualifiers | Yes | |
| scope_note | Yes | |
| tree_numbers | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint false. The description adds value by outlining the type of information returned (scope note, tree numbers), which informs the agent of the tool's nondestructive, read-only nature.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with a clear opening sentence, bullet-pointed uses, and an example. Every sentence adds value without redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity, the existence of an output schema, and comprehensive annotations, the description fully covers what an agent needs to know to use the tool effectively.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides 100% coverage with pattern and description. The description reinforces this with a concrete example ('D015242 (Ofloxacin)'), which helps an agent understand the expected format beyond the pattern.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Get detailed information about a MeSH descriptor by ID' and lists specific uses (scope note, tree numbers, related concepts). It clearly distinguishes from sibling tools like mesh_search and mesh_tree.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides concrete use cases in bullet points and an example ID. While it doesn't explicitly state when not to use it, the sibling tools (e.g., mesh_search for queries, mesh_tree for hierarchy) imply the tool's specific role.
mesh_qualifiers (A) · Read-only · Idempotent
Get allowed qualifiers (subheadings) for a MeSH descriptor.
Use this tool to:
Find which qualifiers can be combined with a descriptor
Build precise MeSH search queries
Understand aspects that can be specified
Qualifiers refine descriptors (e.g., "Diabetes Mellitus/drug therapy").
| Name | Required | Description | Default |
|---|---|---|---|
| mesh_id | Yes | MeSH Descriptor ID (e.g., D015242, D003920) |
Output Schema
| Name | Required | Description |
|---|---|---|
| mesh_id | Yes | |
| qualifiers | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and the other safety hints. The description adds that qualifiers refine descriptors and does not contradict the annotations, but provides no additional behavioral details beyond them.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences with bullet points, front-loaded with purpose. Every sentence adds value, no fluff.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with 1 parameter, rich annotations, and output schema, the description is complete and sufficient.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a clear description and pattern for mesh_id. The description does not add extra parameter meaning beyond what the schema provides.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns allowed qualifiers for a MeSH descriptor, using specific verb 'Get' and resource 'allowed qualifiers'. It distinguishes from siblings like mesh_descriptor and mesh_search by focusing on qualifiers.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists explicit use cases (find qualifiers, build queries, understand aspects). It does not explicitly state when not to use or provide alternatives, but the context is clear enough.
mesh_search (A) · Read-only · Idempotent
Search for MeSH (Medical Subject Headings) descriptors.
Use this tool to:
Find MeSH terms for indexing medical literature
Look up subject headings for PubMed searches
Find controlled vocabulary terms
Returns matching descriptors with MeSH IDs and labels.
| Name | Required | Description | Default |
|---|---|---|---|
| match | No | Match type: exact, contains, or startswith | contains |
| query | Yes | Search term (e.g., "diabetes", "heart failure") | |
| max_results | No | Maximum number of results (1-100) | 25 |
Output Schema
| Name | Required | Description |
|---|---|---|
| match | Yes | |
| query | Yes | |
| descriptors | Yes | |
| total_count | Yes |
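A client can mirror the documented defaults locally before calling. A sketch assuming the defaults and the 1-100 bound stated in the parameter table (`mesh_search_args` is a hypothetical helper):

```python
# Defaults taken from the mesh_search parameter table.
DEFAULTS = {"match": "contains", "max_results": 25}

def mesh_search_args(query: str, **overrides) -> dict:
    """Build a mesh_search arguments dict with defaults filled in."""
    args = {**DEFAULTS, "query": query}
    args.update(overrides)
    # max_results is documented as 1-100; clamp rather than fail.
    args["max_results"] = max(1, min(100, args["max_results"]))
    return args

print(mesh_search_args("heart failure", match="startswith"))
```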
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false. The description adds only that it returns matching descriptors with IDs and labels, which aligns with annotations. No additional behavioral traits are disclosed beyond what annotations already provide.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (3 sentences) and front-loaded with the main purpose, with the use cases listed in bullet form. Every sentence is relevant, though the use cases could be integrated more concisely.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the output schema is present, the description does not need to explain return values. It adequately covers purpose, basic usage, and expected results. For a search tool with three simple parameters, this is sufficient.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the schema already explains 'query', 'match', and 'max_results' clearly. The description does not add any additional meaning or context beyond what is in the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Search for MeSH descriptors' and lists specific use cases like indexing medical literature and PubMed searches, making the purpose unambiguous and distinct from sibling tools such as cid10_search or icd11_search.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides when to use the tool (e.g., for MeSH terms, indexing, PubMed) but does not mention when not to use it or explicitly differentiate from sibling tools like loinc_search or rxnorm_search, which offer alternative controlled vocabulary searches.
mesh_tree (A) · Read-only · Idempotent
Get the tree hierarchy location(s) for a MeSH descriptor.
Use this tool to:
See where a term fits in the MeSH hierarchy
Understand broader/narrower relationships
Find related terms in the same branch
MeSH tree numbers show the hierarchical path (e.g., C14.280.647 for Myocardial Infarction).
| Name | Required | Description | Default |
|---|---|---|---|
| mesh_id | Yes | MeSH Descriptor ID (e.g., D015242, D003920) |
Output Schema
| Name | Required | Description |
|---|---|---|
| mesh_id | Yes | |
| tree_numbers | Yes |
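Tree numbers encode the full ancestor path, so broader terms can be derived locally without further calls. A sketch with a hypothetical helper:

```python
def tree_ancestors(tree_number: str) -> list[str]:
    """Expand a MeSH tree number into its chain of ancestor tree numbers."""
    parts = tree_number.split(".")
    return [".".join(parts[: i]) for i in range(1, len(parts) + 1)]

print(tree_ancestors("C14.280.647"))
# ['C14', 'C14.280', 'C14.280.647']
```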
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnly, idempotent, and non-destructive behavior. The description adds context about the output format (tree numbers) and examples, which enriches transparency without contradicting annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with only essential information. It starts with a clear one-liner, then uses bullet points for use cases, and ends with a clarifying example. No wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple single-parameter input and existence of an output schema, the description fully explains the tool's purpose, usage, and output format. It covers all necessary aspects for correct invocation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (parameter mesh_id has description). The description provides an example of the tree number output but does not add significant new meaning beyond the schema. Baseline of 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves tree hierarchy location(s) for a MeSH descriptor. It explains the output (tree numbers) and distinguishes from sibling tools like mesh_descriptor or mesh_search by focusing specifically on hierarchical structure.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists three use cases (seeing hierarchy, broader/narrower terms, related branches). While it does not mention when to avoid this tool or provide alternatives, the context is clear enough for an agent to decide.
rxnorm_classes (A) · Read-only · Idempotent
Get therapeutic and pharmacologic classes for a drug.
Use this tool to:
Find the drug class (e.g., "Beta-blockers", "NSAIDs")
Identify therapeutic categories
Look up mechanism of action classifications
Returns class IDs, names, and classification sources.
| Name | Required | Description | Default |
|---|---|---|---|
| rxcui | Yes | RxCUI of the drug |
Output Schema
| Name | Required | Description |
|---|---|---|
| rxcui | Yes | |
| classes | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false. The description adds no new behavioral traits beyond mentioning the return values (class IDs, names, sources). Since the annotations already convey this information, the description's contribution is minimal but consistent.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise, with a clear one-line purpose statement followed by three bullet points of usage. Every sentence adds value without redundancy or wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single parameter, existing output schema, and rich annotations, the description fully addresses the tool's functionality. It explains what the tool returns and when to use it, leaving no obvious gaps for an AI agent.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers 100% of parameter descriptions with clear regex pattern and description for 'rxcui'. The tool description does not add further meaning beyond what the schema provides, so baseline score of 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: 'Get therapeutic and pharmacologic classes for a drug.' It lists specific use cases (drug class, therapeutic categories, mechanism of action) which distinguishes it from sibling tools like rxnorm_concept or rxnorm_ingredients that serve different purposes.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description uses bullet points to indicate when to use the tool (e.g., to find drug class, identify therapeutic categories). While it does not explicitly say when NOT to use it or mention alternatives, the context of sibling tools and the focused purpose provide sufficient guidance for an AI agent.
rxnorm_concept (A) · Read-only · Idempotent
Get detailed information about a specific RxNorm concept by RxCUI.
Use this tool to:
Get the full name and synonyms for a drug
Check the concept status (active, remapped, etc.)
View related concepts (ingredients, brands, forms)
Provide an RxCUI (RxNorm Concept Unique Identifier) like "161".
| Name | Required | Description | Default |
|---|---|---|---|
| rxcui | Yes | RxNorm Concept Unique Identifier | |
| include_related | No | Include related concepts (ingredients, brands, dose forms) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| tty | Yes | |
| name | Yes | |
| rxcui | Yes | |
| status | Yes | |
| synonym | Yes | |
| umlscui | Yes | |
| language | Yes | |
| suppress | Yes | |
| remapped_to | Yes | |
| related_groups | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description adds behavioral context beyond annotations by specifying the types of information returned (name, synonyms, status, related concepts). No contradictions, so a 4 is justified.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, using a short paragraph with a bulleted list. Every sentence adds value, and the main action is front-loaded. No waste, so a 5.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, the description adequately covers what the tool returns (name, synonyms, status, related concepts). It could mention error handling for invalid RxCUI, but overall it's complete enough for a lookup tool, so a 4.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for both parameters. The description adds value by giving an example RxCUI ('161') and clarifying that related concepts are optionally included. This goes beyond the schema's descriptions, earning a 4.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and resource 'detailed information about a specific RxNorm concept by RxCUI', listing specific uses (name, synonyms, status, related concepts). It distinguishes from sibling tools like rxnorm_search (searching) and rxnorm_ingredients (listing ingredients), earning a 5.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool (when you have an RxCUI and want detailed info) and provides an example. Although it doesn't explicitly say when not to use or list alternatives, the sibling tools provide implicit context, so a 4 is appropriate.
rxnorm_ingredients (A) · Read-only · Idempotent
Get active ingredients for a drug by RxCUI.
Use this tool to:
Find the active ingredients in a medication
Check for single vs. multiple ingredient products
Identify the generic components of brand drugs
Returns ingredient RxCUIs and names.
| Name | Required | Description | Default |
|---|---|---|---|
| rxcui | Yes | RxCUI of the drug |
Output Schema
| Name | Required | Description |
|---|---|---|
| rxcui | Yes | |
| ingredients | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description adds minimal behavioral context beyond mentioning return values (ingredient RxCUIs and names). No contradictions.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: two sentences and a bullet list. It front-loads the main action and uses bullets for details without any redundant or unnecessary information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple tool with one parameter, rich annotations, and an output schema, the description is complete. It covers purpose, usage, and return format. No gaps are evident.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a description for the single parameter 'rxcui'. The tool description does not add additional semantic meaning beyond the schema, so baseline score of 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with a clear verb+resource: 'Get active ingredients for a drug by RxCUI.' It then lists specific use cases that differentiate it from sibling tools like rxnorm_concept or rxnorm_classes.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states 'Use this tool to:' followed by three bullet points indicating when to use it. It does not explicitly exclude situations, but the use cases are specific enough to guide selection.
rxnorm_ndc (A) · Read-only · Idempotent
Map between RxNorm concepts and National Drug Codes (NDC).
Use this tool to:
Get all NDC codes for a drug (by RxCUI)
Find the RxCUI for an NDC code
Cross-reference between coding systems
Provide either an RxCUI to get NDCs, or an NDC to get the RxCUI.
| Name | Required | Description | Default |
|---|---|---|---|
| ndc | No | NDC code to look up RxCUI (alternative to rxcui) | |
| rxcui | No | RxCUI to get NDC codes for | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ndc | Yes | |
| ndcs | Yes | |
| rxcui | Yes | |
| query_mode | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnly=true, idempotent=true, openWorld=true, and destructiveHint=false. The description confirms the mapping behavior but adds no new behavioral traits beyond what annotations provide. With rich annotations, the description's added value is minimal.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at three sentences, uses bullet points for clarity, and front-loads the purpose. Every sentence adds essential information with no waste.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich annotations, 100% schema coverage, and the presence of an output schema, the description is fully complete. It covers both use cases and input requirements without any gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters. The description adds value by clarifying the mutual exclusivity (provide either RxCUI or NDC) and the direction of mapping. This enhances understanding beyond the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool maps between RxNorm concepts and NDC codes. It lists specific actions: get all NDC codes for a drug by RxCUI, find RxCUI for an NDC, and cross-reference. This distinguishes it from sibling tools like rxnorm_search (which searches by name) and rxnorm_concept (which gets concept details).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly outlines when to use the tool (mapping between RxNorm and NDC) and what input to provide (either RxCUI or NDC). It does not explicitly mention when not to use it or name alternatives, but the context of sibling tools makes it clear. The guidance is nearly complete.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rxnorm_search — Read-only, Idempotent
Search for drugs in RxNorm (Normalized names for clinical drugs).
Use this tool to:
Find drug concepts by brand or generic name
Look up medications for prescribing
Search for drug formulations
Returns matching drugs with RxCUI identifiers, names, and term types.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Drug name to search (brand or generic) | |
| max_results | No | Maximum number of results (1-100) | 25 |
Output Schema
| Name | Required | Description |
|---|---|---|
| drugs | Yes | |
| query | Yes | |
| total_count | Yes | |
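For comparison with the mapping tool above, a minimal sketch of a `tools/call` request for `rxnorm_search`, validating the documented 1-100 bound on `max_results`. The tool and parameter names come from the listing; the builder function is an illustrative assumption, not server code.

```python
def make_rxnorm_search_call(query, max_results=25, request_id=1):
    """Build an MCP tools/call request for the rxnorm_search tool.

    Rejects max_results outside the documented 1-100 range;
    25 matches the schema's stated default.
    """
    if not 1 <= max_results <= 100:
        raise ValueError("max_results must be between 1 and 100")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "rxnorm_search",
            "arguments": {"query": query, "max_results": max_results},
        },
    }

# Search by generic name; the response would carry the drugs,
# query, and total_count fields from the output schema above.
request = make_rxnorm_search_call("ibuprofen", max_results=10)
```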
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, destructiveHint=false, making the non-destructive, idempotent nature clear. The description adds value by specifying return fields (RxCUI identifiers, names, term types) and search behavior, which is fully consistent with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with a single introductory line and a bulleted list of use cases. Every sentence is purposeful and adds value without redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has rich annotations and an output schema (indicated by 'Has output schema: true'). The description covers the tool's purpose, input, and output (RxCUI identifiers, names, term types) sufficiently for an agent to understand its behavior and integrate it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, so parameters (query, max_results) are already well-documented. The description does not add new semantic information about parameters beyond what the schema provides, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Search for drugs in RxNorm' and lists specific use cases (find drug concepts, look up medications, search formulations). It clearly differentiates from sibling tools like rxnorm_concept or rxnorm_classes which focus on individual concepts or classes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear use cases ('Use this tool to: ...') but does not explicitly mention when not to use it or point to alternatives. However, given the sibling tool list, the guidance is sufficient for an agent to infer appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.