
Server Details

RxNorm MCP — wraps the NLM RxNav REST API (free, no auth)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-rxnorm
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client
Glama
MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: Grade B

Average 3.8/5 across 10 of 10 tools scored. Lowest: 2.9/5.

Server Coherence: Grade B
Disambiguation: 3/5

The tools are mostly distinct, but ask_pipeworx and discover_tools overlap in purpose (both help find tools or answer questions). The rxnorm tools are clearly distinct from the memory tools, but the Pipeworx utilities blur boundaries.

Naming Consistency: 3/5

The rxnorm tools follow a consistent rxnorm_verb pattern, but the Pipeworx tools (ask_pipeworx, discover_tools) use a different convention (imperative phrases). Memory tools (forget, recall, remember) are simple verbs, creating three distinct naming styles.

Tool Count: 4/5

10 tools is appropriate for a server that combines a domain-specific RxNorm API with general-purpose Pipeworx and memory utilities. The count is well-scoped without feeling excessive or thin.

Completeness: 3/5

The RxNorm tools cover search, properties, NDC, interactions, and related concepts, but lack functionality for specific drug details like strength or dose form. The memory tools are minimal (CRUD but no search). The overall set feels like a patchwork rather than a coherent domain.

Available Tools

10 tools
ask_pipeworx (Grade A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
- question (required): Your question or request in natural language
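With a single required parameter, invoking this tool from an MCP client is straightforward. As an illustrative sketch (the request id and example question are made up, not from the server), the underlying JSON-RPC 2.0 tools/call request would look roughly like this:

```python
import json

# Illustrative MCP "tools/call" request for ask_pipeworx.
# Only the "question" argument is required by the schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {
            "question": "Look up adverse events for ozempic",
        },
    },
}

payload = json.dumps(request)
print(payload)
```

The server then selects a backing tool, fills its arguments from the question, and returns the result as tool output.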
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must cover behavior. It explains it picks the right tool and fills arguments, which is good transparency. However, it doesn't mention limits like data source coverage or latency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Very concise at 4 sentences, front-loaded with purpose. Every sentence adds value, no fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 param, no output schema), the description is nearly complete. It could mention that results are returned as text, though that is not essential. The examples provide strong context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the single parameter 'question' is described as natural language. The description adds value by explaining how the parameter is used (to select tools and fill arguments) with examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it answers plain English questions using the best available data source, with specific examples. It distinguishes itself from sibling tools by acting as a universal question-answering interface.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says no need to browse tools or learn schemas, and provides example questions. It implies this is the go-to for natural language queries, while siblings like rxnorm_search are more specific.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
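The limit parameter's "default 20, max 50" behavior can be modeled concretely. This is a sketch of plausible clamping semantics, an assumption rather than the server's documented implementation:

```python
# Assumed semantics of discover_tools' "limit" parameter
# (schema: "default 20, max 50"); the clamping is illustrative.
DEFAULT_LIMIT = 20
MAX_LIMIT = 50

def effective_limit(limit=None):
    """Number of tools a discover_tools call would return at most."""
    if limit is None:
        return DEFAULT_LIMIT          # schema default applies
    return max(1, min(limit, MAX_LIMIT))  # out-of-range values clamped

print(effective_limit())     # 20
print(effective_limit(100))  # 50
```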
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It explains it returns 'most relevant tools with names and descriptions' and hints at search behavior. It doesn't describe auth needs or rate limits, but the tool is a simple search, so transparency is good. Could mention whether it supports pagination or fuzzy matching.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each earned: first says what it does, second specifies output, third gives when-to-use advice. No fluff, front-loaded with action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description could mention the structure of returned results (e.g., relevance scores or tool details). But it's a discovery tool, and the purpose is clear. The description is sufficient for the simple task of tool discovery.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds context by characterizing query as a natural language description and providing example queries, which is helpful. For limit, it merely repeats the schema's "default 20, max 50" without adding new information. Still, the examples add value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it searches a tool catalog by description, returns relevant tools with names and descriptions, and positions it as a discovery tool. The verb 'search' and resource 'tool catalog' are specific and it distinguishes from siblings like ask_pipeworx (likely conversational) and rxnorm_* (domain-specific).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear when-to-use guidance and implies it's for tool selection, not for answering queries directly, differentiating it from ask_pipeworx.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (Grade C)

Delete a stored memory by key.

Parameters (JSON Schema)
- key (required): Memory key to delete
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description only states 'Delete a stored memory by key.' Does not disclose irreversibility, authorization needs, or side effects beyond deletion.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, direct, no wasted words. Front-loaded with verb and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has a single required parameter and no output schema, the description is minimal but could still benefit from clarifying that deletion is permanent or that it returns a confirmation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single 'key' parameter, so description adds no extra meaning beyond schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the action (Delete), resource (stored memory), and method (by key). Differentiates from sibling tools like recall and remember.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this vs alternatives (e.g., remember for storing, recall for retrieving). No prerequisites or context provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (Grade A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
- key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool can retrieve by key or list all, which is sufficient. However, it doesn't mention if the operation is read-only or if it has side effects (e.g., no deletion). Given the lack of annotations, a score of 3 is appropriate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, clear and front-loaded. The first sentence explains what it does, the second provides usage context. It is concise without missing necessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one optional parameter, no output schema), the description is fairly complete. It explains retrieval and listing behavior. However, it doesn't mention whether the tool is persistent across sessions or if it returns a list of keys vs. full memory content.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (one parameter 'key' fully described). The description adds context that omitting the key lists all memories, which clarifies behavior beyond the schema's basic description. Baseline 3 is correct.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a memory by key or lists all memories, with a specific verb ('Retrieve') and resource ('stored memory'). It distinguishes itself from sibling tools like 'remember' and 'forget' by being the retrieval counterpart.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use the tool ('Retrieve context you saved earlier'), and implies when not to use it (omit key for listing). It doesn't explicitly mention alternatives, but given the sibling tools, the usage is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
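The remember/recall/forget trio describes a plain key-value store where anonymous entries expire after 24 hours. A minimal sketch of those semantics (the class and its internals are illustrative, not the server's code):

```python
import time

SESSION_TTL_SECONDS = 24 * 60 * 60  # anonymous sessions last 24 hours

class SessionMemory:
    """Models the described remember/recall/forget behavior."""

    def __init__(self, persistent=False):
        self.persistent = persistent  # authenticated users keep memory
        self._store = {}              # key -> (value, stored_at)

    def remember(self, key, value):
        self._store[key] = (value, time.time())

    def recall(self, key=None):
        self._expire()
        if key is None:
            return sorted(self._store)  # omit key to list all keys
        value, _ = self._store[key]
        return value

    def forget(self, key):
        self._store.pop(key, None)      # deletion is permanent

    def _expire(self):
        if self.persistent:
            return
        cutoff = time.time() - SESSION_TTL_SECONDS
        self._store = {k: v for k, v in self._store.items() if v[1] >= cutoff}

mem = SessionMemory()
mem.remember("target_ticker", "AAPL")
print(mem.recall("target_ticker"))  # AAPL
print(mem.recall())                 # ['target_ticker']
mem.forget("target_ticker")
print(mem.recall())                 # []
```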
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses key behavioral details: persistence depends on authentication (authenticated users get persistent memory, anonymous sessions last 24 hours). However, it does not mention any side effects, limits (e.g., memory size), or overwrite behavior for existing keys.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with three sentences: purpose, usage, and behavioral detail. No wasted words, and the most critical information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity (2 simple params, no output schema) and lack of annotations, the description is mostly complete. It covers purpose, usage context, and persistence behavior. Minor gaps: no mention of overwrite behavior or memory size limits, but these are not critical for a simple key-value store.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the schema descriptions are already detailed (e.g., example keys like 'subject_property', 'target_ticker'). The description adds a general purpose but does not add meaning beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool stores a key-value pair in session memory, with specific use cases like saving intermediate findings, user preferences, or context across tool calls. The verb 'store' and resource 'session memory' are precise, and it distinguishes from siblings like 'forget' and 'recall'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use the tool (to save context across calls) and differentiates use cases (authenticated vs anonymous sessions). However, it does not explicitly state when not to use it or mention alternative tools like 'recall' for retrieval, though the distinction is implied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

rxnorm_get_properties (Grade C)

Get full medication details by RxCUI ID. Returns name, synonyms, term type, language, and status. Use after rxnorm_search to retrieve complete drug information.

Parameters (JSON Schema)
- rxcui (required): RxNorm concept ID (e.g., "7052" for metformin)
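Since the server wraps the RxNav REST API, this tool presumably calls RxNav's public properties endpoint. A sketch of that call; the sample payload below is abbreviated and illustrative, not live data:

```python
import json
from urllib.parse import quote

# RxNav's public properties endpoint: /REST/rxcui/{rxcui}/properties.json
def properties_url(rxcui):
    return f"https://rxnav.nlm.nih.gov/REST/rxcui/{quote(rxcui)}/properties.json"

# Trimmed example of the response shape for RxCUI 7052 (metformin).
sample_response = json.loads("""
{"properties": {"rxcui": "7052", "name": "metformin", "synonym": "",
                "tty": "IN", "language": "ENG", "suppress": "N"}}
""")

props = sample_response["properties"]
print(properties_url("7052"))
print(props["name"], props["tty"], props["language"])
```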
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should fully disclose behavior. It doesn't mention that this is a read-only operation, whether the suppress flag indicates hidden data, or if the tool handles invalid RxCUIs gracefully. The description adds minimal behavioral context beyond what the input schema already provides.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences; front-loaded with purpose and lists return fields efficiently. No fluff, but the second sentence could be integrated into the first without loss.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single required parameter, no output schema), the description is mostly adequate but lacks usage context (e.g., prerequisite of knowing RxCUI) and does not mention any limitations or error handling. It meets minimal completeness but could better support the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the description adds value by listing returned fields (name, synonym, etc.) but does not explain the rxcui parameter's format or provide examples beyond the schema's own example '7052'. Baseline 3 is appropriate as schema already describes the parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool gets properties for a drug by its RxCUI, listing specific return fields (name, synonym, term type, language, suppress flag). It distinguishes itself from siblings like rxnorm_search (search) and rxnorm_related (related concepts) by focusing on a single concept ID lookup.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives. It doesn't mention that the user must first know the RxCUI (e.g., from rxnorm_search), nor does it contrast with rxnorm_interactions or rxnorm_ndc which serve different purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

rxnorm_interactions (Grade A)

Check drug-drug interactions by RxCUI. Returns interaction pairs and severity levels. Use to identify safety risks when combining medications.

Parameters (JSON Schema)
- rxcui (required): RxNorm concept ID of the drug to check interactions for
- sources (optional): Optional interaction source filter: "DrugBank", "ONCHigh", or omit for all sources
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly states the tool may return errors due to retirement, which is critical for the agent. It also mentions the optional source filter. However, it doesn't describe what the tool returns on success or failure beyond potential errors, which is a minor gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences: the first clearly states the purpose, and the second provides critical usage guidance. Every sentence is necessary and front-loaded. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input schema (two string parameters, no output schema), the description adequately covers the tool's purpose, the retirement warning, and the source filter. It doesn't explain return values or error handling beyond potential errors, but for a tool with no output schema and a clear warning, this is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (both parameters have descriptions), so the baseline is 3. The description does not add any additional meaning beyond what the schema already provides (e.g., it doesn't clarify the format of the RxCUI or the effect of the source filter).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks drug-drug interactions for a given RxCUI, distinguishing it from sibling tools like rxnorm_get_properties or rxnorm_search which have different purposes. The verb 'Check' and resource 'drug-drug interactions' are specific, though the retired status note may slightly distract from the core purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly warns that the NIH retired this API in January 2024, advising against use and directing to alternative tools (PubMed or drug label lookups). This provides clear when-not-to-use guidance and suggests appropriate alternatives, fully satisfying the dimension.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

rxnorm_ndc (Grade A)

Get NDC (National Drug Code) identifiers by RxCUI. Returns unique US pharmacy codes for billing and fulfillment. Use for prescription processing or inventory lookup.

Parameters (JSON Schema)
- rxcui (required): RxNorm concept ID
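This tool likely maps onto RxNav's public NDC lookup endpoint. A sketch of the call and of extracting codes from the response; the NDC values below are illustrative placeholders, not real lookup results:

```python
import json

# RxNav's public NDC endpoint: /REST/rxcui/{rxcui}/ndcs.json
def ndc_url(rxcui):
    return f"https://rxnav.nlm.nih.gov/REST/rxcui/{rxcui}/ndcs.json"

# Trimmed, illustrative example of the response shape.
sample = json.loads("""
{"ndcGroup": {"ndcList": {"ndc": ["00093101501", "00093104801"]}}}
""")

ndcs = sample["ndcGroup"]["ndcList"]["ndc"]
print(ndc_url("7052"))
print(len(ndcs), "NDC codes, first:", ndcs[0])
```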
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so description carries full burden. It discloses that the tool gets NDC identifiers, but no behavioral traits like rate limits, authentication needs, or data freshness are mentioned. Adequate for a simple lookup tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with action and resource, no wasted words. Could be slightly more structured but effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity (1 param, no output schema), description is mostly adequate. However, lacks details on return format (e.g., list or object) and potential errors. Complete enough for basic use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (one parameter with description). The description adds meaning by explaining what the output is (NDC codes) but doesn't elaborate on the input parameter beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb (Get), resource (NDC identifiers), and target (drug by RxCUI), distinguishing it from sibling tools like rxnorm_get_properties or rxnorm_search. It also explains what NDC codes are, adding context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implied usage: when you need NDC codes for a known RxCUI. No explicit when-not-to-use or alternatives mentioned. Sibling tools exist for properties, interactions, etc., but no comparison provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
