NIH RePORTER
Server Details
NIH RePORTER MCP — every NIH-funded research project (free, no auth)
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: pipeworx-io/mcp-nih-reporter
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 13 of 13 tools scored. Lowest: 3.3/5.
Most tools have clearly distinct purposes: NIH-specific tools like get_project and search_grants are separate from general platform tools. However, ask_pipeworx overlaps as a catch-all, and entity_profile/compare_entities have similar scope but are differentiated by description.
Naming mix: some use verb_noun (get_project, search_grants), others use noun_noun (entity_profile, pipeworx_feedback) or single verbs (forget, recall). This inconsistency may cause confusion about tool roles.
13 tools is within the typical range (3-15), but the set includes many general-purpose platform tools not directly tied to NIH reporting, making the server feel slightly over-scoped for its stated purpose.
For an NIH reporter, the core operations (search and retrieve grants and publications) are covered. Minor gaps: no bulk retrieval by PI/institution and no write capabilities, but the general tools provide supplementary functionality.
Available Tools
14 tools

ask_pipeworx (A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language |
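As a sketch of how an agent would invoke this tool over MCP, the following tools/call request passes the single required parameter; the question text is taken from the examples in the description, and the request id is arbitrary:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the current unemployment rate?"
    }
  }
}
```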
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavior fully. It explains that the tool automatically selects the best tool and fills arguments, but does not detail limitations, failure modes, or whether the operation is read-only. It is somewhat transparent but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: three sentences and a set of examples. Every sentence adds value, and the purpose is stated upfront. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description is sufficiently complete. It covers what the tool does, how to use it, and gives examples. However, it could mention the return format or that it returns text answers.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers the only parameter fully (100% coverage). The description adds value by specifying 'plain English' and providing concrete examples, which clarifies the expected input format beyond the schema's 'natural language' description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It uses a specific verb ('ask') and resource ('question'), and distinguishes itself from siblings by positioning itself as a general question-answering interface, unlike specific tools like compare_entities or search_publications.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool should be used for any natural language question ('No need to browse tools or learn schemas — just describe what you need'), but it does not explicitly state when not to use it or provide alternatives from sibling tools. Examples help but exclusions are missing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). |
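A hedged example of the arguments object for a company comparison, using tickers from the description above (a drug comparison would pass type "drug" with 2–5 drug names instead):

```json
{
  "type": "company",
  "values": ["AAPL", "MSFT", "GOOGL"]
}
```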
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses data sources and return type (resource URIs), but does not explicitly state read-only nature or other behavioral traits like error handling or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with no redundancy. The first sentence captures the core action; the next two add detail. Every sentence serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description vaguely mentions 'paired data + resource URIs' but lacks specifics on return structure, error conditions, or edge cases. More detail would make it complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and description adds significant value by listing specific data points returned per type (e.g., revenue, net income for companies; adverse event counts for drugs), going beyond the schema definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it compares 2-5 entities side by side, specifying distinct data fields for 'company' and 'drug' types. It differentiates from sibling tools like search_grants and search_publications which perform different operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explains when to use: 'Replaces 8–15 sequential agent calls' implies efficiency for multi-entity comparisons. However, it does not explicitly mention when not to use or provide alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") |
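An illustrative arguments object, with a query borrowed from the schema examples and a hypothetical limit below the documented maximum of 50:

```json
{
  "query": "look up FDA drug approvals",
  "limit": 10
}
```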
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It explains the tool returns tool names and descriptions via search, but lacks details on behavior like query indexing, real-time updates, or rate limits. Adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, then usage guidance. Every sentence earns its place with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but description explains return type (names and descriptions). Provides context about when it's useful (500+ tools). Lacks details on search algorithm or scope, but sufficient for a simple search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds minimal value beyond schema: 'describing what you need' paraphrases the query description. No additional parameter information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Search' and the resource 'Pipeworx tool catalog', and specifies that it returns 'most relevant tools with names and descriptions'. It distinguishes itself from sibling tools as a catalog search utility.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs to call this tool first when many tools are available, providing clear context for when to use it. Could mention when not to use it or alternative tools, but omissions are minor.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. |
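A minimal example of the arguments, assuming the ticker form shown in the description (a zero-padded CIK such as "0000320193" would work the same way):

```json
{
  "type": "company",
  "value": "AAPL"
}
```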
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Details the data sources included (SEC filings, XBRL, patents, news, LEI) and that it returns pipeworx:// citation URIs. Mentions it replaces many calls, but no annotations provided so description carries burden. Lacks mention of rate limits or read-only nature, but overall informative.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose. Clear and concise, though the first sentence is a bit dense. No superfluous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (multiple data sources), the description fully covers what is returned, what data sources are used, what is excluded (federal contracts), and parameter usage guidance. No output schema exists, but mentions citation URIs. References sibling tool for name resolution.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds significant context: explains that type only supports 'company', that value can be ticker or CIK (with examples), and that names are not supported (recommending resolve_entity). Goes beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns a full profile of an entity across multiple data sources (SEC, XBRL, USPTO, GDELT, GLEIF) in one call. Distinguishes from siblings like resolve_entity and compare_entities by specifying its comprehensive scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (for company profiles needing multiple data sources) and when not to use (for federal contracts, use usa_recipient_profile). Also advises to use resolve_entity first if only a name is available.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (B)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete |
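A sketch of the arguments; the key name here is hypothetical and would match whatever key was previously used with remember:

```json
{
  "key": "target_ticker"
}
```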
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description bears full burden. It states it deletes a memory but does not disclose if deletion is permanent, reversible, or requires special permissions. Minimal behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single concise sentence with no unnecessary words. Front-loaded with action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple tool with one parameter and no output schema, description is adequate but lacks behavioral details like irreversibility or exact matching requirements. Could be more complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with parameter description 'Memory key to delete'. Description adds no extra meaning beyond the schema, consistent with baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the verb (delete), resource (stored memory), and mechanism (by key). It distinguishes from siblings like 'remember' (store) and 'recall' (retrieve).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'remember' or 'recall'. No mention of prerequisites or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_project (A)
Fetch a single NIH grant record by application ID (numeric, distinct from project number). Returns full project details including complete abstract, PIs, terms, sub-projects, and award history.
| Name | Required | Description | Default |
|---|---|---|---|
| appl_id | Yes | NIH application ID (integer) |
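An illustrative arguments object; the application ID shown is a made-up placeholder, not a real NIH record:

```json
{
  "appl_id": 10123456
}
```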
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden of behavioral disclosure. It correctly implies a read-only fetch by listing returns, but lacks details on error handling, permissions, or rate limits. Adequate for a simple read tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the purpose and then lists the returned content. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description covers the purpose and return content. It does not mention error behavior (e.g., missing ID), but it is largely complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description. The description adds value by clarifying that appl_id is numeric and distinct from project number, enhancing understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches a single NIH grant record by application ID, specifying it is numeric and distinct from project number. It lists returned fields (abstract, PIs, etc.), distinguishing it from siblings like search_grants which likely returns multiple records.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a user has an application ID and wants a single grant record, but it does not explicitly state when to avoid this tool or mention alternatives like search_grants. The guidance is adequate but not explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. |
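A hypothetical example of the arguments for a data_gap report; the message and context values are invented for illustration:

```json
{
  "type": "data_gap",
  "message": "search_grants does not expose award end dates for multi-year projects.",
  "context": "NIH RePORTER pack"
}
```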
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must disclose behavioral traits. It transparently states the rate limit of 5 messages per day per identifier and notes the tool is free. It does not describe response behavior, but that is acceptable for a feedback tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with three sentences: one for purpose, one for guidance, and one for constraints. No unnecessary words, front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema or annotations, the description covers the tool's purpose, usage, and a key behavioral constraint (rate limit). It is complete enough for a simple feedback tool, though it could optionally mention what happens after submission.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing a baseline of 3. The description adds value beyond the schema by instructing users to describe what they tried in terms of Pipeworx tools/data and to avoid including the end-user's prompt, giving meaningful context for the message parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: sending feedback to the Pipeworx team. It lists specific use cases (bug reports, feature requests, missing data, praise) which distinguishes it from sibling tools that are for querying or discovering data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool (for various feedback types) and what to avoid (including the end-user's prompt verbatim). It also mentions rate limits. However, it does not explicitly compare to sibling tools or state when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) |
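An illustrative arguments object for fetching a single value (the key name is hypothetical); per the schema, passing an empty arguments object instead would list all saved keys:

```json
{
  "key": "target_ticker"
}
```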
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description does not disclose behavioral traits such as whether retrieval is read-only, side effects, or behavior when key is missing. More transparency is needed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no redundant information. Front-loaded with purpose and usage. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (one optional parameter, no output schema), description covers main use cases. Could mention behavior on missing key or empty list, but adequate overall.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and schema already documents the key parameter. Description adds minimal extra context about omitting key to list all, but does not significantly enhance understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a memory by key or lists all, with specific verb 'Retrieve' and resource 'stored memory'. It distinguishes from sibling tools like 'remember' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use: 'retrieve context you saved earlier'. Does not mention when not to use or alternatives, but siblings (remember, forget) make context clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). |
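An illustrative arguments object using the relative-window shorthand the description recommends for typical monitoring:

```json
{
  "type": "company",
  "value": "AAPL",
  "since": "30d"
}
```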
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the parallel fan-out to multiple sources, the accepted since formats (ISO and relative), and the output shape (structured changes + total_changes count + URIs). It does not discuss rate limits, authentication, or edge cases (e.g., no changes), but the transparency is still above average.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (5 sentences), front-loaded with the core purpose, then efficiently covers parameters, behavior, and return format. Every sentence adds necessary context; no redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description covers input formats, parallel execution, and output structure. It does not mention error handling or limitations (e.g., only company type), but for typical usage it is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds significant value: it provides concrete examples for since (ISO date and relative like '7d', '3m', '1y'), recommends typical values ('30d' or '1m' for monitoring), clarifies that value can be ticker or padded CIK, and explicitly limits type to 'company'. This goes well beyond the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with a clear verb and resource: 'What's new about an entity since a given point in time.' It then elaborates on the specific behavior for type 'company' (fanning out to SEC EDGAR, GDELT, USPTO) and provides explicit use cases ('brief me on what happened with X' or change-monitoring workflows), distinguishing it from siblings like entity_profile or compare_entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool: for 'brief me on what happened with X' or change-monitoring workflows. It does not, however, mention when not to use it or provide explicit alternatives, though the sibling list and the specific fan-out behavior imply a niche. Slight lack of negative examples, but overall clear guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) |
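A sketch of the arguments, built from the example key names in the schema; the stored value is hypothetical:

```json
{
  "key": "target_ticker",
  "value": "AAPL"
}
```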
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses persistence behavior for authenticated vs. anonymous users, but no annotations exist; could mention side effects, error handling, or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words, purpose stated first, zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully adequate for a simple key-value store tool; covers purpose, usage, and persistence without missing critical details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters; description adds value by suggesting example keys and values beyond raw schema types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Store a key-value pair in your session memory' with a specific verb and resource, clearly distinguishing it from sibling tools like recall and forget.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides concrete use cases ('save intermediate findings, user preferences, or context') and differentiates persistence based on auth status, but lacks explicit when-not-to-use or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). |
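An illustrative arguments object for the drug form, taken from the schema examples (a company lookup would pass type "company" with a ticker, CIK, or name):

```json
{
  "type": "drug",
  "value": "ozempic"
}
```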
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry all behavioral disclosure. It states the tool resolves entities and returns IDs and URIs, implying a non-destructive lookup, but it does not explicitly mention idempotency, rate limits, or authentication requirements. The description could be more transparent about side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences long, each contributing useful information: the primary purpose, supported types with examples, and the benefit over alternative approaches. It is front-loaded and avoids redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with two parameters and no output schema, the description adequately covers the core functionality and output format (IDs and resource URIs). It could benefit from mentioning the response structure or error cases, but it is sufficiently complete for an agent to understand the tool's role.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the input schema already has 100% coverage with descriptions for both parameters, the description adds practical examples (ticker, CIK, brand names) that clarify the expected formats beyond the schema's generic descriptions. This provides additional value for the agent.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool resolves entities to canonical IDs across data sources in a single call, specifying the two entity types (company, drug) and the outputs. It distinguishes itself from other tools by noting it replaces 2-3 lookup calls, providing a clear purpose and scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives clear context on when to use (resolving companies or drugs) and includes examples of input values. However, it does not explicitly mention when not to use the tool or provide comparisons to sibling tools like compare_entities or search_publications.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_grants (A)
Search NIH-funded research projects. Filter by free-text query (matches title/abstract/terms), PI name, organization, fiscal year, US state, or NIH institute code (NCI, NHLBI, NIAID, etc.). Returns project number, PI, institution, fiscal year, award amount, and abstract preview.
| Name | Required | Description | Default |
|---|---|---|---|
| ic | No | NIH Institute/Center code (e.g., "NCI", "NHLBI", "NIMH") | |
| limit | No | Results per page (1-500, default 25) | |
| query | No | Free-text search across title, abstract, terms | |
| state | No | US state code (e.g., "CA") | |
| offset | No | Pagination offset (default 0) | |
| pi_name | No | PI last name (or any name part) | |
| fiscal_year | No | Fiscal year (e.g., 2024) | |
| organization | No | Institution name (e.g., "Stanford University") |
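A hedged example combining several optional filters; the free-text query is invented, while the institute code, fiscal year, and limit follow the schema examples and defaults:

```json
{
  "query": "pancreatic cancer immunotherapy",
  "ic": "NCI",
  "fiscal_year": 2024,
  "limit": 25
}
```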
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations; description mentions return fields but lacks behavioral traits like pagination, rate limits, or read-only nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences covering purpose and output with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers filters and output, but misses pagination details and filter combination logic; adequate for a search tool with 8 optional params.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds context beyond schema (e.g., free-text matches title/abstract/terms) and provides examples; schema coverage is 100% so baseline 3, with added value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Search NIH-funded research projects' with specific filters and return fields. Distinct from siblings like search_publications.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this vs. alternatives, but filters are well-listed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_publications (A)
Search publications acknowledging NIH funding. Filter by PMID, application ID, or core project number. Useful for "what came out of this grant" follow-ups.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Results per page (1-500, default 25) | |
| pmids | No | Comma-separated PubMed IDs | |
| offset | No | Pagination offset | |
| appl_ids | No | Comma-separated NIH application IDs | |
| core_project_nums | No | Comma-separated core project numbers (e.g., "R01CA123456") |
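An illustrative arguments object using the core project number format shown in the schema:

```json
{
  "core_project_nums": "R01CA123456",
  "limit": 25
}
```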
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden for behavioral disclosure. It describes a read-only search operation, which is fine, but it does not mention rate limits, response format, or sorting behavior. The description is adequate but not detailed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise with two sentences. The first sentence states the action and scope, the second provides a use case. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
There is no output schema, so the description should give a hint about return format, but it does not. The parameters are well-documented, but for a search tool, the description is incomplete regarding what fields are returned and how results are paginated. It covers basic usage but lacks full contextual detail.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the baseline is 3. The description does not add new meaning beyond what the parameter descriptions already provide. It restates filter options but does not enhance semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for publications acknowledging NIH funding, with specific filter options (PMID, application ID, core project number). It is distinct from sibling tools like search_grants and get_project, which focus on grants and projects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear use case: 'Useful for "what came out of this grant" follow-ups.' This implies when to use it, but does not explicitly state when not to use or mention alternative tools. This is good but lacks exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim (A)
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". |
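A minimal example of the arguments, reusing the claim text from the parameter description:

```json
{
  "claim": "Apple's FY2024 revenue was $400 billion"
}
```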
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fully discloses the tool's behavior: it returns a verdict, structured form, actual value with citation, and percent delta. It also reveals the underlying data sources and mentions it is a composite tool combining multiple steps. No destructive side effects are implied.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at three sentences, each providing essential information: what the tool does, what it supports, and what it returns. No word is wasted, and the structure is front-loaded with the key action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single parameter and no output schema, the description is remarkably complete. It explains the supported domain, the process (comparing against authoritative sources), the output structure, and the benefit over alternative approaches. All necessary context for an agent to use the tool is present.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter 'claim' is described in the schema with an example, and the tool description reinforces that by framing it as a natural-language factual claim. The schema description coverage is 100%, meeting the baseline for a score of 3; the contextual example justifies moving to 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's action (fact-check), target (natural-language claim), and authoritative sources (SEC EDGAR + XBRL). It specifies the domain (company-financial claims for US public companies) and explicitly distinguishes itself from alternatives by mentioning it replaces 4-6 sequential agent calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates the tool is meant for company-financial claims and lists the types of metrics supported (revenue, net income, cash). While it doesn't explicitly state when not to use it, the 'v1 supports' phrasing implies limitations to that domain. It provides clear context for when to use it by contrasting with the multi-step alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.