
Server Details

SBIR MCP — wraps the SBIR.gov public API (free, no auth)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-sbir
GitHub Stars: 0

Tool Descriptions (A)

Average 3.9/5 across 10 of 10 tools scored. Lowest: 3.2/5.

Server Coherence (A)
Disambiguation: 3/5

The ask_pipeworx tool overlaps significantly with the sbir_* tools, as it can answer SBIR questions by invoking the appropriate tool. This creates ambiguity about when to use ask_pipeworx vs. specific sbir tools. The memory tools (forget, recall, remember) and discover_tools are distinct.

Naming Consistency: 3/5

SBIR tools follow a consistent 'sbir_<operation>_<target>' pattern (e.g., sbir_search_awards). However, ask_pipeworx, discover_tools, and the memory tools break this pattern with no clear convention.

Tool Count: 4/5

10 tools is reasonable for a server focused on SBIR data with auxiliary memory and discovery utilities. The count is slightly padded by the general-purpose memory tools, but still within appropriate range.

Completeness: 4/5

The SBIR data tools cover searching awards, solicitations, company awards, agency stats, and individual award details. Missing are create/update/delete operations, but for a read-only government data source, this is complete. The memory tools add session persistence.

Available Tools

10 tools
ask_pipeworx (A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
  question (required): Your question or request in natural language
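As a concrete illustration, here is a minimal sketch of what a call to this tool could look like as an MCP `tools/call` JSON-RPC request (the request id and the example question are illustrative):

```python
import json

# Minimal MCP tools/call request for ask_pipeworx; only the
# required "question" argument is needed.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
payload = json.dumps(request)  # serialized body sent to the server
```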
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must cover behavioral traits. It mentions the tool 'picks the right tool, fills the arguments, and returns the result', indicating it automates tool selection and parameter filling. However, it does not disclose any limitations, data freshness, or potential errors. The description is somewhat vague about what happens behind the scenes, earning a moderate score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at three sentences plus examples, front-loading the core action. Each sentence adds value: what it does, how it works, and examples. The examples are helpful but add length; still, it remains well-structured and avoids redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema, no annotations), the description is largely complete. It explains the purpose, behavior, and provides examples. It could mention that results may vary by data source, but overall it covers the essential context for an agent to decide to use this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with only one parameter ('question'). The description adds meaning by explaining that the question should be in natural language and gives examples, which goes beyond the schema's 'Your question or request in natural language'. However, since the schema already provides a clear description, the additional value is modest, yielding a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool takes a natural language question and returns an answer, using 'Ask a question' and 'get an answer'. It distinguishes itself from siblings by emphasizing it automatically selects the best data source and fills arguments, which no other sibling tool does.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'just describe what you need' and gives three examples. It does not explicitly state when not to use it or mention alternatives, but the context of being a general question-answering tool with specific examples implies it's for broad queries. The lack of negative examples or comparisons to siblings slightly reduces the score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
  limit (optional): Maximum number of tools to return (default 20, max 50)
  query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
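Since the schema caps limit at 50, a client can clamp the value before sending. A small sketch (the helper name is hypothetical):

```python
def discover_tools_args(query, limit=20):
    # Build the arguments dict for discover_tools, clamping limit
    # to the schema's stated maximum of 50.
    return {"query": query, "limit": min(limit, 50)}

args = discover_tools_args("find trade data between countries", limit=80)
```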
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses that the tool returns 'most relevant tools with names and descriptions', which is clear. However, it does not mention if it's read-only or if there are side effects; given the search nature, this is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each with clear purpose: first states what it does, second describes output, third gives usage guidance. No wasted words, front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 2 simple parameters, no output schema, and no annotations, the description is complete. It explains input (natural language query, limit), output (tool names and descriptions), and when to use (first step). No missing critical information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. Description adds value by explaining the query parameter should be a 'Natural language description' with examples, and notes default/max for limit. This enhances the schema's minimal descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb 'Search' and the resource 'Pipeworx tool catalog', specifying the action is to find tools by describing needs. It distinguishes from siblings by indicating this is the first call when 500+ tools are available, while siblings like sbir_search_awards have a different domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'Call this FIRST' and provides context for when to use: when 500+ tools are available and need to find the right ones. Implicitly suggests alternatives are other tools that perform specific tasks after discovery.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (A)

Delete a stored memory by key.

Parameters (JSON Schema)
  key (required): Memory key to delete
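Because the description labels the operation as a delete but says nothing about reversibility, a cautious caller should treat it as permanent. A minimal sketch of the call parameters (the key is illustrative):

```python
# tools/call params for forget; since the description does not say
# whether deletion is reversible, treat it as permanent and verify
# the key (e.g., via recall) before calling.
params = {"name": "forget", "arguments": {"key": "subject_property"}}
```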
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden for behavioral disclosure. It states the action is destructive ('Delete') but does not specify if the deletion is irreversible, whether it requires confirmation, or any side effects. It is adequate but not rich.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that clearly conveys the purpose. No redundant words or unnecessary details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (one required parameter, no output schema, no nested objects), the description is complete enough for an agent to understand its function. However, it could hint at whether the operation is idempotent or if the key must exist.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema already provides a description for the single parameter 'key', achieving 100% coverage. The tool description does not add extra meaning beyond 'Memory key to delete', but the schema alone is sufficient. The score is elevated because schema coverage is high and the parameter is simple.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete') and the resource ('a stored memory by key'), matching the tool's name 'forget'. It is specific and distinguishes from siblings like 'recall' and 'remember' which are for retrieval and storage respectively.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool: when you need to delete a specific memory identified by its key. It does not explicitly state when not to use it or name alternatives, but the sibling tool names (recall, remember) provide implicit context for differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
  key (optional): Memory key to retrieve (omit to list all keys)
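The two modes described above (lookup by key vs. list all) can be sketched with a small argument builder (the helper name is hypothetical):

```python
def recall_args(key=None):
    # Per the description, omitting key lists all stored memories;
    # passing a key retrieves that single memory.
    return {"key": key} if key is not None else {}

list_all = recall_args()                 # list-all mode: empty arguments
single = recall_args("target_ticker")    # single-key lookup
```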
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so description carries full burden. It clearly states the tool is for retrieving previously stored memory, implying a read-only operation. It also clarifies that omitting key lists all keys, which is a behavioral detail not in the schema. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is concise (two sentences), front-loaded with the primary action, and adds a usage hint in the second sentence. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a simple tool with one optional parameter and no output schema, the description is sufficient. It explains the two modes of operation and the context for use. No additional details about return format are needed as there is no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. Description adds value by explaining that omitting the key lists all memories, which is not explicit in the schema. This extra semantic helps the agent understand the optional nature of the parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb 'Retrieve' and resource 'stored memory by key', and also explains the alternate behavior of listing all memories when key is omitted. This distinguishes it from sibling tools like 'remember' (store) and 'forget' (delete).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description explicitly says when to use ('retrieve context you saved earlier') and implies when to omit key (to list all). However, it does not mention when not to use this tool or suggest alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
  key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
  value (required): Value to store (any text — findings, addresses, preferences, notes)
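Because the value is free-form text, structured findings need to be serialized by the caller before storing. A hedged sketch (the stored finding is illustrative):

```python
import json

# Both key and value are required; values are plain text, so
# serialize structured data yourself before calling remember.
finding = {"ticker": "AAPL", "filing": "10-K"}
arguments = {"key": "target_ticker", "value": json.dumps(finding)}
```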
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description discloses important behavioral traits: memory persistence based on authentication and 24-hour expiration for anonymous sessions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each serving a distinct purpose: what it does, when to use, and behavioral nuances. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (two required params, no output schema, no nested objects), the description is complete. It covers purpose, usage, and key behavioral traits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the description adds little beyond examples. It mentions storing text but does not elaborate on format constraints beyond what schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it stores a key-value pair in session memory, and distinguishes itself from siblings like recall and forget by mentioning persistence behavior.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides guidance on when to use (save intermediate findings, user preferences, context) and distinguishes persistence between authenticated and anonymous sessions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sbir_agency_stats (A)

Get SBIR/STTR award counts by agency. Specify agency (e.g., "DOD", "NASA", "NSF") or omit to see all major agencies.

Parameters (JSON Schema)
  agency (optional): Specific agency to get count for (e.g., "DOD", "NASA"). Omit to get counts for all major agencies.
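The optional agency parameter gives the tool two modes, which can be sketched as two `tools/call` param payloads:

```python
# agency is optional: omit it for counts across all major agencies,
# or pass one code for a single agency's count.
all_agencies = {"name": "sbir_agency_stats", "arguments": {}}
one_agency = {"name": "sbir_agency_stats", "arguments": {"agency": "NASA"}}
```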
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses the tool's behavior: it returns counts for the specified agency or all major agencies. It lists the major agencies (DOD, HHS, NASA, NSF, DOE, USDA). There are no annotations provided, so the description carries the full burden. It does not mention response format, data freshness, or any rate limits, but given the simplicity of the tool, this is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loading the purpose and then providing conditional behavior. Every sentence adds value with no fluff or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple counting tool with a single optional parameter and no output schema, the description is largely complete. It specifies what data is returned (counts) and the agencies covered. A minor gap: it does not explain if the count includes both SBIR and STTR or if they are separate, but this is likely self-evident given the tool name.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the only parameter 'agency' has a description). The description adds context by explaining the effect of omitting the parameter (returns all major agencies) and providing examples. This adds value beyond the schema alone, but does not go into detailed formatting or validation rules. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving SBIR/STTR award counts by agency. It specifies the verb ('Get'), the resource ('SBIR/STTR award counts by agency'), and distinguishes two modes: specific agency or all major agencies. This differentiates it from sibling tools that deal with company awards, specific awards, or solicitation searches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool: for award counts by agency. It explains that if an agency is specified, returns that agency's count; otherwise returns counts for all major agencies. However, it does not explicitly state when not to use it or mention alternatives among sibling tools, such as sbir_search_awards for detailed award data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sbir_company_awards (B)

Get complete SBIR/STTR award history for a company. Returns all awards with amounts, agencies, topics, and funding phases.

Parameters (JSON Schema)
  limit (optional): Number of results to return (default 50)
  company (required): Company name to search for
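Since limit defaults server-side to 50, a caller only needs to send it when overriding the default. A sketch (the helper name and company are hypothetical):

```python
def company_awards_args(company, limit=None):
    # company is required; limit is only included when the caller
    # overrides the server-side default of 50.
    args = {"company": company}
    if limit is not None:
        args["limit"] = limit
    return args

args = company_awards_args("Astrobotic Technology")
```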
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries the full burden. It states that the tool returns a complete award history, which is helpful. However, it does not mention authentication requirements, rate limits, or whether results are paginated. The 'limit' parameter hints at pagination but is not described.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core purpose, and provides key return fields. It is concise but could be more efficient by merging the two sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 2 params and no output schema, the description is adequate. It explains the main return type but lacks details on pagination behavior (e.g., does 'limit' affect the full list?) and any filtering capabilities. Given the context of siblings, a bit more guidance would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description does not add meaning beyond the schema: 'company' is self-explanatory, and 'limit' is described in the schema as 'Number of results to return (default 50)'. No additional semantics provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and resource 'SBIR/STTR awards for a specific company', and lists the fields returned (amounts, agencies, topics, phases). It distinguishes from siblings like 'sbir_get_award' (single award) and 'sbir_search_awards' (search), though not explicitly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you need all awards for a known company. No explicit when-not or alternatives are given, but the context of sibling names suggests other tools for specific award retrieval or search.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sbir_get_award (A)

Get full details for a specific SBIR/STTR award by ID. Returns company, award amount, agency, abstract, phase, and metadata.

Parameters (JSON Schema)
  award_id (required): The unique award ID
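Because the description does not specify the award ID format, the safest pattern is to pass an ID through verbatim from a prior search result. A sketch (the ID value is hypothetical):

```python
# Pass the award ID through unchanged from a previous
# sbir_search_awards result, since the ID format is unspecified.
award_id = "12345"  # hypothetical ID from a prior search result
params = {"name": "sbir_get_award", "arguments": {"award_id": award_id}}
```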
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries full burden. It correctly indicates a read operation (returns info) but does not mention any side effects, permissions, or constraints. Since there are no annotations to contradict, it is adequate but not detailed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the purpose, input, and output content without any redundant words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has a single parameter, no output schema, and no nested objects, the description provides enough context: what it does, what input is needed, and what output includes. It could mention if the award ID is a specific format or how errors are handled, but overall it is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with a single parameter 'award_id' described as 'The unique award ID'. The description adds that the ID is for an SBIR/STTR award, which provides context beyond the schema. However, it does not add format or examples, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves details for a single award by ID, lists the data fields returned (company, amount, agency, abstract, phase), and distinguishes it from sibling tools like sbir_search_awards which would return multiple awards.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description specifies the tool is for a single award and requires an award ID, implying it should be used when the specific ID is known. It implicitly distinguishes from sbir_search_awards but does not explicitly state when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sbir_search_awards (A)

Search SBIR/STTR awards by keyword, agency (e.g., "DOD", "NASA"), year, company, or state. Returns company name, award amount, agency, topic, abstract, year, and phase.

Parameters (JSON Schema)
  year (optional): Filter by award year (e.g., 2024)
  limit (optional): Number of results to return (default 20, max 100)
  state (optional): Filter by 2-letter US state code (e.g., "CA", "MA")
  agency (optional): Filter by funding agency (e.g., "DOD", "HHS", "NASA", "NSF", "DOE", "USDA")
  company (optional): Filter by company name
  keyword (required): Search term to match against award titles, abstracts, and topics
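With one required field and five optional filters, a caller can build the arguments by dropping unset filters. A sketch (the helper name and filter values are hypothetical):

```python
def search_awards_args(keyword, **filters):
    # keyword is the only required field; optional filters (agency,
    # year, state, company, limit) are included only when set.
    args = {"keyword": keyword}
    args.update({k: v for k, v in filters.items() if v is not None})
    return args

args = search_awards_args("hypersonics", agency="DOD", year=2024, state=None)
```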
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries full burden. It discloses return fields but does not mention pagination, rate limits, or data freshness. The description is accurate but lacks behavioral details beyond input-output.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence listing key filters and return fields. It is concise and front-loaded, but could be slightly more structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 6 parameters, no output schema, and empty annotations, the description provides a solid overview but lacks depth on pagination, sorting, or error behavior. It is adequate for simple use but not fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so each parameter has a description. The tool description adds context by listing return fields and summarizing filters, but does not add meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches SBIR/STTR awards and lists all filter dimensions (keyword, agency, year, company, state). It also specifies return fields, distinguishing it from siblings like sbir_get_award (single award) and sbir_company_awards (company-specific).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for searching awards, but does not explicitly contrast with sibling tools like sbir_company_awards or sbir_search_solicitations. No guidance on when to use this vs. alternatives is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sbir_search_solicitations: B

Find active SBIR/STTR funding opportunities by keyword or agency (e.g., "DOD", "NSF"). Returns topic descriptions, sponsoring agency, and open/close dates.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| limit | No | Number of results to return | 20 |
| agency | No | Filter by agency (e.g., "DOD", "HHS", "NASA", "NSF", "DOE", "USDA") | |
| keyword | Yes | Search term to match against solicitation topics and descriptions | |
| open_only | No | Only return currently open solicitations | true |
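
To make the schema concrete, here is what a call might look like as a standard MCP `tools/call` request; the argument values are invented for illustration:

```python
import json

# Hypothetical MCP tools/call payload for sbir_search_solicitations.
# Argument names follow the schema above; the values are made up.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "sbir_search_solicitations",
        "arguments": {
            "keyword": "autonomy",  # required
            "agency": "DOD",        # optional filter
            "open_only": True,      # schema default: true
            "limit": 10,            # schema default: 20
        },
    },
}
print(json.dumps(payload, indent=2))
```

Since only `keyword` is marked required, a client could omit the other three arguments entirely and rely on the schema defaults.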
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description explains the tool returns topics with description, agency, and dates, and that it searches against topics and descriptions. Since there are no annotations, the description carries the full burden. It does not mention pagination, sorting, or behavior when no results are found, which are relevant for a search tool.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, concise and front-loaded with the tool's purpose. The opening verb could be more precise (e.g., 'Search for SBIR/STTR solicitations') to make the action clearer.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters, no output schema, and no annotations, the description is somewhat sparse. It explains what the tool does but lacks detail on return format, pagination, or empty-result behavior. It is adequate but not comprehensive.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description does not add any parameter-level detail beyond what the schema provides. It lists fields returned but does not elaborate on parameters like open_only or agency values.
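
One way richer parameter documentation could pay off is client-side validation of the `agency` value; a sketch, assuming the schema's example codes are the accepted set (the real API may accept more):

```python
# Allow-list inferred from the schema's examples; not confirmed exhaustive.
KNOWN_AGENCIES = {"DOD", "HHS", "NASA", "NSF", "DOE", "USDA"}

def normalize_agency(value: str) -> str:
    """Upper-case the code and reject anything outside the known set."""
    code = value.strip().upper()
    if code not in KNOWN_AGENCIES:
        raise ValueError(f"unrecognized agency code: {value!r}")
    return code

print(normalize_agency("nsf"))  # NSF
```

If the description enumerated the full set of valid codes, an agent could fail fast like this instead of discovering bad input from an empty result.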

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches SBIR/STTR solicitations and lists the key fields returned (description, agency, dates). It distinguishes the tool from siblings like sbir_search_awards by specifying the resource type (solicitations vs awards). However, it could be more specific about the verb (e.g., 'search' vs 'list').

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., sbir_search_awards for awards, sbir_agency_stats for statistics). It does not explain the relationship between solicitations and awards, which would help an agent choose correctly.
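
As an illustration of the contrastive guidance the review asks for, a possible rewrite (a suggestion, not the server's actual text):

```python
# Hypothetical improved description pointing agents at sibling tools.
IMPROVED_DESCRIPTION = (
    "Search open SBIR/STTR funding opportunities (topics still accepting "
    "applications). Use this for upcoming or active solicitations; use "
    "sbir_search_awards for grants already made, and sbir_agency_stats "
    "for aggregate funding statistics."
)
print(IMPROVED_DESCRIPTION)
```

Naming the sibling tools directly in the description gives an agent the "use X instead of Y when Z" signal this dimension looks for.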
