Samgov
Server Details
SAM.gov MCP — Federal contract opportunities and entity registration data
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-samgov
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 9 of 9 tools scored. Lowest: 2.9/5.
Each tool has a clearly distinct purpose: memory tools (remember/recall/forget) are separate from SAM.gov entity and opportunity search tools, and the pipeworx meta-tools (ask_pipeworx, discover_tools) serve unique roles. There is no overlap or confusion between them.
Tools follow a mostly consistent verb_noun pattern (e.g., sam_entity_search, sam_search_opportunities, sam_get_opportunity). However, ask_pipeworx and discover_tools break the pattern slightly, and sam_set_aside_opportunities uses 'set_aside' as an adjective rather than a verb. Overall, the naming is clear and predictable.
With 9 tools, the count is well-scoped for the server's purpose: 3 memory tools, 4 SAM.gov tools, and 2 pipeworx meta-tools. Each tool earns its place, covering distinct functionalities without bloat.
The SAM.gov tools cover entity search, opportunity search (with set-aside filter), and full opportunity details, which forms a solid core. However, missing features like entity detail retrieval (beyond search) or opportunity updates are minor gaps. The memory tools are complete for simple key-value storage, and pipeworx meta-tools enable discovery and natural language querying.
Available Tools
9 tools

ask_pipeworx (A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
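
As a rough illustration of how an agent might invoke this tool, here is a minimal sketch using the official MCP Python SDK over Streamable HTTP. The endpoint URL is a placeholder (the listing above does not show the real one), and the question is simply one of the examples from the description.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint -- substitute the URL shown on the connector listing.
SERVER_URL = "https://example.com/mcp"

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # ask_pipeworx takes a single required argument: a plain-English question.
            result = await session.call_tool(
                "ask_pipeworx",
                arguments={"question": "What is the US trade deficit with China?"},
            )
            for block in result.content:
                print(block)

asyncio.run(main())
```

The later sketches below reuse a `session` established this way and only vary the tool name and arguments.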
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that the tool automatically selects the best data source and fills arguments, which is important behavioral information beyond the input schema. No annotations are provided, so the description carries the full burden, and it does so well, though it could mention potential limitations or error cases.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded with the core purpose, followed by examples. It avoids unnecessary detail; the examples are helpful, if slightly verbose. Every sentence serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input (single string) and no output schema, the description adequately explains usage and behavior. It provides examples and sets expectations. Could be slightly more complete by mentioning that results may vary based on data source availability.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a clear parameter description. The description adds context by explaining the parameter's purpose in natural language, but since schema coverage is high, the additional value is moderate. No contradictions or omissions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool accepts a plain English question and returns an answer from the best data source. It distinguishes itself from other tools by abstracting away the need to browse or select specific tools, emphasizing a single natural language interface.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description advises users to simply describe what they need and provides concrete examples, effectively guiding usage. However, it does not explicitly mention when not to use this tool or alternative tools, which would be helpful given the presence of specialized siblings like sam_entity_search.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
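
For illustration, a discover_tools call might look like the sketch below, reusing a `session` set up as in the ask_pipeworx example above; the query text and limit are placeholder values, not documented requirements.

```python
# Assumes `session` is an initialized ClientSession (see the ask_pipeworx sketch).
result = await session.call_tool(
    "discover_tools",
    arguments={
        "query": "find federal contract opportunities set aside for small businesses",
        "limit": 10,  # optional; the schema says default 20, max 50
    },
)
```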
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description mentions that the tool 'Returns the most relevant tools with names and descriptions', which is a useful behavioral trait. Since no annotations are provided, the description carries the full burden of transparency. It could be improved by noting that the search is based on natural language and the number of results can be limited via the 'limit' parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences long, with the first sentence clearly stating the action, the second explaining the output, and the third providing usage guidance. Every sentence is valuable and there is no wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has a simple purpose (search a catalog) and the input schema is fully documented, the description covers everything an agent needs: what it does, how to use it, and when to use it. The lack of output schema is acceptable because the description states what is returned (tool names and descriptions).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage, meaning both parameters (query and limit) have descriptions. The description does not add new parameter meaning beyond what the schema provides, so baseline 3 is appropriate. The description's examples of queries (e.g., 'analyze housing market trends') reinforce the schema's examples but don't add extra semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: searching a tool catalog by describing a need, and returning relevant tool names and descriptions. It also includes a specific use case ('Call this FIRST') which distinguishes it from sibling tools like 'ask_pipeworx' or 'sam_search_opportunities'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task', providing clear guidance on when to use the tool. It implies that this tool is for discovery, not for executing tasks, which differentiates it from sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
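
A minimal sketch of a forget call, assuming a `session` established as in the ask_pipeworx example; the key is purely illustrative, and (per the review below) the deletion should be treated as permanent.

```python
# Assumes `session` is an initialized ClientSession; the key is an example.
# Treat the deletion as permanent -- nothing in the schema suggests an undo.
await session.call_tool("forget", arguments={"key": "subject_property"})
```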
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must disclose behavioral traits. It states the action is deletion but does not mention irreversibility, confirmation, or side effects. For a destructive operation, more transparency is needed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded verb, no wasted words. Efficiently conveys purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple deletion tool with no output schema and no annotations, description is minimal. Lacks behavioral warnings (irreversible, permission requirements). Could mention that memory is permanently removed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (1 param with description). Description adds no additional meaning beyond schema; baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it deletes a stored memory by key, specifying the verb (delete), resource (stored memory), and parameter (key). Distinguishes from sibling tools like 'remember' (create) and 'recall' (retrieve).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. Implies deletion when memory should be removed, but doesn't specify conditions or warnings (e.g., irreversible action).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
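
A short sketch of both recall modes, again assuming a `session` from the earlier setup; the key is illustrative.

```python
# Assumes `session` is an initialized ClientSession.
# With a key: fetch that memory. Without a key: list all stored keys.
one = await session.call_tool("recall", arguments={"key": "subject_property"})
all_keys = await session.call_tool("recall", arguments={})
```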
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries the full burden. It discloses that omitting key lists all memories and that memories persist across sessions. It does not mention performance, but for a simple key-value retrieval, this is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, no wasted words. The purpose and usage are front-loaded, making it efficient for an agent to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (1 optional param, no output schema), the description is complete. It explains both modes and the persistence context. The only minor gap is that it doesn't describe the format of the returned memory, but for a simple retrieval tool, this is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by explaining that omitting the key lists all memories, which is not obvious from the schema alone. This clarifies the behavior of the optional parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Retrieve' and the resource 'stored memory', with two distinct modes: retrieval by key or listing all memories. It distinguishes itself from sibling tools like 'remember' (store) and 'forget' (delete) by focusing on retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use the tool ('to retrieve context you saved earlier') and implicitly differentiates from 'remember' and 'forget' by being the retrieval counterpart. It could explicitly mention not to use it for storing or deleting, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
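
A sketch of storing a memory, assuming a `session` from the earlier setup; the key mirrors one of the schema's examples and the value is invented for illustration.

```python
# Assumes `session` is an initialized ClientSession; key and value are examples.
await session.call_tool(
    "remember",
    arguments={
        "key": "target_ticker",
        "value": "User cares about solicitations relevant to AAPL suppliers",
    },
)
```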
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses persistence behavior: authenticated users get persistent memory, anonymous sessions last 24 hours. However, it doesn't mention any limits (e.g., max keys, size), side effects, or if overwriting is allowed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences covering purpose, use case, and persistence details. No wasted words, front-loaded with core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple key-value store with full schema coverage and no output schema, the description is sufficiently complete. It explains the use case and persistence behavior, though it omits constraints such as key format or size limits.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by explaining the purpose of saving findings and preferences, and the key examples in the schema are descriptive. The description reinforces usage context, justifying a 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool stores a key-value pair in session memory, specifying verb 'store', resource 'key-value pair', and context 'session memory'. It distinguishes from siblings like 'forget' and 'recall' by its purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use this tool: to save intermediate findings, user preferences, or context across tool calls. It does not explicitly exclude scenarios or mention alternatives, but the purpose is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sam_entity_search (A)
Search registered federal contractors by business name or UEI. Returns UEI, CAGE code, address, NAICS codes, small business status, and certifications.
| Name | Required | Description | Default |
|---|---|---|---|
| naics | No | Filter by primary NAICS code (optional) | |
| state | No | Filter by 2-letter US state code (e.g., "VA", "CA") | |
| _apiKey | Yes | SAM.gov API key | |
| business_name | Yes | Legal business name to search for | |
| small_business | No | Filter to only small business entities (optional) | |
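
A sketch of an entity search, assuming a `session` from the earlier setup. The API key and business name are placeholders, and the boolean type for small_business is an assumption (the schema only marks it optional).

```python
# Assumes `session` is an initialized ClientSession; values are placeholders.
result = await session.call_tool(
    "sam_entity_search",
    arguments={
        "_apiKey": "YOUR_SAM_GOV_API_KEY",
        "business_name": "Acme Federal Services",
        "state": "VA",            # optional 2-letter state filter
        "small_business": True,   # assumed boolean; the schema does not state the type
    },
)
```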
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must carry full burden. It discloses the search scope (registered entities), data returned, and optional filters. Lacks details on API key handling or rate limits, but is sufficient for safe usage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently conveys purpose, scope, and output. No redundancy or unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description lists returned fields. The tool has 5 parameters but schema covers all; description adds context for optional filters. Some behavioral details (pagination, result limits) missing but acceptable for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for each parameter. The description adds context by listing return fields and optional filters, which enhances meaning beyond the schema. No additional param details are necessary.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (search), the target (registered entities in SAM.gov), and lists specific data returned (UEI, CAGE code, etc.). It distinguishes from sibling tools by focusing on entity search rather than opportunities or memory operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when needing vendor/contractor data, and sibling tool names suggest alternatives for opportunities. However, no explicit when-not-to-use or alternative comparison is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sam_get_opportunity (A)
Get full details for a federal contract opportunity by solicitation number. Returns description, contact info, deadlines, attachments, NAICS codes, and set-aside status.
| Name | Required | Description | Default |
|---|---|---|---|
| _apiKey | Yes | SAM.gov API key | |
| solicitation_number | Yes | The solicitation number to look up (e.g., "W912DY-24-R-0001") | |
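
A sketch of fetching one opportunity, assuming a `session` from the earlier setup; the API key is a placeholder and the solicitation number is the schema's own example.

```python
# Assumes `session` is an initialized ClientSession; the key is a placeholder.
result = await session.call_tool(
    "sam_get_opportunity",
    arguments={
        "_apiKey": "YOUR_SAM_GOV_API_KEY",
        "solicitation_number": "W912DY-24-R-0001",  # example value from the schema
    },
)
```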
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It implies a read-only operation (getting details) without stating side effects, permissions, or error handling. The description adds context on returned data but does not disclose behaviors like potential API limits or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that front-loads the key purpose and lists the returned data. Every word is necessary, and it avoids redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input (2 parameters, no nested objects, no output schema), the description is largely sufficient. It specifies the returned data fields, which is helpful. However, it could mention that the output may be empty if the solicitation number is invalid, but overall completeness is high.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already describes both parameters with 100% coverage. The description does not add additional meaning beyond the schema, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves full details for a federal contract opportunity using a solicitation number, listing specific data fields (description, contact info, deadlines, attachments, NAICS codes, set-aside status). The verb 'Get' and resource 'opportunity' are specific, and the tool's purpose is distinct from sibling tools like sam_search_opportunities (search) and sam_set_aside_opportunities (set-aside filtering).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use it (to get full details by solicitation number) but does not mention when not to use it or compare to alternatives. Sibling tools like sam_search_opportunities suggest a different use case, but the description lacks explicit exclusion guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sam_search_opportunities (A)
Search active federal contract opportunities by keyword, NAICS code (e.g., "541512"), set-aside type, posting date range, and procurement type. Returns titles, solicitation numbers, deadlines, and agencies.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (1-100, default 10) | |
| naics | No | NAICS code to filter by (e.g., "541512" for computer systems design) | |
| ptype | No | Procurement type filter: p (presolicitation), o (solicitation), k (combined synopsis/solicitation), a (award notice) | |
| offset | No | Result offset for pagination (default 0) | |
| _apiKey | Yes | SAM.gov API key | |
| keyword | Yes | Search term for opportunity title or description | |
| posted_to | No | End of posting date range in MM/dd/yyyy format | |
| set_aside | No | Small business set-aside type: SBA (Small Business), SDVOSB (Service-Disabled Veteran), HUBZone, 8AN (8(a)), WOSB (Women-Owned), EDWOSB (Economically Disadvantaged Women-Owned) | |
| posted_from | No | Start of posting date range in MM/dd/yyyy format | |
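
A sketch of a filtered opportunity search, assuming a `session` from the earlier setup; every argument value here is illustrative and the API key is a placeholder.

```python
# Assumes `session` is an initialized ClientSession; values are illustrative.
result = await session.call_tool(
    "sam_search_opportunities",
    arguments={
        "_apiKey": "YOUR_SAM_GOV_API_KEY",
        "keyword": "cybersecurity",
        "naics": "541512",            # computer systems design
        "set_aside": "SDVOSB",        # service-disabled veteran-owned set-aside
        "ptype": "o",                 # solicitations only
        "posted_from": "01/01/2025",  # MM/dd/yyyy per the schema
        "posted_to": "03/31/2025",
        "limit": 25,
        "offset": 0,
    },
)
```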
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It clearly describes the tool as a search operation (non-destructive) and enumerates filter capabilities, which is sufficient for behavioral transparency. It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the main purpose and lists filters concisely. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (9 parameters, no output schema), the description adequately covers the search functionality and filters. It does not explain return values, but output schema is absent so that is a minor gap. It is sufficient for an agent to understand when to use it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description lists filters (keyword, NAICS code, etc.) but does not add meaning beyond what the schema already provides for each parameter. It does not explain return format or pagination details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches active federal contract opportunities on SAM.gov and lists specific filters (keyword, NAICS code, set-aside type, posting date range, procurement type). It distinguishes itself from sibling tools like sam_entity_search (entity search) and sam_get_opportunity (single opportunity retrieval).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching opportunities but does not explicitly state when to use this tool vs alternatives like sam_get_opportunity or sam_set_aside_opportunities. There is no guidance on when not to use it or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sam_set_aside_opportunities (A)
Find federal contracts reserved for small businesses (women-owned, HUBZone, service-disabled veteran-owned, etc.). Returns titles, deadlines, and agencies.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (1-100, default 10) | |
| naics | No | Optional NAICS code filter | |
| _apiKey | Yes | SAM.gov API key | |
| keyword | No | Optional keyword to narrow results | |
| set_aside | Yes | Set-aside type (required): SBA (Small Business), SDVOSB (Service-Disabled Veteran), HUBZone, 8AN (8(a)), WOSB (Women-Owned), EDWOSB (Economically Disadvantaged Women-Owned) | |
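
A sketch of a set-aside search, assuming a `session` from the earlier setup; the API key is a placeholder and the filter values are examples drawn from the schema's documented codes.

```python
# Assumes `session` is an initialized ClientSession; values are examples.
result = await session.call_tool(
    "sam_set_aside_opportunities",
    arguments={
        "_apiKey": "YOUR_SAM_GOV_API_KEY",
        "set_aside": "WOSB",    # required; one of the codes listed in the schema
        "naics": "541511",      # optional NAICS filter
        "keyword": "software",  # optional keyword to narrow results
        "limit": 10,
    },
)
```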
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It correctly states it's a search/filter operation, implying read-only behavior. It doesn't mention any side effects or access requirements beyond the API key. No contradictions are present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loads the key action and resource, and each sentence adds value. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (5 parameters, no output schema), the description is adequate but could mention what the results contain (e.g., list of opportunity IDs) or how pagination works. The set_aside parameter's values are documented in the schema, which is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the description adds no additional parameter information beyond what the schema already provides. The description's mention of 'small business set-aside type' aligns with the set_aside parameter but doesn't add new semantics. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool searches federal contract opportunities filtered by small business set-aside type. It specifies the purpose (finding reserved opportunities) and the resource (federal contract opportunities), which clearly distinguishes it from sibling tools like sam_search_opportunities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes the tool is useful for finding reserved opportunities, implying a use case. However, it does not provide explicit guidance on when not to use it or how it differs from sam_search_opportunities, which also searches opportunities. No alternatives are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.