Server Details

Advice MCP — wraps Advice Slip API (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-advice
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 3.7/5 across 8 of 8 tools scored. Lowest: 3.1/5.

Server Coherence (Grade: B)
Disambiguation: 3/5

The tools have some distinct purposes, but there is notable overlap and ambiguity. For example, 'ask_pipeworx' and 'discover_tools' both involve finding tools or information, which could confuse an agent about when to use each. However, other tools like 'random_advice' and 'search_advice' are clearly differentiated for advice retrieval.

Naming Consistency: 2/5

Naming conventions are inconsistent across the tool set. Some tools use verb_noun patterns like 'get_advice' and 'search_advice', while others use single verbs like 'forget' and 'recall', and there are mixed styles like 'ask_pipeworx' (verb_proper_noun). This lack of a predictable pattern reduces coherence.

Tool Count: 4/5

With 8 tools, the count is reasonable and well-scoped for the server's purpose, which appears to combine advice retrieval and memory management. It's not excessive or too sparse, though it could be slightly optimized for better focus.

Completeness: 3/5

The tool surface has notable gaps. For advice retrieval, it covers random and search operations but lacks explicit create, update, or delete tools for advice slips. The memory tools (remember, recall, forget) provide basic CRUD, but the overall domain is split between advice and memory, making coverage incomplete for a unified purpose.

Available Tools

8 tools
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
- question (required): Your question or request in natural language
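For reference, a minimal end-to-end sketch of calling this tool, assuming the official MCP Python SDK (`pip install mcp`) and its Streamable HTTP client; the endpoint URL is a placeholder:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    # Placeholder URL: substitute the server's actual Streamable HTTP endpoint.
    async with streamablehttp_client("https://example.com/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()  # MCP handshake before any tool call
            result = await session.call_tool(
                "ask_pipeworx",
                {"question": "What is the US trade deficit with China?"},
            )
            for block in result.content:  # content blocks, typically text
                print(block)

asyncio.run(main())
```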
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by explaining key behavioral traits: it describes the automated tool selection process ('Pipeworx picks the right tool, fills the arguments'), mentions the natural language interface, and provides concrete examples of what types of questions work. It doesn't mention rate limits, authentication needs, or error handling, but covers the core operational behavior adequately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly front-loaded with the core functionality in the first sentence, followed by explanatory context and concrete examples. Every sentence earns its place by either explaining the tool's value proposition or providing actionable usage guidance. No wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no annotations and no output schema, the description provides excellent context about how the tool works, when to use it, and what to expect. The examples effectively illustrate the range of possible queries. It doesn't describe the format of returned answers, but given the tool's purpose as a natural language interface, this is reasonable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with a single parameter, so baseline would be 3. The description adds significant value by explaining the parameter's purpose in context ('Ask a question in plain English'), providing multiple examples of valid inputs, and emphasizing the natural language approach. This goes well beyond what the schema's basic description provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Ask a question', 'get an answer') and resource ('from the best available data source'). It distinguishes from siblings by emphasizing natural language processing and automation of tool selection/argument filling, unlike other tools that appear to involve advice/search/recall functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool ('No need to browse tools or learn schemas — just describe what you need') and provides clear examples of appropriate use cases. The description effectively positions this as the primary interface for natural language queries versus more structured sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
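A hedged call sketch reusing the `session` from the ask_pipeworx example above; the query text is one of the schema's own examples:

```python
# Narrow a large catalog to a few candidate tools before planning a task.
result = await session.call_tool(
    "discover_tools",
    {"query": "find trade data between countries", "limit": 5},
)
```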
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the search functionality and return format (tools with names and descriptions), but lacks details about authentication requirements, rate limits, error handling, or performance characteristics. The description adds some context about when to use it, but doesn't fully compensate for the absence of annotations regarding operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and well-structured in two sentences. The first sentence explains the core functionality, and the second provides crucial usage guidance. Every word earns its place with no redundancy or unnecessary elaboration, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search functionality with 2 parameters) and the absence of both annotations and output schema, the description does a good job of explaining what the tool does and when to use it. However, it lacks details about the return format structure, error conditions, or authentication requirements that would be helpful for a search tool. The description is complete enough for basic understanding but has some gaps in operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't explain query formatting best practices or limit implications). With high schema coverage, the baseline score of 3 is appropriate as the description doesn't enhance parameter understanding beyond the structured schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('search', 'returns') and resources ('Pipeworx tool catalog', 'most relevant tools with names and descriptions'). It distinguishes from siblings by focusing on tool discovery rather than advice-related functions, making the purpose unambiguous and well-defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Call this FIRST when you have 500+ tools available and need to find the right ones for your task'), including a specific threshold (500+ tools) and context (finding tools for a task). It implicitly distinguishes from sibling tools by focusing on tool discovery rather than advice retrieval, though it doesn't explicitly name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (Grade: B)

Delete a stored memory by key.

Parameters (JSON Schema)
- key (required): Memory key to delete
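Because this call is destructive, a hedged sketch of the invocation is worth showing (reusing the `session` from the first example; the key is hypothetical):

```python
# Destructive: permanently removes the stored value for this key.
await session.call_tool("forget", {"key": "subject_property"})
```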
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool deletes a memory, implying a destructive mutation, but lacks details on permissions, reversibility, error handling, or side effects. This is a significant gap for a mutation tool without annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words. It's front-loaded with the core action ('Delete') and resource ('stored memory'), making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive mutation tool with no annotations and no output schema, the description is incomplete. It doesn't cover behavioral aspects like confirmation needs, error cases, or return values, leaving the agent under-informed about critical operational details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'key' documented as 'Memory key to delete'. The description adds minimal value beyond this, but with the lone parameter fully covered and nothing left ambiguous for such a simple tool, a 4 is warranted.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Delete') and target resource ('a stored memory by key'), distinguishing it from sibling tools like 'recall' (likely retrieve) and 'remember' (likely store). It's not a tautology of the name 'forget' and provides concrete operational meaning.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing memory key), exclusions, or comparisons to siblings like 'recall' (retrieve) or 'remember' (store), leaving the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_advice (Grade: B)

Get a specific advice slip by its numeric ID (e.g., "42"). Returns the full advice text.

Parameters (JSON Schema)
- id (required): The numeric ID of the advice slip to retrieve.
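The server name says it wraps the free Advice Slip API, so this lookup likely maps to that API's by-ID endpoint. A sketch assuming the upstream URL layout and response shape (not this server's own interface):

```python
import httpx

# Assumed upstream endpoint; the MCP tool presumably proxies this lookup.
resp = httpx.get("https://api.adviceslip.com/advice/42")
slip = resp.json()["slip"]  # e.g. {"id": 42, "advice": "..."}
print(slip["advice"])
```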
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool gets a specific advice slip, implying a read-only operation, but doesn't disclose behavioral traits such as error handling (e.g., what happens if the ID doesn't exist), rate limits, authentication needs, or response format. This leaves significant gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short, efficient sentences that directly state the tool's function and return value without unnecessary words. It is appropriately sized and front-loaded, with zero waste, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and a simple parameter, the description is incomplete. It lacks details on behavioral aspects like error cases or response structure, which are crucial for effective tool use. While concise, it doesn't provide enough context for a tool that might involve retrieval from a database or API.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the parameter 'id' fully documented in the schema as 'The numeric ID of the advice slip to retrieve.' The description adds no additional meaning beyond this, such as ID range or format details, so it meets the baseline of 3 where the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and resource ('a specific advice slip by its numeric ID'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'random_advice' or 'search_advice' beyond implying it retrieves by ID rather than random or search criteria.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Usage is implied by the description's focus on retrieving by ID, suggesting it's for when you know the specific slip ID. However, there's no explicit guidance on when to use this versus alternatives like 'random_advice' or 'search_advice', nor any mention of prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

random_advice (Grade: B)

Get a random piece of advice. Returns the advice text and slip ID for reference or follow-up queries.

Parameters (JSON Schema)

No parameters.
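A hedged call sketch reusing the `session` from the ask_pipeworx example; the tool takes no arguments, so the arguments object is empty:

```python
# No arguments: the server returns a random slip.
result = await session.call_tool("random_advice", {})
# The slip ID in the response can feed a follow-up get_advice call.
```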

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It conveys a read-only retrieval and notes the returned slip ID, but doesn't describe traits like rate limits, error handling, or response format. This is inadequate for a tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short, efficient sentences that directly state the tool's function and return value without any wasted words. It is appropriately sized and front-loaded, making it easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what the return value looks like (e.g., format, structure) or provide behavioral context, which is essential for a tool with no structured data support.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has no parameters, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, earning a baseline score of 4 for not adding unnecessary information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and resource ('a random piece of advice'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_advice' or 'search_advice', which would be needed for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_advice' or 'search_advice'. It lacks any context about use cases, prerequisites, or exclusions, leaving the agent with minimal direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
- key (optional): Memory key to retrieve (omit to list all keys)
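Both modes in one hedged sketch, again reusing the `session` from the first example; the key is hypothetical:

```python
# Fetch one memory by key, or omit the key to list everything stored.
one = await session.call_tool("recall", {"key": "target_ticker"})
everything = await session.call_tool("recall", {})
```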
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that memories can be retrieved from current or previous sessions, implying persistence across sessions. However, it doesn't mention rate limits, authentication needs, error conditions, or what happens if a key doesn't exist. The behavioral context is basic but adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences with zero waste. The first sentence states the purpose and parameter behavior, the second provides usage context. Every word earns its place, and it's front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple retrieval tool with 1 optional parameter and 100% schema coverage, the description is reasonably complete. However, with no output schema and no annotations, it doesn't explain return format (e.g., what a 'memory' contains) or error behavior. Given the low complexity, this is adequate but has clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the 'key' parameter fully, including that omitting it lists all keys. The description adds marginal value by restating that behavior in prose. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Retrieve a previously stored memory by key, or list all stored memories (omit key).' It specifies the verb ('retrieve'/'list') and resource ('memory'), but doesn't explicitly differentiate from sibling tools like 'remember' or 'forget' beyond mentioning retrieval vs. saving.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use it: 'Use this to retrieve context you saved earlier in the session or in previous sessions.' It explains the key parameter behavior (omit to list all), but doesn't explicitly contrast with the sibling memory tools 'remember' (store) or 'forget' (delete).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
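A hedged sketch reusing the earlier `session`; the key and value are illustrative:

```python
# Stash an intermediate finding; per the description, this persists for
# authenticated users and lasts 24 hours for anonymous sessions.
await session.call_tool("remember", {"key": "target_ticker", "value": "AAPL"})
```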
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context beyond the basic 'store' action: it specifies that authenticated users get persistent memory while anonymous sessions last only 24 hours, which is crucial for understanding data retention. It does not mention rate limits, error conditions, or response format, leaving some gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: the first states the purpose, the second gives usage guidance, and the third adds critical behavioral context (persistence differences). Every sentence earns its place with no wasted words, making it front-loaded and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (storage with persistence rules), no annotations, and no output schema, the description is mostly complete. It covers purpose, usage, and key behavioral traits (authentication impact on persistence). However, it lacks details on error handling, return values, or limitations (e.g., size constraints), which could be important for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('key' and 'value') with examples. The description does not add any parameter-specific details beyond what the schema provides, such as constraints or formatting rules. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Store a key-value pair') and resource ('in your session memory'), distinguishing it from sibling tools like 'recall' (likely for retrieval) and 'forget' (likely for deletion). It provides concrete examples of what can be stored ('intermediate findings, user preferences, or context across tool calls'), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), providing clear context. However, it does not mention when not to use it or name specific alternatives (e.g., 'recall' for retrieval), which prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_advice (Grade: B)

Search for advice by keyword or phrase (e.g., "confidence", "relationships"). Returns matching advice slips with text and IDs.

Parameters (JSON Schema)
- query (required): Keyword or phrase to search for within advice text.
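Since the server wraps the Advice Slip API, the upstream search endpoint is worth knowing. This sketch assumes that API's documented URL layout and response shape, not this server's own interface (a no-match query returns a message object instead of slips):

```python
import httpx

# Assumed upstream endpoint of the free Advice Slip API (no auth).
resp = httpx.get("https://api.adviceslip.com/advice/search/confidence")
data = resp.json()
# On a hit: {"total_results": ..., "query": ..., "slips": [{"id": ..., "advice": ...}, ...]}
for slip in data.get("slips", []):
    print(slip["id"], slip["advice"])
```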
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes the search functionality but lacks details on permissions, rate limits, pagination, response format, or error handling. For a search tool with zero annotation coverage, this leaves significant gaps in understanding how it behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short, efficient sentences that directly state the tool's function and return format without unnecessary words. It is front-loaded with the core action and resource, making it easy to parse. Every part contributes to understanding the purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose but lacks details on behavioral traits, usage context, and output, which are important for a search operation. Without annotations or output schema, more completeness would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the parameter 'query' fully documented in the schema. The description adds no additional meaning beyond what the schema provides, such as search syntax or examples. With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Search') and resource ('advice'), and specifies the search scope ('by keyword or phrase'). It distinguishes from siblings like 'get_advice' (retrieve by ID) and 'random_advice' (random selection), but doesn't explicitly name them. This makes the purpose clear but not fully differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_advice' or 'random_advice'. It doesn't mention any prerequisites, exclusions, or contextual factors that would help an agent decide between these tools. Usage is implied only by the action described.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
