advice

Server Details

Advice MCP — wraps Advice Slip API (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-advice
GitHub Stars: 0
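
For context, the upstream API is callable directly without credentials. A minimal sketch, assuming the publicly documented endpoints at api.adviceslip.com (the path and response shape come from that documentation, not from this server's code):

```typescript
// Minimal sketch: fetch a random slip from the upstream Advice Slip API.
// The response shape { slip: { id, advice } } is an assumption based on
// the public adviceslip.com documentation.
async function fetchRandomAdvice(): Promise<string> {
  const res = await fetch("https://api.adviceslip.com/advice");
  if (!res.ok) throw new Error(`Advice Slip API returned ${res.status}`);
  const body = (await res.json()) as { slip: { id: number; advice: string } };
  return body.slip.advice;
}

fetchRandomAdvice().then(console.log).catch(console.error);
```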

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.1/5 across 3 of 3 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: get_advice retrieves a specific slip by ID, random_advice fetches a random slip, and search_advice finds slips by keyword. There is no overlap in functionality, making it easy for an agent to select the correct tool for any query.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (get_advice, random_advice, search_advice) with clear verbs and the same noun. This uniformity makes the tool set predictable and easy to understand.

Tool Count: 5/5

With 3 tools, this server is well-scoped for its purpose of accessing advice slips. Each tool serves a distinct and necessary function (retrieve by ID, random retrieval, and search), and there are no extraneous tools, making the count perfectly appropriate.

Completeness: 5/5

The tool set provides complete coverage for the Advice Slip API domain, offering all core operations: retrieving specific slips, getting random slips, and searching. There are no gaps, as these tools cover the typical use cases without dead ends.

Available Tools

3 tools
get_advice: B

Get a specific advice slip by its numeric ID.

Parameters (JSON Schema)
id (required): The numeric ID of the advice slip to retrieve.
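
To make the call shape concrete, here is a hypothetical JSON-RPC tools/call payload an MCP client might send for this tool. The tool name and the id argument come from the listing above; the framing is the standard MCP tools/call shape, and the example ID value is invented:

```typescript
// Hypothetical tools/call payload for get_advice. The JSON-RPC framing
// follows the MCP spec; the slip ID 42 is an invented example value.
const getAdviceRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_advice",
    arguments: { id: 42 }, // numeric slip ID, per the parameter table above
  },
};

console.log(JSON.stringify(getAdviceRequest, null, 2));
```
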
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. Its verb ('Get') implies a read-only operation, but the description doesn't disclose behavioral traits such as error handling (e.g., what happens if the ID doesn't exist), rate limits, authentication needs, or response format. This leaves significant gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It is appropriately sized and front-loaded, with zero waste, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and a simple parameter, the description is incomplete. It lacks details on behavioral aspects like error cases or response structure, which are crucial for effective tool use. While concise, it doesn't provide enough context for a tool that ultimately performs a remote API lookup.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the parameter 'id' fully documented in the schema as 'The numeric ID of the advice slip to retrieve.' The description adds no additional meaning beyond this, such as ID range or format details, so it meets the baseline of 3 where the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and resource ('a specific advice slip by its numeric ID'), making the purpose understandable. However, it doesn't explicitly differentiate itself from sibling tools like 'random_advice' or 'search_advice' beyond implying retrieval by ID rather than by random selection or keyword search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Usage is implied by the description's focus on retrieving by ID, suggesting it's for when you know the specific slip ID. However, there's no explicit guidance on when to use this versus alternatives like 'random_advice' or 'search_advice', nor any mention of prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
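
The behavior and usage-guidance gaps above suggest a concrete fix. Here is a hedged sketch of a fuller description, using the MCP TypeScript SDK's tool-registration shape; the wording and the error-handling claim are illustrative assumptions, not taken from this server's code:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "advice", version: "0.0.0" });

// Illustrative only: a description that discloses read-only behavior,
// auth, error handling, and sibling-tool guidance. The error-handling
// sentence is an assumption about the upstream API, not verified here.
server.tool(
  "get_advice",
  "Get a specific advice slip by its numeric ID. Read-only; no " +
    "authentication required. Returns an error message if no slip has " +
    "that ID. Use random_advice when no ID is known, or search_advice " +
    "to find slips by keyword.",
  { id: z.number().int().describe("The numeric ID of the advice slip to retrieve.") },
  async ({ id }) => {
    const res = await fetch(`https://api.adviceslip.com/advice/${id}`);
    return { content: [{ type: "text", text: await res.text() }] };
  },
);
```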

random_advice: B

Get a random piece of advice from the Advice Slip API.

Parameters (JSON Schema)
No parameters.
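
By contrast, a hypothetical client-side payload for this tool needs no arguments; the framing matches the get_advice example above:

```typescript
// Hypothetical tools/call payload for random_advice: no arguments.
const randomAdviceRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: { name: "random_advice", arguments: {} },
};
```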

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the API source ('Advice Slip API') but doesn't describe traits like rate limits, error handling, or response format. This is inadequate for a tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without any wasted words. It is appropriately sized and front-loaded, making it easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what the return value looks like (e.g., format, structure) or provide behavioral context, which is essential for a tool with no structured data support.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has no parameters, so there is nothing to document. The description appropriately doesn't discuss parameters, earning the baseline score of 4 for not adding unnecessary information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and resource ('random piece of advice from the Advice Slip API'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_advice' or 'search_advice', which would be needed for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_advice' or 'search_advice'. It lacks any context about use cases, prerequisites, or exclusions, leaving the agent with minimal direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_advice: B

Search for advice slips containing a specific keyword or phrase.

Parameters (JSON Schema)
query (required): Keyword or phrase to search for within advice text.
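
For reference, a sketch of the upstream search call this tool presumably wraps. The path and response shape are assumptions based on the public Advice Slip API documentation (which notably documents total_results as a string):

```typescript
// Sketch of the upstream search endpoint search_advice presumably wraps.
// Path and response shape are assumptions from adviceslip.com docs; a
// query with no matches may return a different shape, hence the guards.
type SearchResponse = {
  total_results?: string; // documented as a string in the upstream docs
  query?: string;
  slips?: Array<{ id: number; advice: string }>;
};

async function searchAdvice(query: string): Promise<SearchResponse> {
  const res = await fetch(
    `https://api.adviceslip.com/advice/search/${encodeURIComponent(query)}`,
  );
  return (await res.json()) as SearchResponse;
}

searchAdvice("time").then((r) => console.log(r.slips?.length ?? 0, "results"));
```
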
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes the search functionality but lacks details on permissions, rate limits, pagination, response format, or error handling. For a search tool with zero annotation coverage, this leaves significant gaps in understanding how it behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It is front-loaded with the core action and resource, making it easy to parse. Every part of the sentence contributes to understanding the purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose but lacks details on behavioral traits, usage context, and output, which are important for a search operation. Without annotations or output schema, more completeness would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the parameter 'query' fully documented in the schema. The description adds no additional meaning beyond what the schema provides, such as search syntax or examples. With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Search') and resource ('advice slips'), and specifies the search scope ('containing a specific keyword or phrase'). Its scope implicitly distinguishes it from siblings like 'get_advice' (retrieval by ID) and 'random_advice' (random selection), but it doesn't name them explicitly. This makes the purpose clear but not fully differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_advice' or 'random_advice'. It doesn't mention any prerequisites, exclusions, or contextual factors that would help an agent decide between these tools. Usage is implied only by the action described.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
