Server Details

Jokes MCP — wraps JokeAPI v2 (free, no auth)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-jokes
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.7/5 across 9 of 9 tools scored. Lowest: 2.9/5.

Server Coherence: B
Disambiguation: 2/5

The tool set has significant ambiguity, with 'ask_pipeworx' and 'discover_tools' overlapping in purpose as general-purpose query/retrieval tools, while the joke-specific tools are distinct but overshadowed. This creates confusion about whether to use the general tools or the specialized joke tools for joke-related queries.

Naming Consistency: 3/5

Naming is mixed with no clear pattern: 'ask_pipeworx' and 'discover_tools' use descriptive phrases, while joke tools follow a 'get_joke_*' or 'search_jokes' pattern, and memory tools use simple verbs like 'remember' and 'recall'. This inconsistency makes the set less predictable and harder to navigate.

Tool Count: 3/5

With 9 tools, the count is borderline for the server's scope. The joke domain is well-covered with 5 tools, but the inclusion of general-purpose and memory tools makes the set feel disjointed and overextended for a server named 'jokes'.

Completeness: 4/5

For the joke domain, the tool set is nearly complete with retrieval, filtering, categorization, and search capabilities. However, there are minor gaps such as the inability to submit or rate jokes, and the general-purpose tools create an ambiguous overlap that could confuse coverage.

Available Tools

9 tools
ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
question (required): Your question or request in natural language
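
For a concrete sense of the call shape, here is a minimal sketch of invoking ask_pipeworx over the server's Streamable HTTP transport with the official TypeScript MCP SDK. The endpoint URL, client name, and question text are placeholders (the listing above does not show the server's URL), so treat this as an illustration rather than documented usage.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the listing does not display the real server URL.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));
const client = new Client({ name: "jokes-demo", version: "1.0.0" });
await client.connect(transport);

// 'question' is the tool's single required parameter.
const result = await client.callTool({
  name: "ask_pipeworx",
  arguments: { question: "What is the US trade deficit with China?" },
});
console.log(result.content);

await client.close();
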
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it explains that Pipeworx 'picks the right tool, fills the arguments, and returns the result', which describes the agent's decision-making process. However, it doesn't mention limitations like rate limits, authentication needs, or potential failure modes, leaving some behavioral aspects unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and concise: a clear purpose statement, an explanation of the automation benefit, and three diverse examples, all in three sentences. Every sentence earns its place by providing distinct value: the what, the how, and concrete illustrations of appropriate use.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no output schema and no annotations, the description provides excellent context about what the tool does and how to use it. The examples effectively demonstrate the expected input format and scope. The only minor gap is lack of information about return values or error conditions, but given the tool's simplicity, this is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with the single parameter 'question' well-documented in the schema. The description adds minimal parameter semantics beyond the schema, only reinforcing that questions should be in 'plain English' or 'natural language' through the examples. This meets the baseline 3 score when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Ask a question', 'get an answer') and resources ('from the best available data source'), distinguishing it from sibling tools like joke-related or memory tools. It explicitly mentions Pipeworx handles tool selection and argument filling, which is unique functionality not implied by the name alone.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'No need to browse tools or learn schemas — just describe what you need' establishes when to use this tool versus alternatives. The three concrete examples ('What is the US trade deficit...', 'Look up adverse events...', 'Get Apple's latest...') further illustrate appropriate use cases, making it clear this is for natural language queries rather than structured tool calls.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
limit (optional): Maximum number of tools to return (default 20, max 50)
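
A similar hedged sketch for discover_tools; the query string comes from the schema's own examples, while the limit value and endpoint URL are arbitrary placeholders. Setup is repeated so the snippet stands alone.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "jokes-demo", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://example.com/mcp")));

// Describe the task in natural language; 'limit' caps how many tools come back (default 20, max 50).
const tools = await client.callTool({
  name: "discover_tools",
  arguments: { query: "find trade data between countries", limit: 5 },
});
console.log(tools.content);
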
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool returns 'the most relevant tools with names and descriptions,' which adds useful behavioral context about the output format. However, it lacks details on performance (e.g., response time), error handling, or authentication needs, leaving some gaps in behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: it starts with the core purpose, then specifies usage guidelines, all in two concise sentences with zero wasted words. Every sentence earns its place by adding critical information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a search/discovery function with 2 parameters, no output schema, and no annotations), the description is mostly complete. It covers purpose, usage context, and output format, but lacks details on behavioral aspects like rate limits or error cases, which could be important for a discovery tool in a large catalog.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters (query and limit) thoroughly. The description does not add any parameter-specific information beyond what the schema provides (e.g., it doesn't explain query semantics further). Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Search the Pipeworx tool catalog') and resource ('tool catalog'), and explicitly distinguishes it from siblings by mentioning '500+ tools available' which contrasts with the joke-related sibling tools listed. It provides a concrete action and target.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This clearly indicates when to use this tool (for discovery in large catalogs) versus alternatives (implicitly, the sibling tools are for specific joke operations, not discovery).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget: C

Delete a stored memory by key.

Parameters (JSON Schema)
key (required): Memory key to delete
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states the tool deletes a memory, implying a destructive mutation, but lacks critical details: whether deletion is permanent or reversible, what permissions are required, if there are rate limits, or what happens on success/failure. For a destructive operation with zero annotation coverage, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action ('Delete') and immediately specifies the target, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive tool with no annotations and no output schema, the description is incomplete. It doesn't cover behavioral aspects (permanence, permissions), error conditions, or return values, leaving significant gaps for the agent to operate safely and effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'key' documented as 'Memory key to delete'. The description adds minimal value beyond this, merely restating 'by key' without explaining key format, source, or constraints. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete') and target resource ('a stored memory by key'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'recall' (likely for retrieving memories) or 'remember' (likely for storing memories), which would require explicit comparison for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing memory to delete), exclusions, or relationships to sibling tools like 'recall' or 'remember', leaving the agent to infer usage context independently.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_joke: A

Get a random joke, optionally filtered by category (e.g., 'general', 'programming') or type ('single' or 'twopart'). Returns joke text, category, and content flags.

Parameters (JSON Schema)
type (optional): Joke type. One of: single, twopart. Omit to allow either type.
category (optional): Joke category. One of: Any, Programming, Misc, Dark, Pun, Spooky, Christmas. Defaults to "Any".
safe_mode (optional): When true, only return jokes that are flagged safe by JokeAPI. Defaults to true.
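
All three get_joke parameters are optional, so a call only needs to pass the filters it cares about. A sketch under the same placeholder-URL assumption, with argument values drawn from the schema's enumerations:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "jokes-demo", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://example.com/mcp")));

// Request one single-part programming joke; safe_mode defaults to true anyway.
const joke = await client.callTool({
  name: "get_joke",
  arguments: { category: "Programming", type: "single", safe_mode: true },
});
console.log(joke.content);
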
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions 'safe mode' and filtering options, but lacks details on behavioral traits such as rate limits, authentication needs, error handling, or what 'random' entails (e.g., source, freshness). For a tool with no annotations, this is a significant gap in disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, stating the core purpose first followed by optional features in a single, efficient sentence. Every part earns its place without waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (3 optional parameters, no output schema, no annotations), the description is minimally complete but lacks depth. It covers what the tool does and parameters, but without annotations or output schema, it should ideally include more on behavior or results to be fully helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (type, category, safe_mode) with descriptions and defaults. The description adds minimal value by listing the parameters but does not provide additional meaning beyond what the schema specifies, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('a random joke'), and distinguishes it from siblings by specifying it's for retrieving a single random joke rather than categories, flags, or search results. It's specific about the core functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning optional filters (category, type, safe mode), but does not explicitly state when to use this tool versus alternatives like 'search_jokes' for non-random queries or 'get_joke_categories' for listing categories. No exclusions or clear context for sibling differentiation are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_joke_categories: B

List all available joke categories (e.g., 'general', 'programming', 'knock-knock'). Use to filter get_joke results.

Parameters (JSON Schema)
No parameters
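
Zero-parameter tools such as this one (and get_joke_flags below) are invoked with an empty arguments object. A minimal sketch, placeholder URL as before:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "jokes-demo", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://example.com/mcp")));

// No inputs to pass: an empty arguments object is enough.
const categories = await client.callTool({
  name: "get_joke_categories",
  arguments: {},
});
console.log(categories.content);
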

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the tool lists categories but does not describe any behavioral traits, such as whether it's a read-only operation, if there are rate limits, or what the output format might be. This is a significant gap for a tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It is appropriately sized and front-loaded, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the tool (simple list operation) but lack of annotations and no output schema, the description is incomplete. It does not explain what the return values look like (e.g., format of categories) or any behavioral context, which is necessary for the agent to use the tool effectively without structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and the schema description coverage is 100%, so there is no need for parameter details in the description. The baseline for 0 parameters is 4, as the description appropriately does not add unnecessary parameter information beyond what the schema already covers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List all available joke categories') and the resource ('supported by JokeAPI'), making the purpose specific and understandable. However, it does not explicitly differentiate this tool from its siblings (like 'get_joke' or 'search_jokes'), which would require mentioning that this tool retrieves categories rather than jokes themselves.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It lacks context about prerequisites, such as whether authentication is needed, or comparisons to sibling tools like 'get_joke' or 'search_jokes' that might also involve categories. This leaves the agent without explicit usage instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_joke_flags: B

List all content filter flags (e.g., explicit, political, racist). Use to understand what filters exclude.

Parameters (JSON Schema)
No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It implies a read-only operation ('List') but doesn't disclose behavioral traits such as rate limits, authentication needs, or response format. The description is minimal and lacks context beyond the basic action.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key information ('List all available joke flags') without any wasted words. It's appropriately sized for a simple tool with no parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (0 parameters, no output schema, no annotations), the description is minimally adequate. It states what the tool does but lacks details on usage context, behavioral traits, or output, which could be helpful for an agent despite the simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100%, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, aligning with the schema. A baseline of 4 applies, since with no parameters there is nothing further for the description to document.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('List') and resource ('all available joke flags'), and identifies the domain ('JokeAPI'). It doesn't explicitly differentiate from sibling tools like 'get_joke_categories', but the resource specificity makes the purpose clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_joke' or 'search_jokes'. It mentions the resource but doesn't explain the context or prerequisites for retrieving joke flags versus jokes themselves.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the core functionality (retrieve by key or list all) and persistence across sessions, but doesn't mention error handling, performance characteristics, or what happens when a non-existent key is requested. It provides basic context but lacks comprehensive behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each serve distinct purposes: the first explains the core functionality, the second provides usage context. There is zero wasted language, and information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no output schema and no annotations, the description provides adequate context about what the tool does and when to use it. However, it doesn't describe the return format (what a 'memory' looks like when retrieved) or error conditions, leaving some gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage, so the baseline is 3. The description adds meaningful context by explaining the semantic effect of omitting the key parameter ('omit to list all keys'), which clarifies the tool's dual behavior beyond what the schema alone provides. This elevates the score above baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes from sibling tools like 'remember' (store) and 'forget' (delete) by focusing on retrieval operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'to retrieve context you saved earlier in the session or in previous sessions.' It also specifies when to omit the key parameter ('omit key to list all keys'), giving clear operational instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
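
The three memory tools compose as a store/read/delete cycle: remember writes, recall reads (see the recall section above), and forget deletes (see the forget section above). A sketch of the full roundtrip, reusing the "target_ticker" key from the schema's examples; the value and the endpoint URL are placeholders.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "jokes-demo", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://example.com/mcp")));

// Store a value under an example key, read it back, then delete it.
await client.callTool({ name: "remember", arguments: { key: "target_ticker", value: "AAPL" } });
const saved = await client.callTool({ name: "recall", arguments: { key: "target_ticker" } });
console.log(saved.content);
await client.callTool({ name: "forget", arguments: { key: "target_ticker" } });
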
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and adds valuable behavioral context beyond the basic storage action. It discloses persistence traits ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), which are critical for understanding data longevity and session management. However, it doesn't cover potential limitations like storage size or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with two concise sentences that directly convey the tool's purpose and key behavioral details. Every sentence earns its place by providing essential information without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (storage with session-based persistence), no annotations, and no output schema, the description is mostly complete. It covers what the tool does, usage context, and persistence behavior, but lacks details on return values or error handling. For a tool with no structured output, more on expected responses would enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('key' and 'value') with examples and types. The description does not add any additional meaning or semantics beyond what the schema provides, such as constraints on key formats or value encoding. The baseline score of 3 reflects adequate but no extra parameter insight.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Store a key-value pair') and resource ('in your session memory'), distinguishing it from sibling tools like 'recall' (retrieval) and 'forget' (deletion). It explicitly mentions what gets stored ('intermediate findings, user preferences, or context across tool calls'), making the purpose unambiguous and distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), but it does not explicitly mention when not to use it or name alternatives. For example, it doesn't contrast with 'recall' for retrieval or specify if this is the only way to store data in this system.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_jokes: B

Search jokes by keyword or phrase. Returns matching jokes with categories, types, and content flags.

Parameters (JSON Schema)
query (required): Keyword or phrase to search for within joke text.
amount (optional): Number of jokes to return. Defaults to 5.
category (optional): Limit search to a category. One of: Any, Programming, Misc, Dark, Pun, Spooky, Christmas. Defaults to "Any".
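
Finally, a hedged search_jokes sketch; the keyword is an arbitrary example, and omitted parameters fall back to their schema defaults (amount 5, category "Any"). Placeholder URL as in the earlier snippets.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "jokes-demo", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://example.com/mcp")));

// Search joke text for a keyword, limited to the Programming category.
const matches = await client.callTool({
  name: "search_jokes",
  arguments: { query: "bug", amount: 3, category: "Programming" },
});
console.log(matches.content);
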
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the search functionality but fails to describe key behaviors: whether results are paginated, sorted, or limited; what happens if no matches are found; if there are rate limits; or what the return format looks like (e.g., list of joke objects). This is a significant gap for a search tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without redundancy. It is appropriately sized for a simple search tool, front-loaded with the core functionality, and contains no wasted words or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with filtering), lack of annotations, and no output schema, the description is incomplete. It omits critical context: behavioral traits (e.g., result limits, error handling), output format, and usage distinctions from siblings. For a tool with three parameters and no structured output documentation, this leaves the agent under-informed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all three parameters (query, amount, category) with their types, defaults, and constraints. The description adds no parameter-specific information beyond what the schema provides, such as search semantics (e.g., case-sensitivity) or category details. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('search') and resource ('jokes') with a specific scope ('containing a specific keyword or phrase'). It distinguishes from sibling tools like 'get_joke' (which likely fetches a single joke) and 'get_joke_categories' (which lists categories). However, it doesn't explicitly differentiate from 'get_joke_flags' (which might retrieve joke metadata), leaving slight ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for keyword-based searches, but provides no explicit guidance on when to use this tool versus alternatives like 'get_joke' (e.g., for random jokes) or 'get_joke_categories' (e.g., for browsing categories). It lacks any 'when-not-to-use' statements or prerequisites, leaving the agent to infer context from tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
