jokes

Server Details

Jokes MCP — wraps JokeAPI v2 (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-jokes
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.3/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: get_joke retrieves a random joke with filters, get_joke_categories lists categories, get_joke_flags lists content filters, and search_jokes searches by keyword. There is no overlap in functionality, making it easy for an agent to select the right tool without confusion.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with underscores (e.g., get_joke, get_joke_categories, get_joke_flags, search_jokes). The verbs 'get' and 'search' are appropriately used and maintain a predictable naming convention throughout the set.

Tool Count: 5/5

With 4 tools, this server is well-scoped for its purpose of joke retrieval and information. Each tool serves a specific and necessary function (retrieval, category listing, flag listing, and search), with no redundant or missing tools for the domain.

Completeness: 5/5

The tool surface provides complete coverage for joke-related operations: retrieving jokes with filters, listing categories and flags for navigation, and searching jokes. There are no obvious gaps, as all core functionalities for interacting with a joke API are covered without dead ends.

Available Tools

4 tools
get_joke: A

Get a random joke. Optionally filter by category, type (single-line or two-part), and safe mode.
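Since the transport is Streamable HTTP, invoking this tool means POSTing a JSON-RPC 2.0 tools/call request to the server. A minimal sketch of the request body; the envelope shape comes from the MCP specification, while the argument values are purely illustrative:

```python
import json

# JSON-RPC 2.0 envelope for an MCP "tools/call" request targeting
# get_joke. Argument names match the tool's documented parameters;
# the values here are only illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_joke",
        "arguments": {
            "category": "Programming",
            "type": "single",
            "safe_mode": True,
        },
    },
}

body = json.dumps(request)  # sent as the POST body to the server's MCP endpoint
```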

Parameters (JSON Schema)
- type (optional): Joke type. One of: single, twopart. Omit to allow either type.
- category (optional): Joke category. One of: Any, Programming, Misc, Dark, Pun, Spooky, Christmas. Defaults to "Any".
- safe_mode (optional): When true, only return jokes that are flagged safe by JokeAPI. Defaults to true.
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions 'safe mode' and filtering options, but lacks details on behavioral traits such as rate limits, authentication needs, error handling, or what 'random' entails (e.g., source, freshness). For a tool with no annotations, this is a significant gap in disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, stating the core purpose first followed by optional features in a single, efficient sentence. Every part earns its place without waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (3 optional parameters, no output schema, no annotations), the description is minimally complete but lacks depth. It covers what the tool does and parameters, but without annotations or output schema, it should ideally include more on behavior or results to be fully helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (type, category, safe_mode) with descriptions and defaults. The description adds minimal value by listing the parameters but does not provide additional meaning beyond what the schema specifies, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('a random joke'), and distinguishes it from siblings by specifying it's for retrieving a single random joke rather than categories, flags, or search results. It's specific about the core functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning optional filters (category, type, safe mode), but does not explicitly state when to use this tool versus alternatives like 'search_jokes' for non-random queries or 'get_joke_categories' for listing categories. No exclusions or clear context for sibling differentiation are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_joke_categories: B

List all available joke categories supported by JokeAPI.

Parameters (JSON Schema)

No parameters.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the tool lists categories but does not describe any behavioral traits, such as whether it's a read-only operation, if there are rate limits, or what the output format might be. This is a significant gap for a tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It is appropriately sized and front-loaded, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the tool (simple list operation) but lack of annotations and no output schema, the description is incomplete. It does not explain what the return values look like (e.g., format of categories) or any behavioral context, which is necessary for the agent to use the tool effectively without structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and the schema description coverage is 100%, so there is no need for parameter details in the description. The baseline for 0 parameters is 4, as the description appropriately does not add unnecessary parameter information beyond what the schema already covers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List all available joke categories') and the resource ('supported by JokeAPI'), making the purpose specific and understandable. However, it does not explicitly differentiate this tool from its siblings (like 'get_joke' or 'search_jokes'), which would require mentioning that this tool retrieves categories rather than jokes themselves.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It lacks context about prerequisites, such as whether authentication is needed, or comparisons to sibling tools like 'get_joke' or 'search_jokes' that might also involve categories. This leaves the agent without explicit usage instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_joke_flags: B

List all available joke flags (content filters) supported by JokeAPI.

Parameters (JSON Schema)

No parameters.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It implies a read-only operation ('List') but doesn't disclose behavioral traits such as rate limits, authentication needs, or response format. The description is minimal and lacks context beyond the basic action.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key information ('List all available joke flags') without any wasted words. It's appropriately sized for a simple tool with no parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (0 parameters, no output schema, no annotations), the description is minimally adequate. It states what the tool does but lacks details on usage context, behavioral traits, or output, which could be helpful for an agent despite the simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100%, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, aligning with the schema. A baseline of 4 applies when there are no parameters to document.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('List') and resource ('all available joke flags'), and identifies the domain ('JokeAPI'). It doesn't explicitly differentiate from sibling tools like 'get_joke_categories', but the resource specificity makes the purpose clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_joke' or 'search_jokes'. It mentions the resource but doesn't explain the context or prerequisites for retrieving joke flags versus jokes themselves.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_jokes: B

Search for jokes containing a specific keyword or phrase.

Parameters (JSON Schema)
- query (required): Keyword or phrase to search for within joke text.
- amount (optional): Number of jokes to return. Defaults to 5.
- category (optional): Limit search to a category. One of: Any, Programming, Misc, Dark, Pun, Spooky, Christmas. Defaults to "Any".
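JokeAPI v2 exposes keyword search through its `contains` query parameter, which is presumably what search_jokes uses. A sketch of the mapping, assumed from JokeAPI's public documentation rather than this server's source:

```python
from urllib.parse import urlencode

JOKEAPI_BASE = "https://v2.jokeapi.dev/joke"  # public JokeAPI v2 endpoint

def build_search_url(query: str, amount: int = 5, category: str = "Any") -> str:
    """Translate search_jokes' parameters into a JokeAPI v2 request URL."""
    # `contains` filters jokes whose text includes the given string;
    # `amount` caps the number of jokes returned per request.
    params = urlencode({"contains": query, "amount": amount})
    return f"{JOKEAPI_BASE}/{category}?{params}"
```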
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the search functionality but fails to describe key behaviors: whether results are paginated, sorted, or limited; what happens if no matches are found; if there are rate limits; or what the return format looks like (e.g., list of joke objects). This is a significant gap for a search tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without redundancy. It is appropriately sized for a simple search tool, front-loaded with the core functionality, and contains no wasted words or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with filtering), lack of annotations, and no output schema, the description is incomplete. It omits critical context: behavioral traits (e.g., result limits, error handling), output format, and usage distinctions from siblings. For a tool with three parameters and no structured output documentation, this leaves the agent under-informed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all three parameters (query, amount, category) with their types, defaults, and constraints. The description adds no parameter-specific information beyond what the schema provides, such as search semantics (e.g., case-sensitivity) or category details. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('search') and resource ('jokes') with a specific scope ('containing a specific keyword or phrase'). It distinguishes from sibling tools like 'get_joke' (which likely fetches a single joke) and 'get_joke_categories' (which lists categories). However, it doesn't explicitly differentiate from 'get_joke_flags' (which might retrieve joke metadata), leaving slight ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for keyword-based searches, but provides no explicit guidance on when to use this tool versus alternatives like 'get_joke' (e.g., for random jokes) or 'get_joke_categories' (e.g., for browsing categories). It lacks any 'when-not-to-use' statements or prerequisites, leaving the agent to infer context from tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
