chucknorris

Server Details

Chuck Norris MCP — wraps chucknorris.io (free, no auth)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-chucknorris
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.3/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: joke_by_category fetches jokes from a specific category, list_categories returns available categories, random_joke provides a random joke, and search_jokes finds jokes by keyword. There is no overlap or ambiguity between these functions.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case: joke_by_category, list_categories, random_joke, and search_jokes. The naming is predictable and readable throughout.

Tool Count: 5/5

With 4 tools, this server is well-scoped for its purpose of accessing Chuck Norris jokes. Each tool earns its place by covering distinct aspects: listing categories, fetching random jokes, searching, and filtering by category.

Completeness: 5/5

The tool surface is complete for the domain of joke retrieval. It covers all essential operations: listing categories, getting random jokes, searching by keyword, and filtering by category, with no obvious gaps or dead ends.

Available Tools

4 tools
joke_by_category: B

Get a random Chuck Norris joke from a specific category.

Parameters (JSON Schema)
  category (required): Category to fetch a joke from. Use list_categories to see valid values.
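Since the server wraps chucknorris.io directly, this tool presumably maps onto the upstream category endpoint. A minimal Python sketch of that mapping, assuming chucknorris.io's published `/jokes/random?category=` route and its JSON response shape; the helper names here are illustrative, not taken from the server's source:

```python
import json
import urllib.request
from urllib.parse import urlencode

API_BASE = "https://api.chucknorris.io/jokes"

def joke_by_category_url(category: str) -> str:
    # The upstream route takes the category as a URL-encoded query parameter.
    return f"{API_BASE}/random?{urlencode({'category': category})}"

def joke_by_category(category: str) -> str:
    # Fetch one random joke from the given category; the response is a JSON
    # object whose 'value' field holds the joke text.
    with urllib.request.urlopen(joke_by_category_url(category)) as resp:
        return json.load(resp)["value"]
```

An invalid category makes the upstream API respond with an error status, which `urllib` would surface as an `HTTPError`; the tool description does not say how the server itself handles that case.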
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool fetches a 'random' joke, implying non-deterministic behavior, but doesn't disclose other traits like error handling (e.g., what happens if the category is invalid), rate limits, authentication needs, or response format. The description is minimal and lacks essential operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without any fluff or redundancy. It is appropriately sized and front-loaded, with every word contributing to clarity, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (a fetch operation with a parameter), no annotations, and no output schema, the description is incomplete. It lacks details on behavioral traits (e.g., error handling, randomness), output format, or usage distinctions from siblings. The description alone is insufficient for an agent to fully understand how to invoke and interpret results from this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'category' fully documented in the schema, including its type and a note to use 'list_categories' for valid values. The description adds no additional parameter semantics beyond what the schema provides, so it meets the baseline of 3 for high schema coverage without adding value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('a random Chuck Norris joke') with specific scope ('from a specific category'), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'random_joke' (which might fetch jokes without category filtering) or 'search_jokes' (which might allow keyword searches).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when a joke from a specific category is needed, and the input schema references 'list_categories' to see valid values, providing some contextual guidance. However, it doesn't explicitly state when to use this tool versus alternatives like 'random_joke' (for any random joke) or 'search_jokes' (for keyword-based searches), nor does it mention any exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_categories: A

List all available Chuck Norris joke categories.

Parameters (JSON Schema)
  No parameters
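For reference, the upstream call this tool presumably wraps is chucknorris.io's categories route. A minimal sketch, assuming the published `/jokes/categories` endpoint, which returns a plain JSON array of category names; the function name is illustrative:

```python
import json
import urllib.request

API_BASE = "https://api.chucknorris.io/jokes"
CATEGORIES_URL = f"{API_BASE}/categories"

def list_categories() -> list[str]:
    # The categories endpoint takes no parameters and returns a JSON
    # array of strings, e.g. ["animal", "career", ...].
    with urllib.request.urlopen(CATEGORIES_URL) as resp:
        return json.load(resp)
```

A typical flow is to call this once and feed one of the returned names into joke_by_category.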

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions 'List all available' but doesn't disclose behavioral traits such as whether this is a read-only operation, if it requires authentication, rate limits, or the format of the returned categories (e.g., list of strings). For a tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose ('List all available Chuck Norris joke categories') with zero wasted words. It's appropriately sized for a simple tool, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is minimally complete—it states what it does. However, it lacks details on return values (since no output schema) and behavioral context, which could help an agent understand the result format or usage constraints, leaving room for improvement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description doesn't add param info, which is appropriate, earning a baseline score of 4 since it doesn't need to compensate for any schema gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List') and resource ('all available Chuck Norris joke categories'), distinguishing it from siblings like joke_by_category (fetches jokes in a category), random_joke (gets a random joke), and search_jokes (filters jokes). It precisely defines what the tool does without redundancy.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by stating it lists categories, suggesting it's for retrieving category options before using tools like joke_by_category. However, it lacks explicit guidance on when to use this versus alternatives (e.g., if you need categories for filtering) or any exclusions, leaving usage context inferred rather than stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

random_joke: B

Get a random Chuck Norris joke.

Parameters (JSON Schema)
  No parameters
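The underlying call here is presumably chucknorris.io's random endpoint. A minimal sketch, assuming the published `/jokes/random` route and a JSON response whose `value` field holds the joke text; helper names are illustrative:

```python
import json
import urllib.request

API_BASE = "https://api.chucknorris.io/jokes"
RANDOM_URL = f"{API_BASE}/random"

def extract_joke(payload: dict) -> str:
    # The random-joke response is a JSON object; the joke text itself
    # lives in the 'value' field alongside id/url metadata.
    return payload["value"]

def random_joke() -> str:
    # Fetch one random joke drawn from all categories.
    with urllib.request.urlopen(RANDOM_URL) as resp:
        return extract_joke(json.load(resp))
```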

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states what the tool does but fails to disclose behavioral traits such as whether it requires authentication, rate limits, or what the output format looks like (e.g., text string, structured data). This is a significant gap for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It is front-loaded with the core purpose and appropriately sized for a simple tool, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., joke text, metadata), behavioral aspects like error handling, or how it differs from siblings beyond the 'random' hint. For a tool in this context, more detail is needed to be fully helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately doesn't mention parameters, aligning with the schema. Baseline is 4 for zero parameters, as it avoids unnecessary detail.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('a random Chuck Norris joke'), making the purpose immediately understandable. It distinguishes from sibling tools like 'joke_by_category' and 'search_jokes' by specifying 'random' selection, though it doesn't explicitly contrast with 'list_categories'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when a random joke is needed, but provides no explicit guidance on when to choose this tool over alternatives like 'joke_by_category' or 'search_jokes'. It lacks any mention of prerequisites, exclusions, or comparative contexts with siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_jokes: C

Search Chuck Norris jokes by keyword.

Parameters (JSON Schema)
  query (required): Keyword or phrase to search for within joke text.
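Upstream, this tool presumably maps to chucknorris.io's search route. A sketch assuming the published `/jokes/search?query=` endpoint and its documented response shape (a `total` count plus a `result` array of joke objects); helper names are illustrative:

```python
from urllib.parse import urlencode

API_BASE = "https://api.chucknorris.io/jokes"

def search_url(query: str) -> str:
    # Keywords with spaces or special characters must be URL-encoded
    # into the 'query' parameter.
    return f"{API_BASE}/search?{urlencode({'query': query})}"

def extract_results(payload: dict) -> list[str]:
    # The search response nests matches under 'result', with each
    # match carrying its text in 'value'.
    return [item["value"] for item in payload.get("result", [])]
```

An empty `result` array (zero matches) is a normal response, not an error, which is the kind of behavioral detail the tool description leaves out.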
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal information. It states the tool searches jokes but doesn't cover aspects like rate limits, authentication needs, response format, or pagination. This leaves significant gaps in understanding how the tool behaves beyond its basic function.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It is front-loaded with the core purpose, making it easy to parse quickly. Every part of the sentence contributes essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete for effective tool use. It doesn't explain what the search returns (e.g., list of jokes, metadata), how results are formatted, or any behavioral constraints. For a search tool with no structured context, this leaves critical gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'query' fully documented in the schema. The description adds no additional parameter details beyond what the schema provides, such as search syntax or examples. This meets the baseline for high schema coverage but doesn't enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Search') and resource ('Chuck Norris jokes') with a specific mechanism ('by keyword'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'joke_by_category' or 'random_joke', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'joke_by_category' or 'random_joke'. It mentions searching by keyword but doesn't clarify scenarios where this is preferred over other methods, leaving the agent without contextual usage instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
