Server Details

Trivia MCP — wraps Open Trivia Database (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-trivia
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.9/5 across 3 of 3 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: list_categories enumerates available categories, get_category_stats provides statistics for a specific category, and get_questions retrieves questions with filtering options. There is no overlap or ambiguity in their functions.

Naming Consistency: 5/5

All tools follow a consistent verb_noun naming pattern (list_categories, get_category_stats, get_questions), using snake_case throughout. The naming is predictable and readable.

Tool Count: 5/5

With 3 tools, the server is well-scoped for its trivia domain, covering essential operations like listing categories, getting stats, and fetching questions. Each tool earns its place without being too sparse or bloated.

Completeness: 5/5

The tool set provides complete coverage for the trivia domain: listing categories, retrieving statistics, and fetching questions with filters. There are no obvious gaps, and agents can perform typical trivia-related tasks without dead ends.
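A typical trivia task chains these three tools in order. Sketched below as raw MCP `tools/call` requests (JSON-RPC 2.0, per the MCP specification); the category ID and filter values are illustrative, not taken from this listing:

```python
import json

def tool_call(call_id: int, name: str, arguments: dict) -> dict:
    """Build an MCP tools/call request envelope (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# 1. Discover the available categories (no arguments).
step1 = tool_call(1, "list_categories", {})
# 2. Check how many questions a chosen category holds (ID 9 is illustrative).
step2 = tool_call(2, "get_category_stats", {"category": 9})
# 3. Fetch a filtered batch of questions from that category.
step3 = tool_call(3, "get_questions",
                  {"category": 9, "difficulty": "easy", "amount": 5})

for step in (step1, step2, step3):
    print(json.dumps(step))
```

Each step's output feeds the next: the IDs from step 1 parameterize steps 2 and 3, which is why the listing notes there are no dead ends.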

Available Tools

3 tools
get_category_stats: A

Get the total and per-difficulty question counts for a specific category.

Parameters (JSON Schema)

category (required): Category ID. Use list_categories to get available IDs.
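The tool publishes no output schema. Upstream, Open Trivia Database's `api_count.php` endpoint reports per-category counts in roughly the shape below, so a plausible (but assumed) wrapper response might be parsed like this:

```python
# Hypothetical response, modeled on Open Trivia DB's api_count.php format;
# the actual wrapper may rename or flatten these fields.
stats = {
    "category_id": 9,
    "category_question_count": {
        "total_question_count": 325,
        "total_easy_question_count": 122,
        "total_medium_question_count": 134,
        "total_hard_question_count": 69,
    },
}

counts = stats["category_question_count"]
per_difficulty = {
    level: counts[f"total_{level}_question_count"]
    for level in ("easy", "medium", "hard")
}
# The per-difficulty counts should sum to the category total.
assert sum(per_difficulty.values()) == counts["total_question_count"]
print(per_difficulty)
```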
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool retrieves counts, which suggests a read-only operation, but does not disclose behavioral traits such as error handling, performance characteristics, or whether it requires authentication. This leaves significant gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose without unnecessary words. It effectively communicates the tool's function in a compact form.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter, no output schema, no annotations), the description is minimally adequate. It covers the purpose but lacks details on output format, error cases, or behavioral context, which would be needed for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the 'category' parameter as a number ID with a reference to list_categories. The description adds no additional parameter details beyond what the schema provides, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'total and per-difficulty question counts for a specific category.' It distinguishes from siblings by focusing on statistics rather than listing categories (list_categories) or retrieving questions (get_questions).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for statistical analysis of a category, and the schema references list_categories to get IDs, providing some context. However, it lacks explicit guidance on when to use this tool versus alternatives like get_questions for detailed question data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_questions: A

Get trivia questions from the Open Trivia Database. Optionally filter by category, difficulty, and question type.

Parameters (JSON Schema)

type (optional): Question type. One of: multiple (multiple choice), boolean (true/false).
amount (optional): Number of questions to return. Defaults to 10. Max 50.
category (optional): Category ID to filter by. Use list_categories to get available IDs.
difficulty (optional): Difficulty level. One of: easy, medium, hard.
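Per the parameter table, `amount` defaults to 10 and caps at 50, and `difficulty` and `type` accept fixed enumerations. A small client-side validator (function and variable names are my own, not part of the server) could enforce those constraints before issuing the call:

```python
VALID_DIFFICULTIES = {"easy", "medium", "hard"}
VALID_TYPES = {"multiple", "boolean"}

def build_get_questions_args(amount=10, category=None,
                             difficulty=None, qtype=None) -> dict:
    """Validate and assemble the arguments dict for the get_questions tool."""
    if not 1 <= amount <= 50:
        raise ValueError("amount must be between 1 and 50")
    if difficulty is not None and difficulty not in VALID_DIFFICULTIES:
        raise ValueError(f"difficulty must be one of {sorted(VALID_DIFFICULTIES)}")
    if qtype is not None and qtype not in VALID_TYPES:
        raise ValueError(f"type must be one of {sorted(VALID_TYPES)}")
    # Only include optional filters that were actually supplied.
    args = {"amount": amount}
    if category is not None:
        args["category"] = category
    if difficulty is not None:
        args["difficulty"] = difficulty
    if qtype is not None:
        args["type"] = qtype
    return args

print(build_get_questions_args(amount=5, difficulty="easy"))
```

Validating locally avoids burning a round trip on an argument the server would reject anyway.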
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions optional filtering and the source (Open Trivia Database), but does not cover important behavioral aspects such as rate limits, authentication needs, error handling, or response format. The description adds some context but leaves gaps in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, consisting of two efficient sentences that directly state the tool's purpose and optional features. Every sentence earns its place with no wasted words, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no output schema, no annotations), the description is adequate but incomplete. It covers the purpose and filtering options but lacks details on behavioral traits like rate limits or response structure. Without annotations or output schema, more context would be beneficial for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already fully documents all four parameters. The description adds minimal value by listing the filterable fields (category, difficulty, type) without providing additional syntax or format details beyond what the schema provides. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and resource 'trivia questions from the Open Trivia Database', making the purpose specific and unambiguous. It distinguishes itself from sibling tools like 'get_category_stats' and 'list_categories' by focusing on retrieving questions rather than category information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage by mentioning optional filtering parameters (category, difficulty, type), but does not explicitly state when to use this tool versus alternatives like 'list_categories' for category IDs. It implies usage for retrieving questions with filters, but lacks explicit exclusions or comparisons to siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_categories: A

List all available trivia categories and their IDs.

Parameters (JSON Schema)

No parameters
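list_categories supplies the `category` IDs the other two tools require. Upstream, Open Trivia Database returns categories as a list of `{"id": ..., "name": ...}` objects; assuming the wrapper passes a similar list through (an assumption, since no output schema is published), a name-to-ID lookup is one comprehension:

```python
# Sample entries in Open Trivia DB's documented category format;
# the wrapper's exact output shape is an assumption here.
categories = [
    {"id": 9, "name": "General Knowledge"},
    {"id": 18, "name": "Science: Computers"},
    {"id": 23, "name": "History"},
]

id_by_name = {c["name"]: c["id"] for c in categories}
print(id_by_name["Science: Computers"])
```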

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool lists categories and IDs, indicating a read-only operation, but does not add behavioral traits such as rate limits, pagination, or error handling. The description is accurate but lacks depth beyond the basic purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the purpose ('List all available trivia categories and their IDs') with zero waste. Every word earns its place, making it efficient and easy to understand.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no annotations, no output schema), the description is complete enough for a basic list operation. It specifies what is listed (categories and IDs), but lacks details on output format or behavioral context, which is acceptable for this low-complexity tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed and the description rightly adds none. A baseline score of 4 applies: with no parameters to document, the description compensates by stating the tool's function clearly and without redundancy.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List all available trivia categories and their IDs') with the exact resource ('trivia categories'), distinguishing it from siblings like 'get_category_stats' (which focuses on statistics) and 'get_questions' (which retrieves questions). It uses precise verbs and specifies the output format (IDs included).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by stating it lists 'all available' categories, suggesting it's for retrieving a comprehensive list. However, it does not explicitly state when to use this tool versus alternatives like 'get_category_stats' (e.g., for detailed stats) or 'get_questions' (e.g., for fetching questions), nor does it mention any prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

