trivia
Server Details
Trivia MCP — wraps Open Trivia Database (free, no auth)

| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-trivia |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 3 of 3 tools scored.
Each tool has a clearly distinct purpose: list_categories enumerates available categories, get_category_stats provides statistics for a specific category, and get_questions retrieves questions with filtering options. There is no overlap or ambiguity in their functions.
All tools follow a consistent verb_noun naming pattern (list_categories, get_category_stats, get_questions), using snake_case throughout. The naming is predictable and readable.
With 3 tools, the server is well-scoped for its trivia domain, covering essential operations like listing categories, getting stats, and fetching questions. Each tool earns its place without being too sparse or bloated.
The tool set provides complete coverage for the trivia domain: listing categories, retrieving statistics, and fetching questions with filters. There are no obvious gaps, and agents can perform typical trivia-related tasks without dead ends.
Available Tools
3 tools

get_category_stats
Get the total and per-difficulty question counts for a specific category.
| Name | Required | Description | Default |
|---|---|---|---|
| category | Yes | Category ID. Use list_categories to get available IDs. | |
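The server publishes no output schema, but since it wraps the Open Trivia Database, this tool presumably calls OpenTDB's public question-count endpoint (`api_count.php`). A minimal sketch of building that request and flattening the response, assuming OpenTDB's documented response shape; the sample numbers are illustrative, not real counts:

```python
import json

OPENTDB_COUNT_URL = "https://opentdb.com/api_count.php"  # public OpenTDB endpoint

def count_url(category_id: int) -> str:
    """Build the question-count request URL for one category."""
    return f"{OPENTDB_COUNT_URL}?category={category_id}"

def parse_counts(payload: str) -> dict:
    """Flatten OpenTDB's count response into {total, easy, medium, hard}."""
    counts = json.loads(payload)["category_question_count"]
    return {
        "total": counts["total_question_count"],
        "easy": counts["total_easy_question_count"],
        "medium": counts["total_medium_question_count"],
        "hard": counts["total_hard_question_count"],
    }

# Illustrative response body (numbers made up; shape matches OpenTDB's docs).
sample = json.dumps({
    "category_id": 9,
    "category_question_count": {
        "total_question_count": 300,
        "total_easy_question_count": 120,
        "total_medium_question_count": 120,
        "total_hard_question_count": 60,
    },
})
print(count_url(9))          # https://opentdb.com/api_count.php?category=9
print(parse_counts(sample))  # {'total': 300, 'easy': 120, 'medium': 120, 'hard': 60}
```

The MCP tool may return a different envelope; only the upstream OpenTDB shape is documented, so treat the parser as a sketch.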
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool retrieves counts, which suggests a read-only operation, but does not disclose behavioral traits such as error handling, performance characteristics, or whether it requires authentication. This leaves significant gaps for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose without unnecessary words. It effectively communicates the tool's function in a compact form.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, no output schema, no annotations), the description is minimally adequate. It covers the purpose but lacks details on output format, error cases, or behavioral context, which would be needed for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the 'category' parameter as a number ID with a reference to list_categories. The description adds no additional parameter details beyond what the schema provides, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'total and per-difficulty question counts for a specific category.' It distinguishes from siblings by focusing on statistics rather than listing categories (list_categories) or retrieving questions (get_questions).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for statistical analysis of a category, and the schema references list_categories to get IDs, providing some context. However, it lacks explicit guidance on when to use this tool versus alternatives like get_questions for detailed question data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_questions
Get trivia questions from the Open Trivia Database. Optionally filter by category, difficulty, and question type.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Question type. One of: multiple (multiple choice), boolean (true/false). | |
| amount | No | Number of questions to return. Defaults to 10. Max 50. | |
| category | No | Category ID to filter by. Use list_categories to get available IDs. | |
| difficulty | No | Difficulty level. One of: easy, medium, hard. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions optional filtering and the source (Open Trivia Database), but does not cover important behavioral aspects such as rate limits, authentication needs, error handling, or response format. The description adds some context but leaves gaps in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, consisting of two efficient sentences that directly state the tool's purpose and optional features. Every sentence earns its place with no wasted words, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, no output schema, no annotations), the description is adequate but incomplete. It covers the purpose and filtering options but lacks details on behavioral traits like rate limits or response structure. Without annotations or output schema, more context would be beneficial for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already fully documents all four parameters. The description adds minimal value by listing the filterable fields (category, difficulty, type) without providing additional syntax or format details beyond what the schema provides. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and resource 'trivia questions from the Open Trivia Database', making the purpose specific and unambiguous. It distinguishes itself from sibling tools like 'get_category_stats' and 'list_categories' by focusing on retrieving questions rather than category information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage by mentioning optional filtering parameters (category, difficulty, type), but does not explicitly state when to use this tool versus alternatives like 'list_categories' for category IDs. It implies usage for retrieving questions with filters, but lacks explicit exclusions or comparisons to siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_categories
List all available trivia categories and their IDs.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
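Since no output schema is published, the response shape has to be inferred from the upstream API. OpenTDB's category endpoint (`api_category.php`) returns a `trivia_categories` array of `{id, name}` objects; a sketch of turning that into a name-to-ID lookup (the sample entries are illustrative):

```python
import json

def category_index(payload: str) -> dict[str, int]:
    """Map category names to their numeric IDs."""
    return {c["name"]: c["id"] for c in json.loads(payload)["trivia_categories"]}

# Illustrative response (shape matches OpenTDB's api_category.php).
sample = json.dumps({"trivia_categories": [
    {"id": 9, "name": "General Knowledge"},
    {"id": 18, "name": "Science: Computers"},
]})
index = category_index(sample)
print(index["Science: Computers"])  # 18
```

An agent would typically call this once, cache the mapping, and feed the IDs into get_category_stats or get_questions.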
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool lists categories and IDs, indicating a read-only operation, but does not add behavioral traits such as rate limits, pagination, or error handling. The description is accurate but lacks depth beyond the basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the purpose ('List all available trivia categories and their IDs') with zero waste. Every word earns its place, making it efficient and easy to understand.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no annotations, no output schema), the description is complete enough for a basic list operation. It specifies what is listed (categories and IDs), but lacks details on output format or behavioral context, which is acceptable for this low-complexity tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so there is nothing for the description to document and it appropriately adds no parameter details. A baseline score of 4 is applied: with no parameters to cover, the description earns it by stating the tool's function clearly and without redundancy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List all available trivia categories and their IDs') with the exact resource ('trivia categories'), distinguishing it from siblings like 'get_category_stats' (which focuses on statistics) and 'get_questions' (which retrieves questions). It uses precise verbs and specifies the output format (IDs included).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by stating it lists 'all available' categories, suggesting it's for retrieving a comprehensive list. However, it does not explicitly state when to use this tool versus alternatives like 'get_category_stats' (e.g., for detailed stats) or 'get_questions' (e.g., for fetching questions), nor does it mention any prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
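The claim file is just static JSON at a fixed well-known path. A minimal sketch of generating it into a web root before deployment; the web-root directory and email are placeholders:

```python
import json
import tempfile
from pathlib import Path

def write_claim_file(web_root: str, email: str) -> Path:
    """Write /.well-known/glama.json under the given web root."""
    target = Path(web_root) / ".well-known" / "glama.json"
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(json.dumps({
        "$schema": "https://glama.ai/mcp/schemas/connector.json",
        "maintainers": [{"email": email}],
    }, indent=2))
    return target

# Example: write into a scratch directory and read it back.
root = tempfile.mkdtemp()
path = write_claim_file(root, "your-email@example.com")
data = json.loads(path.read_text())
print(path.name)                        # glama.json
print(data["maintainers"][0]["email"])  # your-email@example.com
```

Whatever serves your domain must expose this file at `https://<your-domain>/.well-known/glama.json` for Glama's verifier to find it.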
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!