REG-Vault
Server Details
Open retro-gaming metadata catalog — 91,000 games, 99 systems. Box art, manuals, screenshots, gameplay previews. Free MCP + REST API.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 3.7/5 across all 5 tools.
Each tool has a clear, distinct purpose: fetching games, counting by letter, global stats, listing systems, and searching. No overlapping functionality.
All tool names follow a consistent verb_noun pattern (get_game, get_letter_counts, get_stats, list_systems, search_games), making them predictable.
Five tools cover the core functionality of a game database catalogue: browsing systems, searching, and retrieving details, with statistics support. Well-scoped.
Covers browsing (list_systems), searching (search_games, get_letter_counts), and retrieval (get_game). Missing update/add/delete tools, but as a read-only vault this is appropriate.
Available Tools
5 tools

get_game (Grade A)
Fetch a single game by system + canonical slug OR ROM hash. Returns title, year, publisher, developer, genre, description, assets.
| Name | Required | Description | Default |
|---|---|---|---|
| system | Yes | | |
| slug_or_hash | Yes | Canonical slug (e.g. 'sonic-the-hedgehog-2') or hex ROM hash. | |
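The wire format for invoking this tool is MCP's JSON-RPC `tools/call` method. A minimal sketch of the request envelope; the slug is taken from the parameter table above, while the ROM hash value is purely illustrative:

```python
import json

def build_get_game_request(system: str, slug_or_hash: str, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call request for get_game."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "get_game",
            "arguments": {"system": system, "slug_or_hash": slug_or_hash},
        },
    })

# Identify a game by canonical slug...
by_slug = build_get_game_request("megadrive", "sonic-the-hedgehog-2")
# ...or by hex ROM hash (hypothetical value).
by_hash = build_get_game_request("megadrive", "a9f1c0de")
```

Both calls hit the same tool; the server decides whether `slug_or_hash` is a slug or a hash, so the client does not need to distinguish them.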
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description partially compensates by explaining the tool is read-only (Fetch) and returns specific fields. However, it does not disclose potential error conditions (e.g., if slug/hash not found) or performance traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with clear structure: action + identifiers + output fields. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (2 required params, no output schema), the description is adequate but could mention error handling or uniqueness assumptions. It covers the basic functionality.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers 50% of parameters with descriptions: 'slug_or_hash' has a description explaining its format. The description adds no further parameter details beyond the schema. For two parameters, schema provides adequate coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches a single game using two identification methods (system+slug or ROM hash) and lists returned fields. The verb 'Fetch' combined with specific identification methods distinguishes it from sibling tools like list_systems or search_games.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing a single game's details via slug or hash, but does not explicitly state when not to use it (e.g., for bulk listing use list_systems or search_games). No exclusion conditions or alternatives are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_letter_counts (Grade A)
Per-letter game count for a system (A-Z + 0-9 + other). Useful for alphabet navigation.
| Name | Required | Description | Default |
|---|---|---|---|
| system | Yes | | |
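The per-letter counts lend themselves to building an alphabet navigation bar. A sketch assuming the tool returns a mapping of bucket name to count, with `"0-9"` and `"other"` as the non-letter buckets (the actual response shape is not documented):

```python
import string

def nav_buckets(letter_counts: dict) -> list:
    """Return the A-Z / 0-9 / other buckets that actually contain
    games, in display order, for an alphabet navigation bar."""
    order = list(string.ascii_uppercase) + ["0-9", "other"]
    return [bucket for bucket in order if letter_counts.get(bucket, 0) > 0]

# Hypothetical get_letter_counts response for a small system:
counts = {"A": 12, "B": 0, "S": 40, "0-9": 3, "other": 0}
print(nav_buckets(counts))  # ['A', 'S', '0-9']
```

Skipping empty buckets avoids rendering dead links in the navigation UI.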
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must carry behavioral info. Description explains the tool returns counts per character, which is clear. However, it doesn't mention read-only nature, performance implications, or what happens if system is invalid. With no annotations, this is adequate but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. Front-loaded with the key purpose. Perfectly concise for the complexity level.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has one parameter, no output schema, and no nested objects, the description is nearly complete. It explains the return type (counts per character) and a use case. Minor gap: no mention of error behavior if system not found.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single parameter 'system' is required but not described in schema (coverage 0%). Description mentions 'for a system' but doesn't specify format, examples, or constraints beyond being a string. With only one parameter and implicit meaning, it is minimally adequate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns per-letter game counts for a system, specifying the character set (A-Z, 0-9, other). The verb 'get' and noun 'letter counts' are specific. However, it does not differentiate from siblings like get_game or list_systems, which perform different operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Useful for alphabet navigation, but no explicit when-to-use or when-not-to-use guidance. No comparison to siblings like search_games for filtering. The description implies a use case but doesn't help decide between this and other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_stats (Grade A)
Global REG-Vault catalogue stats: total games, systems, assets coverage.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so description bears full burden. It states that it returns 'stats' but does not describe the format, whether it requires authentication, rate limits, or if it's read-only. With no annotations, a score of 3 is adequate but lacks detail.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is concise and front-loaded. It communicates the tool's purpose and scope with minimal words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters, no required inputs, and no output schema or annotations, the description is relatively complete for a simple stats tool. However, it does not specify the response format or structure, which may be needed for an agent to parse results effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are zero parameters, so no additional parameter description is needed. The schema coverage is 100%, and the description adds value by explaining the scope ('Global REG-Vault catalogue stats') and categories returned.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides 'Global REG-Vault catalogue stats' and lists three specific coverage areas: total games, systems, assets coverage. This distinguishes it from sibling tools like get_game (single game) and list_systems (list systems).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies this tool for getting aggregate stats, but does not explicitly say when to use it vs siblings or when not to use it. It gives no alternatives or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_systems (Grade A)
List all retro-gaming systems in REG-Vault with game counts.
No parameters.
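Since the catalogue is read-only, the two zero-parameter tools can be cross-checked against each other: the per-system counts from `list_systems` should sum to the totals reported by `get_stats`. A sketch under assumed response shapes; neither tool documents an output schema, so the field names `game_count`, `total_systems`, and `total_games` are hypothetical:

```python
def check_totals(systems: list, stats: dict) -> bool:
    """Cross-check list_systems output against get_stats aggregates.
    Both response shapes here are assumptions, not documented schemas."""
    return (len(systems) == stats["total_systems"]
            and sum(s["game_count"] for s in systems) == stats["total_games"])

# Hypothetical responses:
systems = [
    {"slug": "megadrive", "game_count": 900},
    {"slug": "snes", "game_count": 1100},
]
stats = {"total_systems": 2, "total_games": 2000}
print(check_totals(systems, stats))  # True
```

A mismatch would suggest the agent's cached system list is stale and should be refreshed.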
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description shares the burden. It mentions 'List' (read-only) and 'with game counts' (additional data), but doesn't disclose rate limits, pagination, or auth needs. Adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence, no wasted words. It front-loads the verb and resource, and adds essential context (REG-Vault, game counts).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter list tool with no output schema, the description is nearly complete. It tells the agent what the tool returns and its scope. Missing details like pagination or ordering are minor.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters and is 100% covered (since it's empty). The description adds value by explaining what data is returned (systems with game counts), which goes beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List', the resource 'retro-gaming systems', and the scope 'in REG-Vault with game counts'. It distinguishes itself from siblings which are about individual games, letter counts, stats, or search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies this tool is for a broad listing, but does not explicitly state when to use it versus siblings like search_games (which may filter). However, the sibling names suggest distinct use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_games (Grade B)
Search games by title across one or all systems. Returns up to `limit` results.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | Title substring (case-insensitive) | |
| system | No | Optional system slug (megadrive, snes, psx, ...) | |
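Because `system` and `limit` are optional, a client-side helper can omit them entirely so that server-side defaults apply. A minimal sketch; `limit`'s default value is not documented, so it is left unset unless the caller asks for a specific cap:

```python
def build_search_args(query: str, system: str = None, limit: int = None) -> dict:
    """Assemble search_games arguments, omitting optional parameters
    so the server's (undocumented) defaults apply."""
    args = {"query": query}  # query is the only required parameter
    if system is not None:
        args["system"] = system
    if limit is not None:
        args["limit"] = limit
    return args

# Title substring search across all systems:
print(build_search_args("sonic"))  # {'query': 'sonic'}
# Scoped to one system with an explicit result cap:
print(build_search_args("mario", system="snes", limit=5))
```

Omitting unset optionals keeps the payload small and avoids accidentally overriding server defaults with guessed values.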
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries the burden. It mentions case-insensitive search and returning up to `limit` results, but does not disclose other behaviors like pagination, sorting, or what happens if no results. Adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is short and front-loaded, with a single sentence that conveys core purpose. Efficient, but could be slightly more informative without being verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool complexity (3 params, no output schema), the description is minimally complete. It explains the search scope and result limits, but lacks info on return format, sorting, or error cases. Adequate for a simple search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 67% with two parameters described (query and system have descriptions, limit has default but no description). The description adds 'across one or all systems' context and mentions `limit`, but does not elaborate on syntax or format beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches games by title across one or all systems and returns results up to `limit`. However, it does not differentiate itself from sibling tools like get_game or list_systems, but the purpose is clear enough.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching by title, but lacks explicit guidance on when to use this vs alternatives like get_game (which likely fetches a single game) or list_systems. No when-not-to-use or prerequisites are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!