
Server Details

MCP server for the RPG-Schema.org definition, supporting the use of RPG-Schemas in TTRPG manuals.

Status: Healthy
Transport: Streamable HTTP



Available Tools

6 tools
add_ontology (Grade: B)

Register a new baseline ontology by providing its TTL content. The ontology is saved to the catalog and becomes available for inspection, search, and composition.

Parameters

- slug (required)
- tags (optional)
- label (optional)
- description (optional)
- ttl_content (required)
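To make the input shape concrete, here is a hypothetical arguments payload for add_ontology. The slug, tags, label, and Turtle content are invented for illustration; per the schema, only slug and ttl_content are required.

```python
# Hypothetical example payload for the add_ontology tool; the slug, tags,
# label, and Turtle content below are invented for illustration.
ttl_content = """@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/rpg#> .

ex:Character a rdfs:Class ;
    rdfs:label "Character" .
"""

arguments = {
    "slug": "example-rpg-core",   # required: catalog identifier
    "ttl_content": ttl_content,   # required: the ontology serialized as Turtle
    "label": "Example RPG Core",  # optional
    "tags": ["rpg", "example"],   # optional
    "description": "Minimal illustrative ontology.",  # optional
}

# Per the schema, only slug and ttl_content are required.
assert {"slug", "ttl_content"} <= arguments.keys()
```

Note the review's point below: nothing in the schema or description says what happens if the slug already exists, so an agent cannot tell from this payload alone whether a re-registration overwrites or errors.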

Output Schema

No output parameters.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It adequately discloses persistence ('saved to the catalog') and availability side effects, but omits critical mutation details: error behavior for duplicate slugs, validation failures for invalid TTL, idempotency, or destructive overwrite risks.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero redundancy. It front-loads the action and input requirement in the first sentence, and covers the outcome/availability in the second.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description appropriately omits return value details. However, with zero schema parameter descriptions and no annotations, the tool requires more behavioral context (error cases, validation) and parameter semantics to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to compensate. While it implicitly explains 'ttl_content' via 'providing its TTL content,' it completely fails to document the other four parameters: 'slug' (unique identifier?), 'tags' (filtering purpose?), 'label' (display name?), and 'description' (internal vs. ontology description?).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('Register') and resource ('ontology') along with the input method ('TTL content'), making the core action clear. However, it does not explicitly differentiate from the 'compose' sibling tool, which could also involve ontology creation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies a workflow by stating the ontology 'becomes available for inspection, search, and composition,' suggesting it should be used before those operations. However, it lacks explicit guidance on when to use this versus the 'compose' tool, or prerequisites like TTL format validation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compose (Grade: A)

Compose multiple baseline ontologies into a single schema card. Pass a list of ontology slugs. Returns a merged schema card ready to use as the starting point for extraction.

Parameters

- slugs (required)
- target_namespace (optional)
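As a sketch of the input shape, a hypothetical arguments payload for compose might look like this. The slugs and namespace are invented for the example; only slugs is required.

```python
# Hypothetical arguments for the compose tool; the slugs are assumed to name
# previously registered baseline ontologies.
arguments = {
    "slugs": ["example-rpg-core", "example-srd"],          # required: list of ontology slugs
    "target_namespace": "http://example.org/rpg-merged#",  # optional
}

# The schema requires a list of slugs; the description does not say what
# happens if one of them does not exist in the catalog.
assert isinstance(arguments["slugs"], list) and len(arguments["slugs"]) >= 1
```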

Output Schema

No output parameters.

Behavior: 3/5

With no annotations provided, the description carries the full burden. It clarifies that the tool returns a 'merged schema card' and mentions the domain concept of 'baseline ontologies,' but fails to disclose error handling (what happens if slugs are invalid?), persistence (is the result saved or just returned?), or side effects.

Conciseness: 5/5

The description consists of three tightly crafted sentences with zero redundancy. Each sentence earns its place: purpose definition, input instruction, and output characterization. It is appropriately front-loaded with the core action.

Completeness: 3/5

Given the presence of an output schema (so return values needn't be detailed) and the tool's moderate complexity, the description covers the core operation but has significant gaps. The undocumented 'target_namespace' parameter and lack of error context prevent a higher score despite the good output schema.

Parameters: 3/5

Schema description coverage is 0%, so the description must compensate. It successfully explains 'slugs' as 'a list of ontology slugs,' adding domain meaning. However, it completely omits the 'target_namespace' parameter, leaving its purpose (despite having a default value) undocumented. This partial coverage is minimally adequate.

Purpose: 4/5

The description clearly states the tool composes/merges 'multiple baseline ontologies into a single schema card,' providing a specific verb and resource. It implicitly distinguishes from siblings like 'add_ontology' (creation) and 'inspect_ontology' (viewing) by describing a merge operation, though it doesn't explicitly contrast with them.

Usage Guidelines: 3/5

The description provides implied usage context by stating the output is 'ready to use as the starting point for extraction,' hinting at when to use it. However, it lacks explicit guidance on when to choose this over 'add_ontology' or 'inspect_ontology,' and doesn't mention prerequisites like ensuring the slugs exist first.

inspect_ontology (Grade: B)

Inspect a baseline ontology: returns all classes, datatype properties, and object properties in schema-card format.

Parameters

- slug (required)

Output Schema

No output parameters.

Behavior: 3/5

With no annotations provided, the description carries the full burden. It successfully discloses the return format ('schema-card format') and read-only nature ('returns'), but fails to mention behavioral risks like large payload sizes when returning 'all' properties or potential latency for complex ontologies.

Conciseness: 5/5

Single sentence, 15 words, front-loaded with the verb. Zero redundancy; every clause provides essential information about the operation or return value.

Completeness: 3/5

Given the tool has an output schema (removing the need to detail return values) and only one parameter, the description covers the core operation. However, the lack of parameter documentation and usage guidelines leaves critical gaps for an agent deciding between this and the 'search_*' siblings.

Parameters: 2/5

Schema coverage is 0% (the 'slug' parameter is undocumented in the schema). The description implies the slug refers to a 'baseline ontology' but does not explicitly define what the slug represents (identifier, name, URI) or provide examples, leaving the parameter semantics under-specified.

Purpose: 4/5

Clear verb ('Inspect') and resource ('ontology'), specifying it returns classes, datatype properties, and object properties. The term 'baseline' helps distinguish from 'add_ontology' and 'list_ontologies' siblings, though 'inspect' is slightly less precise than 'retrieve'.

Usage Guidelines: 2/5

No explicit guidance on when to use this versus siblings like 'search_classes' or 'search_properties'. While the description implies a full schema dump by listing all return types, it doesn't warn when to prefer filtered searches over this comprehensive inspection.

list_ontologies (Grade: A)

List all registered baseline ontologies in the catalog.

Parameters

No parameters.

Output Schema

No output parameters.

Behavior: 3/5

No annotations are provided, so the description carries the full burden. The verb 'List' implies a read-only, non-destructive operation, providing basic safety context. However, it omits pagination behavior, filtering capabilities (if any), or the specific meaning of 'baseline' ontologies.

Conciseness: 5/5

Single, front-loaded sentence where every word serves a purpose. The structure (Verb + Scope + Resource + Location) is optimal for a zero-parameter tool description. No redundant or filler text.

Completeness: 4/5

Given the low complexity (zero parameters) and existence of an output schema (covering return structure), the description is appropriately complete. It could improve by clarifying if 'all' implies unfiltered pagination or mentioning the relationship to the catalog scope.

Parameters: 4/5

The input schema contains zero parameters. Per evaluation rules, zero-parameter tools receive a baseline score of 4. The description correctly implies no filtering is applied ('all'), consistent with the empty schema.

Purpose: 5/5

The description provides a specific verb ('List'), resource ('registered baseline ontologies'), and scope ('in the catalog'). It clearly distinguishes from siblings: 'add_ontology' (add vs list), 'inspect_ontology' (list all vs inspect specific), and search tools (ontologies vs classes/properties).

Usage Guidelines: 3/5

Usage is implied by the verb 'List', suggesting this is for discovery and catalog overview rather than detailed inspection (inspect_ontology) or modification (add_ontology). However, it lacks explicit when-to-use guidance or warnings about potentially large result sets when using 'all'.

search_classes (Grade: A)

Search for classes across all registered ontologies. Matches class name or description (case-insensitive substring).
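The matching rule stated in the description (case-insensitive substring match on class name or description) can be sketched as follows. The class records and field names are assumptions made for illustration, not the server's actual data model.

```python
def matches(query: str, cls: dict) -> bool:
    # Case-insensitive substring match on class name or description,
    # as the tool description states.
    q = query.lower()
    return q in cls.get("name", "").lower() or q in cls.get("description", "").lower()

# Illustrative class records (invented for the example).
classes = [
    {"name": "Character", "description": "A playable or non-playable figure."},
    {"name": "Item", "description": "An object a character can carry."},
    {"name": "Location", "description": "A place in the game world."},
]

hits = [c["name"] for c in classes if matches("char", c)]
# "char" matches "Character" by name and "Item" via its description.
```

The sketch shows why substring matching can surface more results than an agent expects: a query aimed at class names also hits any class whose description happens to contain the substring.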

Parameters

- query (required)

Output Schema

No output parameters.

Behavior: 3/5

With no annotations provided, the description carries the full burden. It successfully discloses the matching algorithm (case-insensitive substring matching on name/description fields) and search scope (all registered ontologies). However, it fails to disclose safety characteristics (read-only vs destructive), rate limits, or pagination behavior that would help the agent understand operational constraints.

Conciseness: 5/5

The description consists of two efficient sentences with zero waste. It is appropriately front-loaded with the primary action and scope, followed by specific matching behavior details. Every sentence earns its place without redundant phrasing.

Completeness: 4/5

Given this is a simple single-parameter search tool with an output schema present (so return values need not be described), the description covers the essential usage context. However, the description could have compensated more fully for the complete lack of schema parameter descriptions.

Parameters: 3/5

Schema description coverage is 0%, requiring the description to compensate. It partially compensates by explaining how the query is interpreted (matches name/description via case-insensitive substring), but does not explicitly reference the 'query' parameter, provide format constraints, or give usage examples that would fully document the parameter semantics.

Purpose: 5/5

The description uses a specific verb ('Search') with a clear resource ('classes') and scope ('across all registered ontologies'). It effectively distinguishes from the sibling tool 'search_properties' by explicitly targeting 'classes', and implies a broader scope than 'inspect_ontology' by searching across all ontologies rather than inspecting a specific one.

Usage Guidelines: 3/5

The description explains the matching behavior (case-insensitive substring on name/description), which provides implicit guidance on when to use the tool. However, it lacks explicit guidance on when to use this versus 'inspect_ontology' for detailed class information or versus 'search_properties' for property searches, and does not mention any prerequisites.

search_properties (Grade: A)

Search for properties (datatype + object) across all ontologies. Matches property name, domain, range, or description.
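The same matching idea can be sketched for properties, now across four fields. Case-insensitivity is an assumption here (carried over from search_classes) since this description does not state it, and the property records are invented for illustration.

```python
def matches(query: str, prop: dict) -> bool:
    # Substring match on name, domain, range, or description, per the
    # tool description. Case-insensitivity is assumed, not documented.
    q = query.lower()
    return any(q in prop.get(field, "").lower()
               for field in ("name", "domain", "range", "description"))

# Illustrative property records (invented for the example).
props = [
    {"name": "hitPoints", "domain": "Character", "range": "integer",
     "description": "Current health of a character."},
    {"name": "wields", "domain": "Character", "range": "Weapon",
     "description": "The weapon a character has equipped."},
]

hits = [p["name"] for p in props if matches("weapon", p)]
# "weapon" matches "wields" via its range and description, not its name.
```

Matching on domain and range means a query for a class name (like "Weapon") can also be a useful way to find every property attached to that class.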

Parameters

- query (required)

Output Schema

No output parameters.

Behavior: 3/5

No annotations are provided, so the description carries the full burden. It adds valuable behavioral context by specifying which fields are matched (name, domain, range, description). However, it lacks disclosure of the safety profile (though implied by 'Search'), rate limits, pagination, or whether matching is case-sensitive or fuzzy.

Conciseness: 5/5

Two sentences, zero waste. The first sentence establishes purpose and scope; the second explains matching behavior. Front-loaded with critical information. No redundant or filler text.

Completeness: 4/5

Appropriate for the tool's complexity: a single-parameter tool with an output schema, so the description covers search semantics adequately without needing to explain return values. Minor gap: it could mention whether results are ranked or filtered by relevance.

Parameters: 4/5

Schema coverage is 0% (the query parameter is undocumented), but the description compensates effectively by explaining what the query matches against (name, domain, range, description). It would benefit from query syntax hints (e.g., wildcard support), but successfully adds meaning beyond the bare schema.

Purpose: 5/5

Excellent specificity: the verb 'Search' plus the resource 'properties' and the scope 'across all ontologies'. Clarifying that 'properties' includes both 'datatype + object' types distinguishes it from the sibling 'search_classes'. The first sentence establishes clear intent.

Usage Guidelines: 3/5

Implies usage through the scope description ('across all ontologies' suggests a broad search, versus inspect_ontology's targeted inspection of a single ontology), but lacks explicit when-to-use guidance or comparison to siblings like 'search_classes'. No mention of when results might be too broad or of alternatives.

