argo-mcp
Server Details
MCP server for Argo RPG Platform — connects AI assistants to campaign data via OAuth2
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: Argo-RPG-Platform/MCP
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across all 61 tools (61 of 61 scored). Lowest: 2.4/5.
Each tool targets a distinct resource and action, with clear domain separation (campaign, guild, mnemon types, forum, friends). Even with 61 tools, the descriptions and naming make it easy to distinguish between them.
All tools use a consistent verb_noun pattern in snake_case (e.g., create_campaign, list_mnemons, update_session). The verbs are uniform and descriptive, with no mixing of styles.
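The verb_noun snake_case convention described above can be checked mechanically. A minimal sketch, testing a few tool names taken from this listing against a simple snake_case pattern:

```python
import re

# Matches lowercase snake_case names with at least two segments (verb_noun).
SNAKE_CASE = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)+$")

tool_names = ["create_campaign", "list_mnemons", "update_session",
              "accept_friend_request", "add_guild_calendar_event"]
all_snake = all(SNAKE_CASE.fullmatch(name) for name in tool_names)
print(all_snake)  # True
```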
With 61 tools, the count is well above the recommended range for a coherent server. While the domain is complex, this many tools can overwhelm an agent and suggest some could be consolidated.
The tool surface has significant gaps, notably missing delete operations for core entities like mnemon entries, sessions, and guild campaigns. This will likely cause agent failures when users expect to remove resources.
Available Tools
61 tools
accept_friend_request (Grade A)
Accept an incoming friend request from the given user.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | Argo user ID of the counterparty. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| status | Yes | |
| senderId | Yes | |
| createdAt | No | |
| updatedAt | No | |
| receiverId | Yes | |
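Given the input schema above, a call to this tool uses the standard MCP `tools/call` JSON-RPC envelope. The sketch below builds such a request; the user ID is a hypothetical placeholder, not a value from this listing:

```python
import json

# Hypothetical user ID; substitute the Argo user ID of the request sender.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "accept_friend_request",
        "arguments": {"userId": "user-123"},  # the only required parameter
    },
}
payload = json.dumps(request)
```

On success the server's result should carry the output schema shown above (id, status, senderId, receiverId, and optional timestamps).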
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate it is a write operation (readOnlyHint=false) and non-destructive (destructiveHint=false). The description adds no further behavioral details, such as idempotency, error conditions, or side effects. Given annotations, the description provides minimal extra context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is concise and front-loaded. It contains no superfluous information and is appropriately sized for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, schema coverage, output schema presence, and annotations, the description is largely complete. It could mention that the request must exist and be from that user, but this is implied by the context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% and the parameter description ('Argo user ID of the counterparty') is clear. The tool description adds no additional meaning beyond what the schema already provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('accept') and the resource ('incoming friend request'). It distinguishes from sibling tools like cancel_friend_request, reject_friend_request, and send_friend_request by specifying the exact action on received requests.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It does not specify that this should be used only for incoming requests that exist, nor does it mention reject_friend_request or cancel_friend_request as alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
add_campaign_to_guild (Grade A)
Add a campaign to a guild. Any active member of the guild can do this; the calling user must be the campaign's GM (enforced server-side).
| Name | Required | Description | Default |
|---|---|---|---|
| guildId | Yes | Guild ID. | |
| campaignId | Yes | Campaign ID to add to the guild. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| guildId | Yes | |
| success | Yes | |
| campaignId | Yes | |
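Since both parameters are required and a valid payload can still be rejected server-side (the caller must be the campaign's GM), a client may want to validate arguments before calling. A minimal sketch, with hypothetical IDs:

```python
def build_add_campaign_to_guild_args(guild_id: str, campaign_id: str) -> dict:
    """Assemble arguments for the add_campaign_to_guild tool.

    Both IDs are required by the schema. The GM check is enforced
    server-side, so passing this validation does not guarantee success.
    """
    if not guild_id or not campaign_id:
        raise ValueError("guildId and campaignId are both required")
    return {"guildId": guild_id, "campaignId": campaign_id}

# Hypothetical IDs for illustration only.
args = build_add_campaign_to_guild_args("guild-42", "camp-7")
```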
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds authorization details beyond annotations (readOnlyHint=false, destructiveHint=false), explaining that the user must be the campaign's GM and that any active member can initiate the action. This provides useful behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—two sentences that convey all necessary information without any wasted words. It is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, annotations, and parameter coverage, the description provides complete context including purpose, authorization requirements, and behavioral traits. No gaps remain for an agent to understand correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides descriptions for both parameters with 100% coverage. The tool description does not add additional meaning or context about the parameters, so it meets the baseline but does not exceed it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Add a campaign to a guild') with a specific verb and resource, distinguishing it from sibling tools like 'add_co_gm' or 'add_guild_calendar_event'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies that any active member can perform the action and that the caller must be the campaign's GM, providing clear context. It does not explicitly mention when not to use it or suggest alternative tools, but the constraints are well-stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
add_co_gm (Grade A)
Add a user as an assistant GM (co-GM) of a campaign. Owner-only — the calling user must be the campaign's primary GM. Maximum 5 co-GMs per campaign.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | Argo user ID of the user to promote to co-GM. Must be an existing user. | |
| campaignId | Yes | Campaign ID. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| ruleSystem | No | |
| campaignName | Yes | |
| gameMasterId | Yes | |
| gameSystemSlug | No | |
| coGameMasterIds | No | |
| campaignDescription | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate mutation (readOnlyHint=false) and non-destructiveness. Description adds ownership requirement and limit, providing behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, no unnecessary words. Purpose and constraints are front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers the action, ownership, and limit. With an output schema present, return values are handled. Could mention error conditions or success behavior, but adequate for a simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with parameter descriptions. Description adds no new information about parameters beyond what is in the schema, meeting the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the action 'Add a user as an assistant GM (co-GM) of a campaign', distinguishing it from sibling tools like list_co_gms and remove_co_gm. Includes resource and verb.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Specifies owner-only condition and maximum 5 co-GMs, giving clear context. Does not explicitly state when not to use or provide alternatives, but the constraints are clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
add_guild_calendar_event (Grade A)
Add a new event to the guild's shared calendar. Owner/Admin only. startDateTime / endDateTime are ISO-8601 (e.g. 2026-06-12T19:00:00).
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | Event title. | |
| guildId | Yes | Guild ID. | |
| description | No | Optional event description. | |
| endDateTime | No | Event end, ISO-8601 — optional. | |
| startDateTime | Yes | Event start, ISO-8601 (e.g. 2026-06-12T19:00:00). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
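The description's ISO-8601 example carries no timezone offset, which is exactly the shape Python's `datetime.isoformat()` produces for a naive datetime. A sketch of assembling the arguments; the guild ID and event details are hypothetical:

```python
from datetime import datetime

# Naive datetimes; isoformat() yields the offset-free shape the
# description shows (e.g. 2026-06-12T19:00:00).
start = datetime(2026, 6, 12, 19, 0, 0)
end = datetime(2026, 6, 12, 22, 0, 0)

arguments = {
    "guildId": "guild-42",                      # hypothetical guild ID
    "title": "Session Zero",
    "description": "Character creation night",  # optional
    "startDateTime": start.isoformat(),
    "endDateTime": end.isoformat(),             # optional
}
```

How the server interprets the missing timezone (guild-local vs. UTC) is not stated in the listing, so that remains an open question for callers.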
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations show readOnlyHint=false and destructiveHint=false. Description adds role restriction and date format, but omits potential side effects like notifications or overwriting. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with action and restriction. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With output schema present, description covers purpose, role, and date format. Could mention time zone handling, but adequate for a creation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, baseline is 3. Description repeats ISO-8601 format already in schema and notes endDateTime optional, adding minimal new meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Add a new event to the guild's shared calendar' specifying verb and resource. Distinguishes from siblings, as no other tool creates calendar events. Also notes owner/admin restriction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Owner/Admin only', indicating who can use it. Provides ISO-8601 format example. However, does not mention when not to use or alternatives (none exist).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cancel_friend_request (Grade A, Destructive)
Cancel a friend request you previously sent.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | Argo user ID of the counterparty. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| status | Yes | |
| senderId | Yes | |
| createdAt | No | |
| updatedAt | No | |
| receiverId | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructiveHint=true, so the description adds minimal additional behavioral insight beyond confirming it cancels a request. 'Previously sent' adds some context, but no extra details about irreversible effects or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, clear sentence with no unnecessary words. It is well-structured and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a simple one-parameter tool with an output schema and clear annotations, the description is complete enough to use this tool correctly. No additional information is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single parameter 'userId' has a description in the schema ('Argo user ID of the counterparty'), which is sufficient. The tool description does not add further meaning beyond what the schema provides, so score is baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states 'Cancel a friend request you previously sent.' The verb 'cancel' and resource 'friend request' are specific. The sibling tools include send, accept, reject, and list requests, so this tool's purpose is clearly distinguished.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage only for requests the user previously sent, distinguishing from received requests. It does not explicitly state when not to use it or provide alternatives, but the context is clear given sibling tool names like accept_friend_request and reject_friend_request.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_archive_mnemons (Grade A)
Create Archive mnemons (archived lore that is no longer current). Players may not call this — GM/co-GM only.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | | |
| campaignId | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes | |
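Because the top-level parameters are undocumented here, the exact shape of `items` must be taken from the server's schema. A guessed payload, based only on the review's mention of a nested block schema with titles and content blocks; every field name inside `items` is an assumption, not confirmed by this listing:

```python
# All field names inside "items" are guesses; verify against the real
# input schema before calling. The campaign ID is hypothetical.
arguments = {
    "campaignId": "camp-7",
    "items": [
        {
            "title": "The Fall of Oldkeep",
            "blocks": [{"type": "text", "text": "Oldkeep fell a century ago."}],
        }
    ],
}
```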
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false, so a create operation is expected and consistent. The description adds behavioral context: these are archived (not current) mnemons and the call is restricted to GMs. This goes beyond the annotations, though it does not detail side effects (e.g., whether previous versions are overwritten) or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences, front-loading the core purpose and then adding the critical access restriction. Every word adds value; there is no fluff or repetition. The structure is clear and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of many sibling create tools, the description specifies the unique 'archive' nature and GM-only access. However, it does not explain what 'archive' means in practice (e.g., does it set a flag or move data?), nor does it mention that it requires a campaignId or that items must contain blocks. The output schema exists but is not referenced. This leaves some ambiguity about the tool's exact behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has very low description coverage (0% per context). While the nested block schema has descriptions, the top-level parameters (campaignId and items) lack descriptions. The tool description does not compensate by explaining the parameters or their roles, leaving the agent to infer from the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Create' and the resource 'Archive mnemons', with a parenthetical explanation that these are 'archived lore that is no longer current'. This immediately distinguishes it from sibling create tools like create_lore_mnemons or create_custom_mnemons by specifying the 'archive' variant and its purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly restricts usage: 'Players may not call this — GM/co-GM only.' This provides clear when-not-to-use guidance for players. However, it does not explicitly state when to use this tool versus alternatives like update_archive_mnemons or other create tools, nor does it describe prerequisites or contexts beyond the GM restriction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_campaign (Grade A)
Create a new Argo campaign. The current user becomes GM and the calling token gains read+write access to the new campaign immediately (no re-consent needed). Requires the campaign.create OAuth scope, granted at consent time.
| Name | Required | Description | Default |
|---|---|---|---|
| ruleSystem | Yes | Rule system the campaign uses. E.g. 'Dungeons & Dragons 5e', 'Pathfinder 2e', 'Forbidden Lands'. Free-form; the WebAPI derives the slug from this. | |
| description | Yes | Short description of the campaign's setting, tone, and premise. | |
| campaignName | Yes | Display name of the campaign. | |
| gameSystemSlug | No | Optional explicit slug for the public URL (e.g. 'dnd5e'). If omitted, the server derives one from ruleSystem. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| ruleSystem | No | |
| accessLevel | No | |
| campaignName | Yes | |
| gameMasterId | Yes | |
| gameSystemSlug | No | |
| coGameMasterIds | No | |
| campaignDescription | No | |
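Putting the schema together, a full `tools/call` request might look like the sketch below. All argument values are invented placeholders; `gameSystemSlug` is deliberately omitted to exercise the documented server-side derivation from `ruleSystem`:

```python
# Requires the campaign.create OAuth scope, granted at consent time.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_campaign",
        "arguments": {
            "campaignName": "Shadows over Thelindor",  # display name (made up)
            "ruleSystem": "Forbidden Lands",           # free-form; server derives slug
            "description": "Grim survival hexcrawl in a cursed wilderness.",
            # "gameSystemSlug" omitted: the server derives one from ruleSystem
        },
    },
}
```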
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description adds key behavioral details: the calling token gains immediate read+write access, no re-consent needed, and the user becomes GM. Annotations only indicate non-readonly, non-destructive, non-open-world, so description significantly enhances transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, each serving a distinct purpose: definition and behavioral/auth context. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With an output schema available, description need not explain return values. It covers all critical aspects: action, side effects, and required permissions. Sufficient for tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage of parameter descriptions. The description adds minimal extra context (e.g., 'the WebAPI derives the slug from this' for ruleSystem), so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Create a new Argo campaign', specifying the verb and resource. This distinguishes it from sibling tools like update_campaign or list_campaigns.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context: the user becomes GM and needs the campaign.create OAuth scope. It implies when to use, though it does not explicitly state when not to use or list alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_custom_mnemons (Grade A)
Create custom-typed mnemons (any free-form entry that doesn't fit the other types). Players may not call this — GM/co-GM only.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | | |
| campaignId | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=false and destructiveHint=false, indicating mutation but non-destructive. The description adds that only GM/co-GM can call it, which is useful. However, it lacks details on side effects, success/failure behavior, or rate limits. With annotations present, the description adds some value but not enough for a higher score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two short sentences that front-load the purpose and the access constraint. Every sentence is necessary and adds value, with no extraneous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is moderately complex with nested block structures, but the description does not mention prerequisites (e.g., campaign must exist), error conditions, or the 50-item limit which is in the schema. Since there is an output schema, the description does not need to explain returns, but it lacks enough context for a new agent to confidently use it in all scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% top-level parameter descriptions, but the nested 'items' property has a description in the schema. The tool description does not add any extra meaning beyond the schema for the parameters campaignId and items. Baseline 3 is appropriate given the schema coverage and lack of added semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the verb 'Create', the resource 'custom-typed mnemons', and the scope 'any free-form entry that doesn't fit the other types'. This differentiates it from sibling tools like create_location_mnemons, providing strong purpose clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states 'Players may not call this — GM/co-GM only', which gives a clear usage constraint. It also implies when to use this tool (for entries not fitting other types), but does not explicitly list when not to use it or name alternatives, which would earn a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_journal_mnemons (Grade B)
Create Journal mnemons (log of in-world events). Players may not call this — GM/co-GM only.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | | |
| campaignId | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations (readOnlyHint=false) already indicate this is not read-only. The description adds the permission constraint but provides no further behavioral details such as error states, idempotency, or what happens on duplicate entries. For a mutation tool, this is insufficient disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences that convey essential information without redundancy. It is front-loaded with the action and resource, making it efficient for an agent to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the complexity of the input schema (nested blocks with multiple types), the description provides no guidance on how to structure the items or what content is expected. The tool has an output schema, but the description does not hint at the return value. This leaves the agent with insufficient context for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has many descriptions on nested properties (e.g., title, blocks), so the description does not add significant meaning beyond the schema. The top-level parameter 'campaignId' lacks a description, and the description does not compensate for that. Baseline 3 is appropriate given the schema's richness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action ('Create'), the resource ('Journal mnemons'), and provides context ('log of in-world events'). It also distinguishes that only GM/co-GM can call, which differentiates it from other create mnemon tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a clear usage restriction (GM/co-GM only), which is helpful for deciding who should invoke the tool. However, it does not specify when to use this over other create mnemon tools (e.g., create_lore_mnemons, create_quest_mnemons), leaving a gap in guidance for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_location_mnemons (Grade A)
Create Location mnemons (places — cities, dungeons, taverns). Use create_mnemon_relationship with PARENT_OF to nest larger places under one another after creation. Players may not call this — GM/co-GM only.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | | |
| campaignId | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes | |
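The description prescribes a two-step flow: create the locations, then nest them with create_mnemon_relationship using PARENT_OF. A sketch of the two argument payloads in order; all IDs are hypothetical, and since create_mnemon_relationship is only named (not documented) in this listing, its argument names are assumptions:

```python
# Step 1: create a parent location and a child location in one batch.
# Field names inside "items" are guesses; verify against the real schema.
create_args = {
    "campaignId": "camp-7",
    "items": [
        {"title": "City of Vhal"},        # the larger place
        {"title": "The Rusted Tankard"},  # a tavern inside the city
    ],
}

# Step 2: link them. Argument names for create_mnemon_relationship are
# assumptions; the mnemon IDs would come from step 1's results.
relationship_args = {
    "campaignId": "camp-7",
    "type": "PARENT_OF",
    "sourceId": "mnemon-city-id",
    "targetId": "mnemon-tavern-id",
}
```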
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false, destructiveHint=false, consistent with a creation operation. Description adds the role restriction ('Players may not call this — GM/co-GM only'), which is beyond annotations. No other behavioral traits disclosed, but acceptable given the tool's nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, each adding value: purpose, post-creation action, and caller restriction. No fluff or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Provides adequate context for a creation tool with an output schema. Covers purpose, follow-up action, and permissions. It lacks detail on input structure, though schema descriptions partially fill the gap; summarizing the shape of the items parameter would make it more complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds no parameter meaning beyond the schema; it omits any explanation of campaignId and items. With 0% description coverage on the top-level parameters, the description should compensate but does not. The schema has some nested descriptions, but the top-level parameters remain undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Create Location mnemons' with examples (cities, dungeons, taverns), distinguishing it from sibling tools like create_npc_mnemons or create_quest_mnemons. Also specifies the allowed caller (GM/co-GM only).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use it: for creating location mnemons. It advises using create_mnemon_relationship with PARENT_OF for nesting and restricts usage to GM/co-GM. It lacks explicit when-not-to-use guidance, though that is implied by the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_lore_mnemons (Grade A)
Create Lore mnemons (world background, factions' beliefs, history). Players may not call this — GM/co-GM only.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | ||
| campaignId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false, destructiveHint=false. Description adds that only GM/co-GM can call, providing access control behavior. No other behavioral traits are disclosed, but this is a meaningful addition.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no extraneous words. Purpose and access restriction are front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the output schema, the description is too brief for a complex tool with nested items and block types. It does not cover the requirement for at least one block or the options for image blocks, leaving the agent with gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It does not mention campaignId or items, nor explain the block structure. The only hint is 'Lore mnemons', implying the items are lore entries, which is insufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool creates Lore mnemons, specifically for world background, factions' beliefs, and history. It also explicitly restricts usage to GM/co-GM, distinguishing it from other mnemons like player, npc, or quest.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description provides context that lore mnemons are for world-building and restricts to GM/co-GM, but lacks explicit guidance on when to use this vs. other create_*_mnemons tools. However, the specificity of 'world background, factions' beliefs, history' helps differentiate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_mnemon_relationship (Grade A)
Create a relationship between two mnemon entries. All 7 labels: MEMBER (NPC ∈ Faction, bidirectional), ALLY (bidirectional), ENEMY (directional), RIVAL (directional), PARENT_OF (Location hierarchy — sourceEntryId is the outer/larger place, e.g. Region → City → District → Tavern), CONTAINS (Location → NPC present there), LOCATED_IN (NPC → Location; inverse of CONTAINS). sourceEntryId is the 'from' side; targetEntryId is the 'to' side — direction matters. Call describe_mnemon_types for the full valid (sourceType, label, targetType) matrix. For faction membership prefer memberNpcEntryIds / affiliationEntryIds on the NPC itself.
| Name | Required | Description | Default |
|---|---|---|---|
| color | No | ||
| label | Yes | ||
| direction | No | ||
| campaignId | Yes | ||
| sourceEntryId | Yes | ||
| targetEntryId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| color | No | |
| label | Yes | |
| sourceId | Yes | |
| targetId | Yes | |
| direction | No | |
| relationshipId | Yes |
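The direction convention described above can be illustrated with two hypothetical payloads: CONTAINS runs Location → NPC, and LOCATED_IN is its inverse. Only the field and label names come from the tool definition; every ID is a placeholder.

```python
# CONTAINS: the Location is the 'from' side, the NPC the 'to' side.
contains = {
    "campaignId": "camp-123",
    "label": "CONTAINS",
    "sourceEntryId": "entry-tavern",    # Location (placeholder ID)
    "targetEntryId": "entry-barkeep",   # NPC (placeholder ID)
}

# LOCATED_IN is the inverse: swap source and target.
located_in = {
    "campaignId": contains["campaignId"],
    "label": "LOCATED_IN",
    "sourceEntryId": contains["targetEntryId"],  # now the NPC
    "targetEntryId": contains["sourceEntryId"],  # now the Location
}
```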
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are minimal (readOnlyHint=false, destructiveHint=false, openWorldHint=false). The description adds significant behavioral context: directionality (source is 'from', target is 'to'), label semantics (e.g., PARENT_OF for location hierarchy), and the fact that direction matters. It does not mention error conditions or uniqueness constraints, but covers the core behavior well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is dense but well-organized: purpose first, then label details, then usage tips. Every sentence contributes information. It is not overly long, though it could be slightly more concise without losing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (7 directional labels, relationship between mnemon entries), the description covers purpose, label semantics, direction, and cross-references describe_mnemon_types for valid combinations. An output schema exists, so return values are documented separately. It lacks discussion of potential constraints or errors, but is mostly complete for a tool of this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It explains sourceEntryId and targetEntryId as 'from' and 'to' sides, and fully describes the label enum with examples and direction. However, it omits the optional 'color' and 'direction' parameters, leaving them unexplained. Overall, it adds significant value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with 'Create a relationship between two mnemon entries,' which is a specific verb+resource. It then enumerates all 7 labels with directional semantics, clearly distinguishing this tool from siblings like describe_mnemon_types or update_mnemons.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context, explicitly stating when to prefer alternative fields (e.g., 'For faction membership prefer memberNpcEntryIds / affiliationEntryIds on the NPC itself') and directing users to describe_mnemon_types for valid type-label-type combos. It does not explicitly list when not to use this tool, but the guidance is strong.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_npc_mnemons (Grade A)
Create NPC mnemons (FACTION or INDIVIDUAL). npcType is REQUIRED on each item. Use memberNpcEntryIds (on FACTIONs) and affiliationEntryIds (on INDIVIDUALs) to wire membership; the server projects into MEMBER relationships. Players may not call this — GM/co-GM only.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | ||
| campaignId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes |
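The membership wiring the description calls out could look like the sketch below. Only npcType, memberNpcEntryIds, and affiliationEntryIds come from the description; the title field and all IDs are assumptions, and the server is said to project these ID lists into MEMBER relationships.

```python
# Hypothetical create_npc_mnemons arguments showing both NPC types.
args = {
    "campaignId": "camp-123",
    "items": [
        {
            "npcType": "FACTION",                # REQUIRED on each item
            "title": "The Ember Syndicate",      # assumed field
            "memberNpcEntryIds": ["entry-vex"],  # members, set on the FACTION
        },
        {
            "npcType": "INDIVIDUAL",
            "title": "Vex",
            "affiliationEntryIds": ["entry-syndicate"],  # set on the INDIVIDUAL
        },
    ],
}
```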
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate a non-readonly, non-destructive write operation. The description adds that the 'server projects into MEMBER relationships,' revealing an important behavioral detail beyond annotations. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, front-loading the core purpose and immediately following with crucial usage and authorization info. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (two NPC types, numerous parameters, authorization constraints), the description covers key aspects: required per-item field (npcType), relationship wiring, and access restriction. The presence of an output schema further reduces the need to describe return values, making the description complete for agent usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already contains detailed descriptions on nested item fields, though the top-level parameters have none. The description adds minimal extra meaning by clarifying that memberNpcEntryIds and affiliationEntryIds are for 'wiring membership,' slightly extending the schema. With 0% top-level description coverage, the baseline is 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates NPC mnemons of type FACTION or INDIVIDUAL, distinguishing it from other create_* siblings by specifying the entity type. It uses a specific verb (Create) and resource (NPC mnemons), making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context on when to use memberNpcEntryIds vs affiliationEntryIds for wiring relationships, and explicitly states 'Players may not call this — GM/co-GM only,' guiding authorization. However, it does not explicitly compare against alternatives like other create tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_player_mnemons (Grade A)
Create Player mnemons (party root, character notes, party notes). For playerKind=CHARACTER, supply parentEntryId (the PARTY mnemon), partyId (CampaignParty.id), and characterId (SessionCharacter id) or the entry will be auto-detached. Players with campaign.write may call this for a party they belong to; GMs may call for any party.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | ||
| campaignId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes |
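The CHARACTER case the description warns about can be sketched as follows. The three accompanying IDs (parentEntryId, partyId, characterId) come straight from the description; the ID values themselves are placeholders, and omitting any of them reportedly causes the entry to be auto-detached.

```python
# Hypothetical item for playerKind=CHARACTER with the required IDs.
character_item = {
    "playerKind": "CHARACTER",
    "parentEntryId": "entry-party-root",  # the PARTY mnemon
    "partyId": "party-7",                 # CampaignParty.id
    "characterId": "char-42",             # SessionCharacter id
}
args = {"campaignId": "camp-123", "items": [character_item]}
```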
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations show readOnlyHint=false and destructiveHint=false, and the description is consistent, noting the tool creates entries. It adds behavioral context such as auto-detachment if required IDs are missing, which goes beyond the annotation hints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with the core purpose, then critical usage details. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, the description covers purpose, key parameters, permissions, and auto-detachment behavior. It gives an agent enough to determine when and how to invoke the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Although the input schema has detailed parameter descriptions (covering most fields), the description adds value by explaining the conditions under which parameters are needed (e.g., 'For playerKind=CHARACTER, supply parentEntryId...'), clarifying usage beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Create Player mnemons (party root, character notes, party notes).' It specifies the verb 'create' and the resource 'Player mnemons,' distinguishing it from sibling mnemon creation tools like create_custom_mnemons or create_journal_mnemons.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives explicit guidance: for playerKind=CHARACTER, supply parentEntryId, partyId, and characterId. It also states permissions: 'Players with campaign.write may call this for a party they belong to; GMs may call for any party.' This provides clear context for when to use the tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_quest_mnemons (Grade A)
Create Quest mnemons. questStatus is one of active|completed|failed. Players may not call this — GM/co-GM only.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | ||
| campaignId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes |
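A small client-side guard for the questStatus enum named in the description might look like this; the title field is an assumption, as the item shape is not documented here.

```python
# The three values the description names for questStatus.
VALID_QUEST_STATUSES = {"active", "completed", "failed"}

item = {"title": "Recover the Sunken Bell", "questStatus": "active"}
if item["questStatus"] not in VALID_QUEST_STATUSES:
    raise ValueError(f"questStatus must be one of {VALID_QUEST_STATUSES}")
```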
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate a write operation (readOnlyHint=false) and non-destructive (destructiveHint=false). The description adds the GM-only authorization detail. No other behavioral traits like side effects or conflict handling are disclosed, but the basics are covered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of three tightly written sentences with no superfluous information. It is front-loaded with the primary action and immediately provides key constraints.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of the input schema (nested items, blocks, many optional fields), the description is too sparse. It leaves out an overview of the items array structure and content blocks, requiring the agent to rely heavily on the schema alone.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has high description coverage (>80%) with many field descriptions. The tool description only mentions questStatus values, which are already in the schema. Thus, it adds minimal semantic value beyond the schema, meeting the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Create Quest mnemons', specifying the verb and resource. It distinguishes from other create tools by focusing on quest mnemon types, and includes the questStatus enum and access restriction, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Players may not call this — GM/co-GM only', providing a clear when-not-to-use condition. However, it does not give guidance on when to use this tool versus other create mnemon tools (e.g., create_archive_mnemons), which are siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_session (Grade A)
Schedule a campaign session. Provide an ISO-8601 startAt; endAt is optional. Useful for laying out planned arcs or recurring play nights.
| Name | Required | Description | Default |
|---|---|---|---|
| endAt | No | Session end time as an ISO-8601 instant. | |
| title | Yes | Session title (e.g. 'Session 12: The Dragon's Lair'). | |
| startAt | Yes | Session start time as an ISO-8601 instant (e.g. '2026-06-01T19:00:00Z'). | |
| campaignId | Yes | Campaign ID. | |
| description | No | Optional session description / GM notes. | |
| invitedUserIds | No | User IDs to invite (must be active campaign members). |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| endAt | No | |
| title | Yes | |
| guildId | No | |
| startAt | Yes | |
| createdAt | No | |
| updatedAt | No | |
| campaignId | Yes | |
| description | No | |
| invitedUserIds | No | |
| createdByUserId | No | |
| invitedPartyIds | No | |
| attendanceReplies | No |
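Building the startAt value in the ISO-8601 instant form the schema shows ('2026-06-01T19:00:00Z') is the only fiddly part of this call; a sketch, with the campaign ID as a placeholder:

```python
from datetime import datetime, timezone

# Format a UTC instant as the 'Z'-suffixed ISO-8601 string the
# schema's example uses.
start = datetime(2026, 6, 1, 19, 0, tzinfo=timezone.utc)
args = {
    "campaignId": "camp-123",
    "title": "Session 12: The Dragon's Lair",
    "startAt": start.isoformat().replace("+00:00", "Z"),
}
```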
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no helpful annotations (all false), the description carries the full burden of disclosure. It states 'Schedule' (creation) but does not mention side effects, permission requirements, idempotency, or what happens on success. No behavioral traits beyond the basic action are disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: three sentences, front-loaded with the action, and no redundant words. Each sentence adds value: purpose, input format, and usage note.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's 6 parameters and output schema, the description is brief but covers the core scheduling function. It omits workflow details (e.g., verifying the campaign exists, or the creation's effect on other data). The output schema documents return values, but the description alone lacks full context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameter documentation is thorough. The description adds marginal value by reinforcing the ISO-8601 format for startAt and noting endAt is optional, but does not provide new semantic information beyond what is already in the input schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Schedule a campaign session.' It uses a specific verb and resource, distinguishing it from siblings like list_sessions, update_session, and get_session. The added context about laying out planned arcs or recurring play nights reinforces its use case.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides basic usage guidance: 'Provide an ISO-8601 startAt; endAt is optional.' It implies the tool is for scheduling, but lacks explicit when-to-use or when-not-to-use guidance compared to alternatives like update_session. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_session_summary_mnemons (Grade A)
Create SessionSummary mnemons (structured summaries of game sessions). Players may not call this — GM/co-GM only.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | ||
| campaignId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate a write operation (readOnlyHint=false) and non-destructive nature. The description adds the valuable behavioral constraint that only GMs can call this, going beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, each essential. First sentence defines the core function, second adds important usage guidance. No redundant words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex nested schema (blocks array) and the presence of an output schema, the description covers the key aspects: what it creates and who can use it. It does not explain the structure of items, but the schema's own descriptions handle that.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is high (many fields have descriptions in the schema), so the description does not need to compensate. The description adds no new parameter information, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the specific action (create) and resource (SessionSummary mnemons) with context (structured summaries of game sessions). Differentiates from sibling tools like create_journal_mnemons by naming the specific mnemon type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly restricts usage to GM/co-GM only, providing clear access control. However, it does not explain when to use this vs. update_session_summary_mnemons or other create tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_mnemon_relationship (Grade C, destructive)
Delete a relationship by id.
| Name | Required | Description | Default |
|---|---|---|---|
| campaignId | Yes | ||
| relationshipId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| success | Yes | |
| campaignId | Yes | |
| relationshipId | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already set destructiveHint=true, so the destructive nature is known. The description adds no additional behavioral details, such as whether the operation is reversible, what side effects occur, or what happens when the relationship does not exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, succinct with minimal token cost, but it could be more informative without sacrificing that brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema and annotations, the description still fails to provide sufficient context about the required parameters and the overall effect. The agent might incorrectly assume only one identifier is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description should compensate by explaining parameters. It does not clarify what campaignId is used for (e.g., scoping the relationship) or how relationshipId relates. The phrase 'by id' is insufficient for an agent to understand the parameter roles.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Delete) and the resource (relationship), but the phrase 'by id' is ambiguous given the requirement for both campaignId and relationshipId. It does not explicitly differentiate from sibling tools like create_mnemon_relationship or list_mnemon_relationships, though the verb 'delete' implies distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives, such as unlisting or archiving relationships. No prerequisites or context about user permissions or ownership are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
describe_mnemon_types (Grade A, read-only)
Returns a catalog of all mnemon types, their type-specific fields, and the full valid relationship matrix (sourceType → label → targetType). Call this before create_mnemon or create_mnemon_relationship when you are unsure which type or label to use. NPC subtype is strictly FACTION | INDIVIDUAL.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Output Schema
| Name | Required | Description |
|---|---|---|
| types | Yes | |
| blockOps | Yes | |
| htmlFormat | Yes | |
| commonFields | Yes | |
| idReferences | Yes | |
| relationships | Yes | |
| relationshipLabels | Yes |
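An agent could use the returned matrix to validate a triple before calling create_mnemon_relationship. The actual response shape is not shown here, so the tuples below are illustrative only; only the type names and labels come from the tool descriptions above.

```python
# Hypothetical flattened view of the (sourceType, label, targetType)
# matrix; a real response would be parsed from the tool's output.
matrix = {
    ("LOCATION", "PARENT_OF", "LOCATION"),
    ("LOCATION", "CONTAINS", "NPC"),
    ("NPC", "LOCATED_IN", "LOCATION"),
}

def is_valid_relationship(source_type, label, target_type):
    """Return True if the triple appears in the relationship matrix."""
    return (source_type, label, target_type) in matrix
```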
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. Description adds value by specifying the catalog content and NPC subtype restriction, enhancing behavioral understanding.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences: the first states the purpose, the second gives usage guidance, and the third adds a constraint. No superfluous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and an output schema, the description sufficiently covers what the tool returns and when to use it. The NPC subtype constraint adds necessary context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is 100%. Description does not need to add parameter info. Baseline for zero parameters is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it returns a catalog of all mnemon types, including type-specific fields and relationship matrix. It distinguishes from sibling create/update tools by specifying it should be called before creation when unsure.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs to call before create_mnemon or create_mnemon_relationship when unsure about type or label. Also provides a constraint on NPC subtype, aiding correct usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forum_create_topic (A)
Create a new forum topic (bug report, feature request, or general discussion). Always call forum_search first to check for duplicates. Call forum_list_categories to get the correct categoryId.
| Name | Required | Description | Default |
|---|---|---|---|
| raw | Yes | Topic body in Markdown. | |
| title | Yes | Topic title. Keep it concise and descriptive. | |
| categoryId | Yes | Numeric category ID. Call forum_list_categories first if unsure. |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| topic_id | No | |
| username | No | |
| topic_slug | No | |
| post_number | No |
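The search-first workflow the description prescribes can be sketched as a sequence of MCP `tools/call` JSON-RPC payloads. The envelope shape follows the MCP specification; all argument values (query text, title, category ID) are illustrative:

```python
import json

# The workflow forum_create_topic's description prescribes, as MCP
# "tools/call" JSON-RPC payloads. Argument values are invented examples.
def tool_call(request_id, name, arguments):
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# 1. Check for duplicates first.
search = tool_call(1, "forum_search", {"q": "session timer resets #bug-reports"})
# 2. Resolve the numeric categoryId.
categories = tool_call(2, "forum_list_categories", {})
# 3. Only then create the topic.
create = tool_call(3, "forum_create_topic", {
    "title": "Session timer resets on reconnect",
    "raw": "**Steps to reproduce:**\n1. Start a session\n2. Reconnect",
    "categoryId": 7,  # taken from the forum_list_categories response
})

print(json.dumps(create, indent=2))
```

An agent that skips steps 1 and 2 risks creating a duplicate topic or failing on an invalid categoryId.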
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations (readOnlyHint=false, destructiveHint=false) already indicate non-read-only and non-destructive. The description adds context about duplicate checking and category retrieval, but omits what happens when a duplicate exists or an error occurs. Still good coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose. Every word earns its place. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter creation tool with output schema, the description covers prerequisites and usage context. No gaps evident.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers all 3 parameters (100%). Description adds value: clarifies `raw` is Markdown, advises concise title, and reminds to use forum_list_categories for categoryId. Provides helpful guidance beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Create a new forum topic' with explicit categories (bug report, feature request, or general discussion). Distinguishes from sibling tools like forum_search, forum_list_categories, and forum_reply.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit prerequisites: 'Always call forum_search first to check for duplicates' and 'Call forum_list_categories to get the correct categoryId.' This tells when to use and what to do before using.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forum_get_latest_topics (A, Read-only)
Get the latest active topics across all forum categories.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| topic_list | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false, so the tool is safe. The description adds no further behavioral context (e.g., ordering, limit). For a simple tool, the description's value beyond annotations is limited.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence of 10 words that directly states purpose. No redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has no parameters, annotations are provided, and an output schema exists, the description covers the essential purpose. However, it could mention what the output contains or any default limits.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters (schema coverage 100%), so the description cannot add parameter meaning. Per rubric, 0 parameters gives baseline 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get the latest active topics across all forum categories,' specifying a specific action (get), resource (latest active topics), and scope (all categories). This distinguishes it from sibling tools like forum_list_topics and forum_read_topic.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when needing recent active topics, but does not explicitly state when not to use or mention alternatives among siblings like forum_search or forum_list_topics.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forum_get_notifications (A, Read-only)
Get the current user's forum notifications (replies, mentions, likes).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| notifications | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and non-destructive. Description adds minimal behavioral context beyond listing notification types. No mention of pagination or filtering, but no parameters exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action, every word contributes. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple with no parameters and an output schema present. Description adequately covers purpose and content scope given the context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in schema, so description need not add parameter info. Baseline 4 applies as tool is parameterless and description does not mislead.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'Get', resource 'forum notifications', and specifies types (replies, mentions, likes). It distinguishes from sibling tools like forum_get_user_posts or forum_read_topic.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when/when-not instructions, but the narrow scope and context signal make usage clear. Implied by name and description; no alternatives needed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forum_get_user_posts (A, Read-only)
List topics created by the current user on the forum.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| topic_list | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true and destructiveHint=false, so the safety profile is covered. The description adds no behavioral context beyond annotations (e.g., authentication, rate limits, or scope). No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, 10 words, precisely conveys purpose without waste. Front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, an output schema covering the return shape, and annotations covering safety, the description is largely complete. However, it could state the authentication requirement implied by "current user".
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters with 100% schema coverage. Description adds no param info, but none is needed. Baseline for 0 params is 4, and description does not detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'list' and resource 'topics created by the current user on the forum'. It distinguishes from sibling tools like forum_list_topics or forum_get_latest_topics by specifying user-specific scope, though it does not explicitly contrast with them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. Siblings like forum_list_topics (all topics) or forum_get_latest_topics exist but are not mentioned. The description implies usage for user's topics but lacks explicit when-not or comparative context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forum_list_categories (A, Read-only)
List all Discourse forum categories at community.argo.games. Call this first when the user wants to post a bug report or feature request — you need the categoryId to create a topic.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| category_list | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false. The description adds useful context: it lists ALL categories and is a prerequisite for topic creation, beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise, front-loaded sentences: first states the action and scope, second gives usage context. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool with an output schema, the description adequately covers purpose and usage. No gaps given the simplicity of the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters with 100% schema description coverage. With no parameters, there is nothing to add; baseline 4 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it lists all Discourse forum categories at a specific domain, and explicitly ties it to needing the categoryId for creating a topic, distinguishing it from siblings like forum_create_topic or forum_list_topics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance: call first when user wants to post a bug report or feature request. Does not include exclusions, but the positive instruction is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forum_list_topics (A, Read-only)
List topics in a specific forum category. Use forum_list_categories to get category slugs and IDs.
| Name | Required | Description | Default |
|---|---|---|---|
| categoryId | Yes | Numeric category ID. | |
| categorySlug | Yes | Category slug (e.g. 'bug-reports'). |
Output Schema
| Name | Required | Description |
|---|---|---|
| topic_list | Yes |
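Since this tool requires both the numeric ID and the slug for the same category (a common Discourse API shape), a small client-side guard can keep the two arguments consistent. A sketch, with an assumed slug-to-ID mapping that would in practice come from forum_list_categories:

```python
# Hypothetical guard for forum_list_topics, which requires both a
# categoryId and a matching categorySlug. The mapping below is invented;
# a real client would build it from the forum_list_categories response.
known_categories = {"bug-reports": 7, "feature-requests": 8}

def list_topics_args(slug):
    """Build a consistent argument dict for forum_list_topics."""
    if slug not in known_categories:
        raise ValueError(f"unknown category slug: {slug}")
    return {"categorySlug": slug, "categoryId": known_categories[slug]}

print(list_topics_args("bug-reports"))
# {'categorySlug': 'bug-reports', 'categoryId': 7}
```

Deriving both arguments from one source of truth prevents the mismatched-ID-and-slug calls that paired parameters invite.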
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false. The description adds no extra behavioral context such as return volume, sorting, or pagination, but it does not contradict the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences, front-loaded with the action and resource, and every word contributes value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple nature of the tool, the presence of an output schema, and complete schema coverage, the description provides all necessary context for an agent to select and invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage with clear explanations for categoryId and categorySlug. The description does not add further parameter meaning beyond the schema, so a baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List topics' and the resource 'forum category', with a direct pointer to sibling tool forum_list_categories for obtaining category slugs and IDs, differentiating it from other forum tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises using forum_list_categories to get the required categorySlug and categoryId parameters, providing clear context on preparation and when this tool should be invoked.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forum_read_topic (A, Read-only)
Read the full content of a forum topic including all posts and replies.
| Name | Required | Description | Default |
|---|---|---|---|
| topicId | Yes | Numeric Discourse topic ID. |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| slug | Yes | |
| title | Yes | |
| post_stream | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, which align with the description's read action. The description does not add further behavioral details such as authentication requirements, rate limits, or potential side effects. With annotations covering the core safety profile, a score of 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear, front-loaded sentence with no unnecessary words. It efficiently conveys the tool's purpose without fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one parameter, annotations for safety, and an output schema (which presumably details return structure), the description is mostly complete. It could mention potential prerequisites (e.g., topic existence), but for a simple read operation, the current description is adequately informative.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already describes topicId as 'Numeric Discourse topic ID' with 100% coverage. The description does not add additional meaning to the parameter beyond what the schema provides. Baseline 3 is correct.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Read'), the resource ('forum topic'), and the scope ('full content including all posts and replies'). It effectively distinguishes this tool from siblings like forum_list_topics (which lists topics) and forum_reply (which adds replies).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for retrieving the complete content of a specific topic, but it does not explicitly state when to use it versus alternatives (e.g., forum_get_latest_topics for a list of topics). No explicit when-not-to-use guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forum_reply (B)
Reply to an existing forum topic.
| Name | Required | Description | Default |
|---|---|---|---|
| raw | Yes | Reply body in Markdown. | |
| topicId | Yes | Numeric Discourse topic ID to reply to. |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| topic_id | No | |
| username | No | |
| topic_slug | No | |
| post_number | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description adds no behavioral details beyond the basic operation. Annotations indicate it is a write operation (readOnlyHint=false) but not destructive. No mention of side effects, permissions, or rate limits. The openWorldHint is present but unexplained.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no fluff, front-loaded purpose statement. Very concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple write tool; however, it lacks context on authentication requirements and potential error cases (the return value is covered by the output schema).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers both parameters (raw, topicId) with descriptions. The tool description adds no additional parameter info. Schema coverage is 100%, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action (reply) and the resource (existing forum topic). It is distinct from sibling tools like forum_create_topic and forum_read_topic.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. Does not mention prerequisites or context such as needing to be logged in or having permission.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forum_search (A, Read-only)
Search forum topics and posts. Supports Discourse search syntax: #category-slug to filter by category, @username to filter by author. Always search before creating a bug report or feature request to avoid duplicates.
| Name | Required | Description | Default |
|---|---|---|---|
| q | Yes | Search query. Supports Discourse search syntax (e.g. #category, @username). |
Output Schema
| Name | Required | Description |
|---|---|---|
| posts | No | |
| topics | No | |
| grouped_search_result | No |
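The Discourse filters named in the description compose into a single query string. A minimal helper sketch (the category slug and username below are invented examples):

```python
# Build a Discourse-style query string for forum_search's "q" parameter.
# The "#category-slug" and "@username" filters come from the tool
# description; the example values are illustrative.
def build_query(text, category=None, author=None):
    parts = [text]
    if category:
        parts.append(f"#{category}")  # filter by category slug
    if author:
        parts.append(f"@{author}")    # filter by author
    return " ".join(parts)

q = build_query("dice roller crash", category="bug-reports", author="gm_ada")
print(q)  # dice roller crash #bug-reports @gm_ada
```

Scoping the duplicate check to the target category this way keeps the pre-creation search recommended by forum_create_topic's description both cheap and precise.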
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and non-destructive behavior. The description adds value by detailing supported search syntax (#category-slug, @username), which is beyond the basic read-only hint. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, each earning its place. The first states the purpose, the second provides usage guidance. No fluff, front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, the description need not explain return values. It covers syntax and usage context adequately. Could mention pagination or search limits but not necessary for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description for 'q' already lists syntax support, but the description reinforces it with concrete examples. With 100% schema coverage, the description adds meaningful context for using the parameter effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Search' on resource 'forum topics and posts', clearly distinguishing from sibling tools like forum_list_topics or forum_read_topic. It also provides context on supporting Discourse syntax, adding depth.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Always search before creating a bug report or feature request to avoid duplicates', giving clear usage context. While it doesn't directly contrast with siblings, the advice is actionable and appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_campaign (A, Read-only)
Retrieve details of an Argo campaign (name, description, rule system, co-GMs).
| Name | Required | Description | Default |
|---|---|---|---|
| campaignId | Yes | The ID of the campaign to retrieve. |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| ruleSystem | No | |
| campaignName | Yes | |
| gameMasterId | Yes | |
| gameSystemSlug | No | |
| coGameMasterIds | No | |
| campaignDescription | No |
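Per the output schema, only id, campaignName, and gameMasterId are guaranteed; the remaining fields are optional and should be read defensively. A sketch with an invented response payload:

```python
# Defensive handling of a get_campaign result. Per the output schema only
# id, campaignName, and gameMasterId are required, so optional fields are
# read with fallbacks. The response dict here is an invented example.
response = {
    "id": "c_123",
    "campaignName": "Shadows over Meridia",
    "gameMasterId": "u_42",
}

co_gms = response.get("coGameMasterIds", [])      # optional, default empty
rule_system = response.get("ruleSystem", "unknown")  # optional
print(f"{response['campaignName']} ({rule_system}), co-GMs: {len(co_gms)}")
```

Treating every non-required field as possibly absent avoids KeyError failures on minimally populated campaigns.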
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the safety profile is clear. The description adds the specific fields returned (name, description, rule system, co-GMs), but does not mention authentication requirements or other behavioral notes beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that immediately conveys the tool's purpose and the type of data returned. No redundant or extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter, output schema present), the description is complete. It clearly states what the tool does and what data it provides, without needing to explain return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter 'campaignId', with a clear description in the schema. The tool description adds no additional semantic context for the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Retrieve') and a clear resource ('Argo campaign'), and lists the details returned (name, description, rule system, co-GMs). It distinguishes from sibling 'list_campaigns' by implying this retrieves full details of a specific campaign.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when you need full details of a single campaign, but does not explicitly state when to use versus alternatives like 'list_campaigns', nor does it provide prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_guild (A, Read-only)
Retrieve full details of a guild (members, campaigns, calendar metadata).
| Name | Required | Description | Default |
|---|---|---|---|
| guildId | Yes | Guild ID to retrieve. |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| members | No | |
| ownerId | Yes | |
| summary | No | |
| campaignIds | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare the tool as read-only and non-destructive. The description adds value by listing what data is returned (members, campaigns, calendar metadata), complementing the annotations without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the action and resource. Every word is necessary and there is no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With an output schema present and clear annotations, the description covers the essential aspects. Minor omission: no mention of required permissions or the format of guildId, but overall sufficient for tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter 'guildId' is fully described in the input schema (100% coverage). The description adds no additional meaning beyond the schema, so baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb 'Retrieve' and specifies the resource 'full details of a guild' including specific components. This distinguishes it from sibling tools like list_guilds or get_campaign.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for fetching guild details but does not explicitly state when to use this tool versus alternatives like get_campaign or list_guild_members. No when-not or exclusion conditions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_mnemon (Read-only)
Get the full details of a specific mnemon entry (title, blocks, type properties).
| Name | Required | Description | Default |
|---|---|---|---|
| entryId | Yes | Mnemon entry ID (hex) or exact title. | |
| campaignId | Yes | Campaign ID. |
Output Schema
| Name | Required | Description |
|---|---|---|
| type | Yes | |
| title | Yes | |
| blocks | Yes | |
| entryId | Yes | |
| typeProperties | No |
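A sketch of a call using the documented flexibility of `entryId`, which accepts either a hex ID or an exact title; both values below are placeholders.

```python
# get_mnemon accepts either a hex entry ID or an exact title for entryId.
# Both argument values are placeholders.
arguments = {
    "entryId": "Captain Vrell",  # exact-title form of entryId
    "campaignId": "camp_42",
}
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_mnemon", "arguments": arguments},
}
```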
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. Description confirms the read-only nature ('Get the full details'), which aligns with annotations. No additional behavioral details are needed, but no contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with clear verb, resource, and scope. No unnecessary words; every part adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple (read-only, 2 params). Schema fully describes parameters, annotations cover safety, and output schema exists. Description is complete enough for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for both parameters (entryId and campaignId), including acceptable formats. Description does not add any further semantics beyond what the schema provides, so baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description specifies 'Get the full details of a specific mnemon entry' with explicit fields (title, blocks, type properties). This clearly distinguishes it from siblings like list_mnemons (list all) and update_mnemons_content (update).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage when needing full details of one mnemon, but provides no explicit when-to-use or when-not-to-use guidance, nor alternatives. Siblings include list_mnemons for listing, but no exclusion criteria are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_session (Read-only)
Get details of a single campaign session.
| Name | Required | Description | Default |
|---|---|---|---|
| sessionId | Yes | Session ID. | |
| campaignId | Yes | Campaign ID. |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| endAt | No | |
| title | Yes | |
| guildId | No | |
| startAt | Yes | |
| createdAt | No | |
| updatedAt | No | |
| campaignId | Yes | |
| description | No | |
| invitedUserIds | No | |
| createdByUserId | No | |
| invitedPartyIds | No | |
| attendanceReplies | No |
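Since both parameters are required, a client can validate before sending. A sketch of such a guard; the helper name and request `id` are illustrative only.

```python
# Sketch: validate required get_session arguments before building the call.
REQUIRED = {"sessionId", "campaignId"}

def build_get_session(args: dict) -> dict:
    missing = REQUIRED - args.keys()
    if missing:
        raise ValueError(f"missing required arguments: {sorted(missing)}")
    return {
        "jsonrpc": "2.0",
        "id": 3,
        "method": "tools/call",
        "params": {"name": "get_session", "arguments": args},
    }
```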
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false, so the agent knows it's safe. The description adds no behavioral context beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no redundant information, achieving high conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple get-by-ID operation, the description is sufficient. The presence of an output schema covers return details, requiring no additional explanation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with basic parameter descriptions. The tool description adds no extra meaning beyond the schema, so baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves details of a single campaign session. It uses a specific verb-resource combination and is distinguishable from sibling tools like list_sessions, create_session, and update_session.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for fetching a specific session given IDs, but provides no explicit guidance on when to use this tool versus alternatives, nor any exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
invite_guild_member
Invite a user to join the guild. Owner/Admin only.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | Argo user ID to invite. | |
| guildId | Yes | Guild ID. |
Output Schema
| Name | Required | Description |
|---|---|---|
| userId | Yes | |
| guildId | Yes | |
| success | Yes |
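The output schema declares `success` as required, so a client can branch on it directly. A sketch with a hypothetical result payload:

```python
# Hypothetical invite_guild_member result; field names follow the output schema.
response_result = {"userId": "user_9", "guildId": "guild_123", "success": True}

def invite_succeeded(result: dict) -> bool:
    # `success` is a required boolean per the output schema
    return bool(result.get("success"))
```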
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate write operation (readOnlyHint=false) and non-destructive nature (destructiveHint=false). Description adds valuable behavioral context: the permission requirement ('Owner/Admin only'). It does not detail success/failure behavior, but overall transparency is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no filler. First sentence states the action, second explains the permission constraint. Every part earns its place; it is optimally concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple invite tool with an existing output schema, the description is complete enough. It covers the essential action and permission. It could optionally mention the invitation process or prerequisites, but is not critically lacking.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema description coverage is 100% with clear descriptions for both parameters. The description adds no additional information about parameters, per the rule that baseline is 3 when schema coverage is high.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: inviting a user to join a guild. It specifies the action ('invite') and the resource ('guild'), and distinguishes itself from sibling tools like 'remove_guild_member' or 'set_guild_member_role' through the 'Owner/Admin only' permission constraint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly advises usage only when the user has Owner/Admin role, but does not explicitly mention when not to use or provide alternative tools. However, given context of siblings, this is a minor gap; the core condition is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
invite_user_by_email (Destructive)
Send Argo email invitations to up to 20 addresses on behalf of the current user. Recipients receive a sign-up link. No campaign or guild context is required.
| Name | Required | Description | Default |
|---|---|---|---|
| emails | Yes | Email addresses to invite. Up to 20 per call. Each address that already corresponds to an Argo user will be skipped server-side. |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes |
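Because the tool caps each call at 20 addresses, a client with a larger list must batch. A sketch of that chunking; the addresses are placeholders.

```python
# The tool accepts up to 20 addresses per call; chunk larger lists client-side.
def chunk_emails(emails, size=20):
    return [emails[i:i + size] for i in range(0, len(emails), size)]

calls = [
    {"jsonrpc": "2.0", "id": i, "method": "tools/call",
     "params": {"name": "invite_user_by_email", "arguments": {"emails": batch}}}
    for i, batch in enumerate(
        chunk_emails([f"user{n}@example.com" for n in range(45)]), start=1)
]
# 45 addresses -> 3 calls: 20, 20, and 5 addresses
```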
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description explains the action (send invitations) and that existing users are skipped, but does not address implications of destructiveHint=true or other side effects beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with action, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with 1 parameter and output schema, the description covers all necessary context: action, recipient count, and behavior for existing users.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds value by explaining the limit (up to 20) and server-side skipping of existing users.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Send Argo email invitations' with specific verb and resource, and distinguishes from sibling tools like invite_guild_member by noting no campaign/guild context required.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description says 'No campaign or guild context is required,' implying when to use, but does not explicitly mention alternatives or when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_campaigns (Read-only)
List all Argo campaigns the current grant token has access to, including the access level ("read" or "read+write") for each. Call this first when the user has not provided a campaign ID — it returns all campaign IDs and names you can then use with other tools. In responses, refer to campaigns by campaignName — never expose the id field to the user.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| idMap | Yes | |
| campaigns | Yes |
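A sketch of the parameterless call and of extracting display names per the "refer to campaigns by campaignName" rule; the result payload is hypothetical and only its top-level keys (`idMap`, `campaigns`) come from the output schema.

```python
# list_campaigns takes no arguments.
request = {"jsonrpc": "2.0", "id": 6, "method": "tools/call",
           "params": {"name": "list_campaigns", "arguments": {}}}

# Hypothetical result shaped per the output schema (idMap, campaigns);
# inner field names are assumptions.
result = {"idMap": {"Shadow of the Spire": "c1"},
          "campaigns": [{"campaignName": "Shadow of the Spire",
                         "access": "read+write"}]}
names = [c["campaignName"] for c in result["campaigns"]]  # safe to show users
```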
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds that it lists all accessible campaigns with their access levels, which is consistent and adds context without contradicting annotations. No additional behavioral quirks mentioned, but sufficient given annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each serving a distinct purpose: purpose, usage, and output handling. No wasted words, well-structured, and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, existing annotations, and existing output schema, the description covers purpose, usage context, and output interpretation. It tells the agent what to expect (campaign IDs, names, access levels) and how to present results, making it complete for a simple list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has no parameters (0 params), and schema description coverage is 100%. Description adds no parameter info but none needed. Baseline for 0 params is 4, and description does not detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists Argo campaigns with access levels, distinguishing it from siblings like get_campaign. It specifies the verb 'list', the resource 'campaigns', and includes details about access levels, making purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says to call this when the user has not provided a campaign ID, and that it returns IDs and names for use with other tools. Also instructs to refer to campaigns by campaignName and not expose id, providing clear when-to-use and how-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_co_gms (Read-only)
List the assistant GMs (co-GMs) of a campaign.
| Name | Required | Description | Default |
|---|---|---|---|
| campaignId | Yes | Campaign ID. |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | Yes |
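The output schema exposes only a required `items` array. A sketch of consuming it; the per-item field name is an assumption.

```python
# Hypothetical list_co_gms result; `items` is the only schema-declared field,
# and `userId` inside each item is an assumed name.
result = {"items": [{"userId": "u1"}, {"userId": "u2"}]}
co_gm_count = len(result["items"])
```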
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, indicating a safe read operation. The description's use of 'List' aligns with this. However, no additional behavioral details (e.g., impact, permissions) are provided; the description adds minimal value beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no redundant information. Every word contributes to the purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (as indicated by context signals), the description does not need to elaborate on return values. For a straightforward list operation, the description is adequate, though it could mention what fields are returned (e.g., user names).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a single parameter 'campaignId' described as 'Campaign ID.' The description does not add any further meaning or context to the parameter, so it meets the baseline but provides no extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the action ('List') and the resource ('assistant GMs of a campaign'). It distinguishes itself from sibling tools like 'add_co_gm' and 'remove_co_gm' by stating it lists co-GMs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives (e.g., add_co_gm, remove_co_gm). The description does not mention scenarios or prerequisites for listing co-GMs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_friends (Read-only)
List the current user's accepted friends.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | Yes |
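As with the other parameterless tools, the invocation reduces to a bare `tools/call` request. A sketch (the request `id` is arbitrary):

```python
# list_friends takes no arguments; only the accepted-friends list is returned.
request = {"jsonrpc": "2.0", "id": 8, "method": "tools/call",
           "params": {"name": "list_friends", "arguments": {}}}
```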
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the description's mention of 'accepted friends' adds minimal behavioral context beyond the safety profile. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no wasted words. Front-loaded with purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no parameters, an output schema exists (per context), and the description fully explains the output: a list of accepted friends. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters and 100% schema coverage, the description does not need to add parameter information. It correctly implies no inputs are required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description precisely states 'List the current user's accepted friends', using a specific verb and resource. It clearly distinguishes from sibling tools like list_received_friend_requests and list_sent_friend_requests.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool over alternatives, though the sibling tool names hint at the distinction. No when-not-to-use or alternatives are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_guild_members (Read-only)
List the members of a guild (id, role, status, invitedAt, joinedAt).
| Name | Required | Description | Default |
|---|---|---|---|
| guildId | Yes | Guild ID. |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | Yes |
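A sketch of filtering the result by the per-member fields the description names (id, role, status, invitedAt, joinedAt); the concrete status values are assumptions.

```python
# Hypothetical list_guild_members result; "joined"/"invited" status values
# are assumed, not documented.
result = {"items": [
    {"id": "u1", "role": "Owner", "status": "joined"},
    {"id": "u2", "role": "Member", "status": "invited"},
]}
joined = [m for m in result["items"] if m["status"] == "joined"]
```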
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and non-destructive behavior. Description adds no behavioral details beyond listing returned fields, such as pagination or authorization requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with verb, no unnecessary words. Perfectly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (1 param, no enums, output schema present), description covers the basics. Could mention pagination but not required.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and description merely restates the need for a guild ID without adding new context. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action (list) and resource (guild members), and lists the fields returned, distinguishing it from other list tools like list_campaigns or list_guilds.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is implied but no explicit guidance on when to use vs alternatives, such as invite_guild_member or remove_guild_member. No exclusions or context provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_guilds (Read-only)
List the guilds the current user belongs to, with role (Owner/Admin/Member), member count, and campaign count. Requires the guild.read scope. In responses, refer to guilds by name — never expose guildId to the user.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| idMap | Yes | |
| guilds | Yes |
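A sketch of formatting results under the "refer to guilds by name, never expose guildId" rule; only the top-level keys (`idMap`, `guilds`) come from the output schema, and the inner field names are assumptions.

```python
# Hypothetical list_guilds result; inner field names are assumed.
result = {"idMap": {"The Iron Vanguard": "g1"},
          "guilds": [{"name": "The Iron Vanguard", "role": "Owner"}]}
# Show names and roles only; guildId stays internal.
display = [f'{g["name"]} ({g["role"]})' for g in result["guilds"]]
```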
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false. The description adds behavioral details: required scope, specific return data, and a privacy rule for the assistant. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three short sentences: the first states purpose and output, the second a scope prerequisite, and the third a response guideline. No unnecessary words, efficiently front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and the existence of an output schema, the description covers all essential aspects: purpose, return data, required scope, and a critical behavioral rule. It is sufficiently complete for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are zero parameters, and schema coverage is 100%, so no parameter information is needed. The description compensates by adding context about scope and response rules, meeting the baseline expectation for parameterless tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (List guilds user belongs to) and specifies the returned fields (role, member count, campaign count). It distinguishes from sibling tools like 'get_guild' (single guild) and 'list_guild_members'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a prerequisite ('Requires the guild.read scope') and a response rule ('never expose guildId'). It does not explicitly compare with alternatives, but given the simple operation and sibling tools, the context is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_mnemon_relationships (Read-only)
List the relationships of a mnemon entry, split into outgoing edges, incoming edges, and a flat list of linked entries (entryId/title/type/relationshipTypes). Use this to find members of a faction, allies/enemies of an NPC, etc.
| Name | Required | Description | Default |
|---|---|---|---|
| entryId | Yes | ||
| campaignId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| linked | Yes | |
| incoming | Yes | |
| outgoing | Yes |
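A sketch of the faction-membership use case the description suggests, read off the flat `linked` list (entryId/title/type/relationshipTypes); the relationship-type values are assumptions.

```python
# Hypothetical list_mnemon_relationships result; "member_of"/"located_in"
# relationship types are assumed, not documented.
result = {
    "linked": [
        {"entryId": "m1", "title": "Sera", "type": "NPC",
         "relationshipTypes": ["member_of"]},
        {"entryId": "m2", "title": "Keep", "type": "Location",
         "relationshipTypes": ["located_in"]},
    ],
    "incoming": [],
    "outgoing": [],
}
members = [e["title"] for e in result["linked"]
           if "member_of" in e["relationshipTypes"]]
```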
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false, and the description reinforces a read-only listing operation. Additionally, it details the response structure (outgoing, incoming, flat list with specific fields), adding behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence with a use-case suffix. Every word adds value, and it is front-loaded with the core action. No redundancy or waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has an output schema, and the description explains the output structure, so return values are covered. However, the parameters are not explained, and given the simplicity of the tool (2 required params), this omission reduces completeness. Sibling tools provide context but are not referenced.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0% description coverage, meaning no parameter descriptions in the schema. The tool description does not explain the two parameters (entryId, campaignId) or their constraints (minLength, required). This is a significant gap, as the description should compensate for the lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists relationships of a mnemon entry, breaking down outgoing edges, incoming edges, and a flat list. It also provides concrete examples like finding faction members or NPC allies/enemies. This is a specific verb+resource with clear differentiation from siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides usage context with examples (find members of a faction, allies/enemies of an NPC), indicating when to use this tool. However, it does not explicitly state when not to use it or compare to alternatives like create_mnemon_relationship or get_mnemon, so some guidance is implicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_mnemons (A, Read-only)
List mnemon (lore/memory) entries for an Argo campaign. Optional filters: title (case-insensitive substring on entry title only) and type (e.g. NPC, Location, Quest). Returns all matching entries — pagination is automatic. In responses, refer to entries by title — never expose entryId to the user.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Mnemon type filter (NPC, Location, Quest, …). | |
| title | No | Case-insensitive substring filter on title. | |
| campaignId | Yes | ID of the campaign. |
Output Schema
| Name | Required | Description |
|---|---|---|
| idMap | Yes | |
| entries | Yes |
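The filter semantics described above (optional `title` and `type`, required `campaignId`) can be sketched as a small client-side argument builder. The helper name and the campaign ID are illustrative, not part of the Argo API; only the parameter names come from the table above.

```python
# Hypothetical helper that assembles the arguments for a list_mnemons call.
# Only filters the caller actually supplied are included, matching the
# "optional filters" behavior in the tool description.
def build_list_mnemons_args(campaign_id, title=None, mnemon_type=None):
    args = {"campaignId": campaign_id}
    if title is not None:
        args["title"] = title          # case-insensitive substring on title only
    if mnemon_type is not None:
        args["type"] = mnemon_type     # e.g. "NPC", "Location", "Quest"
    return args

# All NPC entries whose title contains "vel" (case-insensitive):
print(build_list_mnemons_args("camp-123", title="vel", mnemon_type="NPC"))
# → {'campaignId': 'camp-123', 'title': 'vel', 'type': 'NPC'}
```

Omitting both filters simply lists every entry in the campaign, with pagination handled server-side per the description.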
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and destructiveHint=false. The description adds behavioral details: automatic pagination and the directive to never expose entryId to the user. This goes beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first states purpose, second lists filters and key behaviors (pagination, response guidance). No redundancy or fluff; information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters, existing output schema, and annotations, the description fully covers the tool's purpose, filters, pagination, and response behavior. No notable gaps given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds semantic value: specifies that the title filter is case-insensitive and applies only to the entry title, and gives example values for type (NPC, Location, Quest). These details help the agent use parameters correctly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and resource 'mnemon entries for an Argo campaign'. It distinguishes from siblings like 'get_mnemon' (single entry) and 'list_mnemon_relationships' (relationships) by focusing on listing entries with optional filters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on how to use the filters (title as case-insensitive substring, type examples) and instructs the agent to refer to entries by title, avoiding exposure of entryId. However, it does not explicitly mention when to use this tool over alternatives (e.g., get_mnemon for single entry).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_received_friend_requests (A, Read-only)
List incoming friend requests awaiting your response.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Output Schema
| Name | Required | Description |
|---|---|---|
| items | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, so the description's claim of 'List' is consistent with a read operation. The description does not elaborate on behavior beyond listing, but the reported output schema mitigates the need for further detail. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that immediately conveys the tool's purpose. It is front-loaded with the key action and resource, containing no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with no parameters and an output schema, the description fully captures the necessary context. It informs the agent of the tool's purpose without needing elaboration on return values or side effects, which are covered by the output schema and annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema coverage is 100%. The description adds no parameter information, which is acceptable as there are none. Per guidelines, baseline for 0 parameters is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists incoming friend requests awaiting a response. This directly addresses the resource (friend requests) and the action (list), and distinguishes it from siblings like 'list_sent_friend_requests' (outgoing) and 'send_friend_request' (creation).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use: when you need to view pending incoming friend requests. It does not explicitly state when not to use or mention alternatives, but the tool name and sibling set provide enough context for an agent to infer appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_sent_friend_requests (A, Read-only)
List outgoing friend requests that are still pending.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the description adds only the 'pending' state clarification. The behavior is fully captured by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that is direct and front-loads the purpose. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple, parameterless tool with an output schema, the description covers all necessary context without gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so the description need not add any parameter semantics. Baseline score of 4 applies per rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists outgoing friend requests that are still pending, distinguishing it from siblings like list_received_friend_requests (incoming) and list_friends (accepted).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for reviewing pending sent requests, but does not explicitly state when to use it versus alternatives like cancel_friend_request or send_friend_request. No exclusions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_sessions (A, Read-only)
List campaign sessions for a given month (defaults to the current month).
| Name | Required | Description | Default |
|---|---|---|---|
| year | No | Calendar year. Defaults to current year. | |
| month | No | Calendar month (1-12). Defaults to current month. | |
| campaignId | Yes | Campaign ID. |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | Yes |
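The defaulting behavior stated above ("defaults to the current month") can be made explicit with a small sketch. Note the server applies these defaults itself when `year`/`month` are omitted; this illustrative helper just mirrors that behavior on the client side, and its name is an assumption.

```python
from datetime import date

# Sketch of the defaulting the description implies: when year or month is
# omitted, fall back to the current calendar year/month.
def build_list_sessions_args(campaign_id, year=None, month=None):
    today = date.today()
    y = year if year is not None else today.year
    m = month if month is not None else today.month
    if not 1 <= m <= 12:
        raise ValueError("month must be in 1-12")
    return {"campaignId": campaign_id, "year": y, "month": m}

# Sessions for February 2030:
build_list_sessions_args("camp-123", year=2030, month=2)
```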
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true. The description adds the default behavior (current month) and implies a list return, but does not disclose pagination, ordering, or limits. Adequate but not rich beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no extraneous words. Front-loads the core purpose and scope. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and simple read-only nature, the description is fully adequate. It covers the essential filtering context (month, default) and requires no additional explanation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all three parameters. The description reinforces the 'given month' and default behavior but adds no new meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists campaign sessions with a monthly filter and defaulting to the current month. It distinguishes from sibling tools like get_session (single) and create_session (creation) via the verb and scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives (e.g., get_session for single session, create_session for creation). The description lacks context about selection criteria or exclusion cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
reject_friend_request (A, Destructive)
Reject an incoming friend request from the given user.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | Argo user ID of the counterparty. |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| status | Yes | |
| senderId | Yes | |
| createdAt | No | |
| updatedAt | No | |
| receiverId | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description's 'reject' aligns with annotations (destructiveHint: true), but it does not add behavioral context beyond the annotation. No mention of consequences (e.g., permanent removal, notification) or side effects. With annotations present, this is adequate but not enriching.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, concise sentence with no redundant words. Front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, annotations, and full schema coverage, the description is largely complete. However, it could briefly address the outcome or side effects to fully inform the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (userId described as 'Argo user ID of the counterparty'). The description adds no additional parameter meaning beyond the schema, so baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Reject'), the resource ('incoming friend request'), and the target ('from the given user'), distinguishing it from sibling tools like accept_friend_request and cancel_friend_request.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives (e.g., accept_friend_request, cancel_friend_request). The description fails to provide context or exclusions, leaving the agent to infer usage solely from the tool name.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remove_co_gm (A, Destructive)
Remove a co-GM from a campaign. Owner-only or self-removal.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | User ID of the co-GM to remove. | |
| campaignId | Yes | Campaign ID. |
Output Schema
| Name | Required | Description |
|---|---|---|
| userId | Yes | |
| success | Yes | |
| campaignId | Yes |
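The "Owner-only or self-removal" rule in the description amounts to a simple access predicate. This is an illustrative pre-flight check an agent might run before calling the tool; the parameter names (`caller_id`, `owner_id`) are assumptions, not part of the API.

```python
# A caller may remove a co-GM only if they own the campaign, or if they are
# removing themselves. Mirrors the "Owner-only or self-removal" rule.
def may_remove_co_gm(caller_id, owner_id, target_user_id):
    return caller_id == owner_id or caller_id == target_user_id
```

The server enforces this rule regardless; the check only saves a doomed round trip.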
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide destructiveHint=true, and the description adds behavioral context about access control (owner-only or self-removal). It does not contradict annotations and provides useful additional information.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two short sentences that immediately convey the action and restrictions. Every word earns its place, with no unnecessary filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (2 params, output schema present), the description covers purpose and usage adequately. It might miss post-removal details, but for a straightforward removal action, it is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%—both parameters have clear descriptions. The tool description does not add extra meaning beyond the schema, so a baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Remove a co-GM from a campaign' with specific verb and resource. It also distinguishes from siblings like add_co_gm and list_co_gms by focusing on removal. The owner-only or self-removal clarification adds precision.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies that only the campaign owner or the co-GM themselves can use this tool, providing clear context. However, it does not explicitly mention when not to use it or name alternative tools like remove_guild_member.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remove_guild_member (A, Destructive)
Remove a member from the guild. Owner/Admin only.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | User ID of the member to remove. | |
| guildId | Yes | Guild ID. |
Output Schema
| Name | Required | Description |
|---|---|---|
| userId | Yes | |
| guildId | Yes | |
| success | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true, and the description adds the role restriction (Owner/Admin only) beyond what annotations provide, without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single front-loaded sentence with no wasted words, efficiently conveying purpose and usage constraint.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple removal action with an output schema, the description covers purpose and access control completely.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both parameters. The description adds no additional parameter detail beyond the schema, meeting baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Remove a member from the guild' with the verb 'Remove' and resource 'member', clearly distinguishing it from sibling tools like 'invite_guild_member'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It specifies 'Owner/Admin only', providing clear prerequisite for use. While it doesn't explicitly mention when not to use or alternatives, the role restriction gives sufficient context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
send_friend_request (B)
Send a friend request to another Argo user.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | Argo user ID of the counterparty. |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| status | Yes | |
| senderId | Yes | |
| createdAt | No | |
| updatedAt | No | |
| receiverId | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate a modifying operation (readOnlyHint=false), but the description does not disclose side effects, permissions needed, or what happens after sending (e.g., creates a pending request). Minimal added value beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no extraneous words. Front-loaded with key action and target.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 param, output schema exists), the description adequately covers the core functionality. No need to explain return values due to output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of the single parameter 'userId' with a clear description. The tool description adds no extra meaning beyond the schema, so baseline score is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Send' and resource 'friend request' with target 'another Argo user'. It clearly distinguishes from sibling tools like 'accept_friend_request' or 'cancel_friend_request'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives (e.g., cancel_friend_request, accept_friend_request). No prerequisites or conditions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_guild_member_role (A)
Change a guild member's role to Owner, Admin, or Member. Owner/Admin only. Note that promoting another user to Owner transfers the guild — confirm with the user first.
| Name | Required | Description | Default |
|---|---|---|---|
| role | Yes | New role. | |
| userId | Yes | Member to change. | |
| guildId | Yes | Guild ID. |
Output Schema
| Name | Required | Description |
|---|---|---|
| role | Yes | |
| userId | Yes | |
| guildId | Yes | |
| success | Yes |
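The description's warning (promoting to Owner transfers the guild, so confirm first) suggests a guard before building the call. This sketch is hypothetical: the role values come from the description, but the `confirm_transfer` flag is an illustrative client-side convention, not a tool parameter.

```python
VALID_ROLES = {"Owner", "Admin", "Member"}

# Guard sketch: refuse to build an Owner-promotion call unless the user has
# explicitly confirmed, since that promotion transfers guild ownership.
def build_set_role_args(guild_id, user_id, role, confirm_transfer=False):
    if role not in VALID_ROLES:
        raise ValueError(f"role must be one of {sorted(VALID_ROLES)}")
    if role == "Owner" and not confirm_transfer:
        raise ValueError("promoting to Owner transfers the guild; confirm with the user first")
    return {"guildId": guild_id, "userId": user_id, "role": role}
```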
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (destructiveHint false), the description reveals critical behavior: promoting to Owner transfers the guild. This adds important context about permissions and consequences that are effectively irreversible.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no fluff. The first sentence states the purpose and options, the second provides a critical usage note. Every sentence is essential.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (3 required params, simple enum, output schema exists), the description is complete: it covers purpose, permissions, and a behavioral warning.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers all parameters with descriptions (100% coverage). The description adds value by explaining the role enum's implications (ownership transfer) and the permission requirement, improving semantics beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Change' and resource 'guild member role', and clearly lists the possible roles. It distinguishes from siblings like `remove_guild_member` or `invite_guild_member` by focusing on role modification.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Owner/Admin only' and warns about transferring ownership, providing clear context. However, it does not explicitly mention alternatives when not to use this tool (e.g., use `remove_guild_member` to remove a member).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_archive_mnemons (B)
Update typed/meta fields of Archive mnemons.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | ||
| campaignId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate a mutation (readOnlyHint=false, destructiveHint=false). The description says 'Update' but gives no details on merge-versus-replace behavior, authorization requirements, or side effects beyond the schema's maxItems constraint. This is insufficient disclosure for a mutation tool whose annotations convey so little.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with purpose. No unnecessary words, though it could add more value without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has moderate complexity with two required params and an output schema. Description covers basic purpose but lacks usage guidelines and behavioral details. Output schema likely explains returns, but gaps in behavioral and parameter semantics remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. The description mentions 'typed/meta fields' but does not explain the parameters (items, campaignId) or their semantics beyond what the schema shows. Fails to compensate for the coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Update' and the resource 'Archive mnemons' with scope 'typed/meta fields'. This distinguishes it from sibling tools that update other mnemon types (e.g., custom, journal).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like update_custom_mnemons or create_archive_mnemons. Missing context on prerequisites or scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_campaign (A)
Update a campaign's display name and/or description. Both fields optional — only supplied fields are changed; pass an empty string to clear the description. GMs and co-GMs can call this; rule-system swaps remain WebApp-only.
| Name | Required | Description | Default |
|---|---|---|---|
| campaignId | Yes | ID of the campaign to update. | |
| campaignName | No | New display name. Omit to leave unchanged. | |
| campaignDescription | No | New description (setting, tone, premise). Omit to leave unchanged. Pass an empty string to clear the existing description. |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| ruleSystem | No | |
| campaignName | Yes | |
| gameMasterId | Yes | |
| gameSystemSlug | No | |
| coGameMasterIds | No | |
| campaignDescription | No |
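The partial-update semantics described above (omitted fields are left unchanged; an explicit empty string clears the description) require distinguishing "not supplied" from `""`. A sentinel makes that distinction explicit; the helper name and sentinel are illustrative, while the argument keys match the parameter table.

```python
_UNSET = object()  # sentinel: distinguishes "not supplied" from empty string

# Sketch of the partial-update semantics: omitted fields are left unchanged,
# while an explicit empty string clears the existing description.
def build_update_campaign_args(campaign_id, name=_UNSET, description=_UNSET):
    args = {"campaignId": campaign_id}
    if name is not _UNSET:
        args["campaignName"] = name
    if description is not _UNSET:
        # "" is sent deliberately: the server clears the description
        args["campaignDescription"] = description
    return args
```

Using `None` as the "leave unchanged" default would make it impossible to clear the description intentionally, which is why the sentinel is worth the extra line.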
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate it's a write operation (readOnlyHint=false) and not destructive. The description adds the partial update behavior and clarification about clearing description via empty string. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences: first states purpose, second explains optionality and update behavior, third adds usage restrictions. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple update of two optional fields and presence of an output schema, the description covers all necessary aspects: what updates, how to update partially, permissions, and limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with descriptions. The description reinforces that both fields are optional and clarifies the effect of missing vs empty values, adding value beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool updates a campaign's display name and/or description. It uses a specific verb ('update') and resource ('campaign'), and differentiates from siblings like create_campaign or get_campaign.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear instructions: both fields are optional, only supplied fields change, empty string clears description. Also specifies who can call (GMs and co-GMs) and what is not possible (rule-system swaps remain WebApp-only). Could explicitly name alternatives but is very helpful.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_custom_mnemonsCInspect
Update typed/meta fields of Custom-typed mnemons.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | ||
| campaignId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations show readOnlyHint=false and destructiveHint=false, consistent with update behavior. However, description provides no additional behavioral context such as whether fields are overwritten or merged, if permissions are needed, or any side effects. Minimal disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is concise but overly sparse. It does not use structure such as bullet points or sections to front-load important details. Every word is necessary, but more could be added without sacrificing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (array of objects with multiple fields) and many similar siblings, the description is incomplete. No mention of how updates apply (replace vs merge), what the output schema contains, or constraints like maxItems. Annotations are present but minimal.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It only mentions 'typed/meta fields' but does not explain any of the parameters (campaignId, items, entryId, tags, title, customType, visibility). Agents receive no semantic help beyond parameter names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Update typed/meta fields of Custom-typed mnemons,' specifying the verb (update), resource (Custom-typed mnemons), and scope (typed/meta fields). This distinguishes it from sibling update tools for other mnemon types and the generic update_mnemons_content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like update_mnemons_content or other type-specific updates. The description fails to mention prerequisites, context, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_journal_mnemonsCInspect
Update typed/meta fields of Journal mnemons.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | ||
| campaignId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate a non-destructive write operation. The description adds no further behavioral context, such as the ability to update multiple entries in one call (maxItems 50) or any preconditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (6 words), achieving brevity but at the expense of completeness. It omits important context about the tool's capabilities and parameters, making it borderline insufficient for a tool of this complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with a complex input schema (10+ subfields) and multiple sibling update tools, the description is too sparse. It fails to clarify the purpose relative to other journal or update tools and does not mention key behavioral aspects like batch updating up to 50 items.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate by explaining the parameters. It merely mentions 'typed/meta fields' without detailing which parameters are available or what they mean. Many parameters (e.g., consequenceEntryIds, involvedNpcEntryIds) are non-obvious and left unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (update), the target resource (Journal mnemons), and the scope (typed/meta fields). It effectively distinguishes this tool from sibling tools like update_custom_mnemons or update_mnemons_content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no usage guidelines. It does not explain when to use this tool versus other update mnemon tools (e.g., update_custom_mnemons, update_archive_mnemons). The context of sibling tools and the name imply it is for Journal mnemons, but the description should explicitly state that.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_location_mnemonsCInspect
Update typed/meta fields of Location mnemons.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | ||
| campaignId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate non-read-only and non-destructive, which is consistent. However, the description adds no additional behavioral context such as overwrite vs merge behavior, permissions, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise with one sentence, but it lacks substance. Front-loading is good, but the sentence does not earn its place as it provides minimal information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (array of objects with multiple fields) and lack of output schema, the description is incomplete. It does not mention return values, constraints (like maxItems 50), or update behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description does not explain any parameters (campaignId, items, or the fields within items like tags, title, levelId, visibility). The description fails to add meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it updates typed/meta fields of Location mnemons, distinguishing it from create and content update tools. However, 'typed/meta fields' is somewhat vague without further explanation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus siblings like update_custom_mnemons or update_mnemons_content. The description does not provide any context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_lore_mnemonsBInspect
Update typed/meta fields of Lore mnemons.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | ||
| campaignId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already indicate the tool is not read-only, not destructive, and not open-world. The description adds minimal behavioral context beyond stating it updates fields, which is consistent with the annotations but does not enrich the agent's understanding of side effects or constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that front-loads the action and target. No wasted words, but it could benefit from additional context without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 2 top-level parameters (with complex nested structure in 'items') and an output schema, the description is too minimal. It omits any reference to output, update semantics, or error conditions, leaving the agent with insufficient information for reliable invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage on top-level parameters, and the tool description does not explain any parameter meanings or usage. The property names in the schema are somewhat self-explanatory, but the description fails to add needed clarity, such as the structure of 'items' or constraints on fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Update typed/meta fields of Lore mnemons' uses a specific verb ('Update') and resource ('Lore mnemons'), and the sibling tools like 'update_archive_mnemons' and 'update_custom_mnemons' clearly differentiate by type, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'update_mnemons_content' or other typed update tools. There is no mention of prerequisites, scenarios, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_mnemons_contentAInspect
Edit the content blocks of one or more mnemon entries. Each item carries an entryId and an ordered list of ops (append, insertAfter, replace, remove) applied atomically per entry. Block addressing: get block ids from get_mnemon, then target them in replace/remove/insertAfter. New blocks (append, insertAfter, replace) get fresh server-generated UUIDs. Text in block 'text' is HTML — use the supported inline HTML tags; do NOT use Markdown like `**bold**` or `# heading`. Use blockType for paragraph/heading1/heading2/bullet_list/numbered_list/todo/quote/code/callout/divider/image. Inline images are uploaded to the campaign asset bucket and their src is rewritten to an asset: URI. SSRF-blocked / oversize / failed fetches are stripped with a warning. On a bad op (missing blockId, unknown blockType, etc.) the whole entry's batch is rejected with the failedOpIndex; no partial mutation per entry.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | ||
| campaignId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes |
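The per-entry atomicity and failedOpIndex rejection described above can be sketched as follows. The block structure and helper names are assumptions; real new-block ids would be server-generated UUIDs rather than the placeholders used here:

```python
# Hypothetical sketch of the op batch: a single bad op (e.g. a missing blockId)
# rejects the entire entry's batch with its failedOpIndex, and the entry's
# blocks are left unmodified. Names are illustrative, not the real API.
VALID_OPS = {"append", "insertAfter", "replace", "remove"}

def apply_entry_ops(blocks: list, ops: list) -> dict:
    staged = list(blocks)  # work on a copy so a rejection leaves blocks untouched
    for i, op in enumerate(ops):
        if op.get("op") not in VALID_OPS:
            return {"ok": False, "failedOpIndex": i}
        if op["op"] == "append":
            staged.append({"id": f"new-{i}", "text": op.get("text", "")})
        else:  # replace / remove / insertAfter all address an existing block
            ids = [b["id"] for b in staged]
            if op.get("blockId") not in ids:
                return {"ok": False, "failedOpIndex": i}
            idx = ids.index(op["blockId"])
            if op["op"] == "remove":
                staged.pop(idx)
            elif op["op"] == "replace":
                staged[idx] = {"id": f"new-{i}", "text": op.get("text", "")}
            else:  # insertAfter
                staged.insert(idx + 1, {"id": f"new-{i}", "text": op.get("text", "")})
    return {"ok": True, "blocks": staged}
```

Staging the edits on a copy is one simple way to get the "no partial mutation per entry" guarantee the description promises.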
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses many behavioral traits beyond the minimal annotations: atomicity per entry, block addressing, server-generated UUIDs for new blocks, HTML constraints with allowed tags, image upload behavior with asset rewriting, SSRF blocking, and error handling with rejection on bad ops. This fully compensates for the lack of detailed annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is dense but every sentence adds value. It front-loads the main purpose, then details block operations, HTML rules, image handling, and error behavior. No extraneous information; it is well-structured and concise for the complexity involved.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (multiple ops, atomicity, error handling), the description covers all necessary aspects: how to address blocks, op types, HTML rules, image upload details, and error rejection. An output schema exists (not shown), so return values are not required. The description is fully sufficient for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the input schema has descriptions for many fields (e.g., op, blockType, afterBlockId), the description adds significant meaning by explaining the op types (append, insertAfter, replace, remove) and their purposes, block addressing, and the atomic batch behavior. This goes beyond the schema's basic descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool edits content blocks of mnemon entries. The verb 'Edit' and resource 'content blocks' are specific, and it distinguishes from siblings like get_mnemon (which retrieves blocks) and other update tools that handle metadata.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly indicates usage for modifying content blocks, but does not explicitly state when not to use it or compare to alternatives like update_custom_mnemons. However, it provides clear context by referencing block IDs from get_mnemon and describing the ops, making the intended use apparent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_npc_mnemonsAInspect
Update typed/meta fields of NPC mnemons (visibility, tags, npcType, faction membership, etc.). Does NOT modify content blocks — use update_mnemons_content for that. Set visibility=PUBLIC on multiple NPCs in a single call by listing them in items[].
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | ||
| campaignId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes |
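The batch tip above ("set visibility=PUBLIC on multiple NPCs in a single call by listing them in items[]") can be sketched as a hypothetical payload builder; the entryId field name is an assumption borrowed from the sibling tools' schemas:

```python
# Hypothetical request payload for publishing several NPC mnemons at once.
# "entryId" and the payload shape are assumed, not confirmed by the schema above.
def make_publish_request(campaign_id: str, entry_ids: list) -> dict:
    return {
        "campaignId": campaign_id,
        "items": [{"entryId": eid, "visibility": "PUBLIC"} for eid in entry_ids],
    }
```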
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that it updates meta fields and does not modify content, adding context beyond the annotations. It does not elaborate on other behavioral aspects such as permissions, side effects, or validation; however, the annotations already set destructiveHint=false and readOnlyHint=false, so the description adds value without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three concise sentences: first states the main action, second clarifies exclusion, third provides a usage tip. No redundant words, front-loaded with the primary purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (multiple fields, batch capability, sibling tools), the description covers the essential purpose, usage constraints, and a key usage example. It does not explain return values, but an output schema exists. The description is complete enough for an agent to select and invoke correctly with minimal ambiguity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, so the description partially compensates by listing example fields (visibility, tags, npcType) and explaining the items array for batch updates. However, many parameters like title, sheetId, and relationshipIds are not mentioned, leaving gaps in understanding. The description provides enough context for common use but not full semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it updates typed/meta fields of NPC mnemons, lists example fields, and distinguishes from the sibling tool update_mnemons_content by explicitly stating it does not modify content blocks. The verb 'Update' and resource 'NPC mnemons' are specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use (update meta fields) and when not to use (for content blocks, use update_mnemons_content). It also provides a usage hint about setting visibility=PUBLIC on multiple NPCs, guiding effective invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_player_mnemonsCInspect
Update typed/meta fields of Player mnemons.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | ||
| campaignId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate mutation (readOnlyHint=false). The description adds 'Update' but no additional behavioral details beyond the annotations, such as required permissions, field constraints, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, concise but overly sparse. It is front-loaded but sacrifices informativeness for brevity; it could usefully expand on field purposes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having an output schema, the description omits parameter semantics, constraints, and usage context. Annotations are minimal, and sibling tools provide only indirect context. The description leaves significant gaps for a mutation tool with 2 required params and many optional fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, yet the description provides no explanation of any parameters. It mentions 'typed/meta fields' without linking to the actual parameters (tags, title, etc.), leaving agents without semantic understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Update' and the resource 'Player mnemons', with a scope of 'typed/meta fields'. It distinguishes from create and content-update siblings, though 'typed/meta fields' is somewhat vague.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like update_mnemons_content or other update tools. Lacks context on prerequisites or when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_quest_mnemonsAInspect
Update typed/meta fields of Quest mnemons (status transitions, expiry, related entries).
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | ||
| campaignId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=false (mutation) and destructiveHint=false. The description adds the types of fields updated but no extra behavioral details like authentication requirements, rate limits, or side effects beyond what annotations convey.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that immediately states the action and scope. It is front-loaded and avoids unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description covers the basic purpose, the tool has a complex input schema with many nested fields in items. No guidance on return values (output schema exists but not described) or edge cases. For a complex update operation, more context would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage, meaning no inline parameter explanations. The description partially compensates by mentioning 'status transitions, expiry, related entries' which hints at the purpose of items subfields. However, it does not elaborate on campaignId or the full items structure, leaving ambiguity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool updates 'typed/meta fields of Quest mnemons' with specific examples like status transitions, expiry, and related entries. This distinguishes it from sibling create/update tools for other mnemon types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like update_custom_mnemons or update_npc_mnemons. The context is implied by the 'quest' qualifier, but no when-not-to-use or exclusion criteria are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_sessionAInspect
Reschedule a campaign session or edit its title/description. All fields optional. Owner-only on the backend.
| Name | Required | Description | Default |
|---|---|---|---|
| endAt | No | ISO-8601 instant. | |
| title | No | | |
| startAt | No | ISO-8601 instant. | |
| sessionId | Yes | Session ID. | |
| campaignId | Yes | Campaign ID. | |
| description | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | |
| endAt | No | |
| title | Yes | |
| guildId | No | |
| startAt | Yes | |
| createdAt | No | |
| updatedAt | No | |
| campaignId | Yes | |
| description | No | |
| invitedUserIds | No | |
| createdByUserId | No | |
| invitedPartyIds | No | |
| attendanceReplies | No |
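Since all fields are optional and the timestamps are ISO-8601 instants, a reschedule call only needs the two ids plus the new times. The helper below is a hypothetical sketch of building such a payload; the function name and duration parameter are illustrative:

```python
from datetime import datetime, timedelta

# Hypothetical sketch: build a partial-update payload that only reschedules,
# leaving title/description untouched by omitting them entirely.
def make_reschedule_payload(campaign_id: str, session_id: str,
                            start: datetime, hours: float) -> dict:
    end = start + timedelta(hours=hours)
    return {
        "campaignId": campaign_id,
        "sessionId": session_id,
        # ISO-8601 instants, as the startAt/endAt schema descriptions require.
        "startAt": start.isoformat().replace("+00:00", "Z"),
        "endAt": end.isoformat().replace("+00:00", "Z"),
    }
```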
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate a non-read-only, non-destructive mutation. The description adds context: owner-only restriction and all fields optional, which clarifies partial update behavior. It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, highly concise, front-loaded with purpose. Every word adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the description covers the core actions and restrictions. It could mention session existence or field validation, but overall it is sufficient for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 67%, with title and description lacking descriptions. The description explicitly connects 'edit its title/description' to those parameters, and 'reschedule' to startAt/endAt, adding meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool does two things: reschedule a campaign session or edit its title/description. It uses specific verbs and objects, differentiating it from siblings like create_session or get_session.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'Owner-only on the backend' and 'All fields optional', offering some usage constraints but no explicit comparison to alternatives or when to use this tool versus others.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_session_summary_mnemonsCInspect
Update typed/meta fields of SessionSummary mnemons.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | ||
| campaignId | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate a non-read-only, non-destructive mutation, consistent with 'update'. However, the description does not clarify whether updates are partial or full replacements, how errors are reported, or whether calls are idempotent, all of which are critical details for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (8 words), which is concise, but it lacks structure. It does not use front-loading effectively and omits key details that would make it more useful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex input schema with many optional fields and a batch update pattern (maxItems 50), the description is insufficient. It does not mention the batch nature, limits, or what 'typed/meta fields' encompass, leaving significant gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description provides no explanation of the parameters (campaignId, items, or the various fields within items). The description adds no value beyond the schema, leaving the agent to guess field purposes.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (update) and resource (typed/meta fields of SessionSummary mnemons). It distinguishes from sibling tools like update_mnemons_content and update_session, but 'typed/meta fields' remains slightly vague.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The description does not mention prerequisites or conditions for updating session summary mnemons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
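To illustrate the documentation gap the review describes, here is a hypothetical payload for update_session_summary_mnemons. Only `campaignId` and `items` appear in the published schema; the per-item fields (`mnemonId`, `summary`) are invented placeholders, since the schema leaves them undocumented.

```python
# Hypothetical payload for update_session_summary_mnemons. Only
# "campaignId" and "items" come from the published schema; the per-item
# fields below (mnemonId, summary) are invented placeholders.
payload = {
    "campaignId": "camp_123",
    "items": [
        {"mnemonId": "mn_001", "summary": "Session 12 recap"},
    ],
}

# The review notes the input schema caps items at 50 per call.
assert len(payload["items"]) <= 50
```

A description that spelled out this batch shape and the 50-item cap would close most of the gaps scored above.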
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
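Before publishing, you can sanity-check your manifest locally. The sketch below is only an illustration of the matching rule described above (maintainer email equals account email); Glama's actual verifier may apply additional checks.

```python
import json

def manifest_matches_account(raw: str, account_email: str) -> bool:
    """Return True if any listed maintainer email matches the account email.

    A local sanity check only; Glama's real verification may differ.
    """
    data = json.loads(raw)
    return any(
        m.get("email") == account_email
        for m in data.get("maintainers", [])
    )

manifest = json.dumps({
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
})
manifest_matches_account(manifest, "your-email@example.com")  # True
```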
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.