STAS Running Coach for Claude
Server Details
Connect Claude to your Intervals.icu watch data for fitness, workout review, and plan writing.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.4/5 across 9 of 9 tools scored.
Each tool has a clearly distinct purpose with no overlap: create_note_event and create_plan_event handle different event types, delete tools target specific deletions, get tools retrieve different data sets, and save_strategy/whoami serve unique functions. The descriptions reinforce these distinctions, making misselection unlikely.
All tools follow a consistent verb_noun pattern (e.g., create_note_event, get_planned_events, save_strategy) with snake_case throughout. The naming is predictable and readable, with no deviations in style or convention across the set.
With 9 tools, the count is well-scoped for a STAS training/planning server. Each tool earns its place by covering distinct aspects like event management, data retrieval, and user operations, avoiding bloat while supporting core workflows effectively.
The toolset provides strong coverage for STAS's domain, including CRUD operations for notes and plans, data retrieval for events and trainings, user context, and strategy saving. A minor gap exists in updating events beyond creation/deletion, but agents can work around this by recreating events as needed.
Available Tools
9 tools

create_note_event (Grade: B)
Create or update note events in Intervals. Use for STAS notes, plan explanations, or weekly review notes. Use only with external_id values starting with note:.
| Name | Required | Description | Default |
|---|---|---|---|
| events | Yes | | |
| dry_run | No | | |
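Given the constraint above, a create_note_event call might look like the following sketch. Only the `note:` prefix rule on `external_id` comes from the tool description; every other event field shown is an assumption, since the schema documents none of them.

```python
# Hypothetical create_note_event payload. Only the "note:" prefix rule on
# external_id comes from the tool description; the other event fields
# (name, description) are assumptions, since the schema leaves them blank.
payload = {
    "events": [
        {
            "external_id": "note:2025-w10-review",  # must start with "note:"
            "name": "Week 10 review",               # assumed field
            "description": "Aerobic volume on target; hold easy-run pace.",
        }
    ],
    "dry_run": True,  # assumed to preview the write without committing it
}

# Client-side guard mirroring the one documented constraint.
assert all(e["external_id"].startswith("note:") for e in payload["events"])
```

Because "Create or update" semantics are undocumented, an agent cannot tell from the description whether resending the same `external_id` overwrites or duplicates; `dry_run` is presumably the safe way to find out.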
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool can 'Create or update' (implying mutation) and has a specific constraint on external_id format, but doesn't cover important behavioral aspects like authentication needs, rate limits, error handling, or what happens during updates versus creates. For a mutation tool with zero annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences that are front-loaded with the core purpose. Every sentence earns its place: the first establishes the tool's function and use cases, the second provides a critical constraint. There's zero waste or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (mutation tool with array input and nested objects), no annotations, no output schema, and low schema coverage, the description is incomplete. It lacks details on behavioral traits, most parameter meanings, return values, and error conditions. While it provides some usage context, it doesn't adequately cover what an agent needs to invoke this tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 2 parameters (one required array 'events' and one optional boolean 'dry_run'), the description adds minimal parameter semantics. It only mentions the 'external_id' parameter constraint ('starting with note:'), leaving the other parameters (including nested ones like name, dates, description) and the purpose of 'dry_run' completely unexplained. The description doesn't compensate for the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Create or update') and resource ('note events in Intervals'), making the purpose specific. It distinguishes from siblings by specifying 'note events' rather than 'plan events' or other types, though it doesn't explicitly contrast with all siblings like 'delete_note_events'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool: 'for STAS notes, plan explanations, or weekly review notes' and 'only with external_id values starting with note:'. This gives specific application scenarios and a constraint, though it doesn't explicitly mention when NOT to use it or name alternatives like 'create_plan_event'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_plan_event (Grade: B)
Create or update workout events in Intervals. Use after reading the current plan and only for STAS plan events with external_id values starting with plan:.
| Name | Required | Description | Default |
|---|---|---|---|
| events | Yes | | |
| dry_run | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'Create or update' which implies mutation, but doesn't specify permissions, side effects, error handling, or response format. For a mutation tool with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and efficient, with two sentences that convey the key information without waste. It could be slightly more structured by separating usage guidance from constraints, but overall it is appropriately sized and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (mutation with 2 parameters, no output schema, and no annotations), the description is incomplete. It lacks details on behavioral traits, full parameter explanations, and output expectations, making it inadequate for safe and effective use by an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It only mentions 'external_id' constraints ('starting with plan:') and implies 'events' usage, but doesn't explain other parameters like 'dry_run', 'name', dates, or 'color'. This adds minimal value beyond the schema, failing to adequately cover the parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Create or update') and resource ('workout events in Intervals'), making the purpose specific. However, it doesn't explicitly differentiate from sibling tools like 'create_note_event' or 'delete_plan_events' beyond mentioning 'STAS plan events', leaving some ambiguity about sibling distinctions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: 'after reading the current plan' and 'only for STAS plan events with external_id values starting with plan:'. This gives explicit prerequisites and constraints, though it doesn't mention alternatives like 'create_note_event' or when not to use it, which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_note_events (Grade: B)
Delete STAS note events in a date window before writing replacement notes.
| Name | Required | Description | Default |
|---|---|---|---|
| newest | Yes | | |
| oldest | Yes | | |
| dry_run | No | | |
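A delete_note_events call under the date-window reading above might look like this sketch. The YYYY-MM-DD format follows the schema pattern the review mentions; whether the window bounds are inclusive is not documented anywhere.

```python
from datetime import date

# Hypothetical delete_note_events arguments. YYYY-MM-DD follows the schema
# pattern noted in the review; bound inclusivity is undocumented.
args = {
    "oldest": "2025-03-03",
    "newest": "2025-03-09",
    "dry_run": True,  # assumed to report what would be deleted, without deleting
}

# Client-side sanity check before a destructive call: reject inverted windows.
assert date.fromisoformat(args["oldest"]) <= date.fromisoformat(args["newest"])
```

For a destructive tool with no stated reversibility, running once with `dry_run` before the real call is the prudent pattern.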
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the destructive nature ('Delete') and the date-window constraint, but doesn't mention permissions needed, rate limits, whether deletions are permanent, or what happens if no events exist. For a destructive tool with zero annotation coverage, this is a significant gap in behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It front-loads the core action ('Delete STAS note events') and adds necessary context ('in a date window before writing replacement notes'), making it appropriately sized and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (destructive operation with 3 parameters), lack of annotations, and no output schema, the description is incomplete. It doesn't cover parameter details, behavioral traits like safety or permissions, or return values, leaving critical gaps for an AI agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions 'date window' which relates to 'oldest' and 'newest' parameters, but doesn't explain their format (YYYY-MM-DD as per schema pattern) or meaning (inclusive/exclusive bounds). It also omits the optional 'dry_run' parameter entirely, failing to add sufficient meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Delete') and resource ('STAS note events'), and specifies the scope ('in a date window before writing replacement notes'). It distinguishes from siblings like 'delete_plan_events' by specifying 'note events' rather than 'plan events'. However, it doesn't explicitly contrast with 'create_note_event' or other tools, keeping it from a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('before writing replacement notes'), suggesting this tool is for cleanup or preparation. However, it lacks explicit guidance on when to use this versus alternatives like 'delete_plan_events' or 'create_note_event', and doesn't mention prerequisites or exclusions, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_plan_events (Grade: A)
Delete STAS plan events in a date window before writing a replacement plan, so old workouts do not remain in the calendar.
| Name | Required | Description | Default |
|---|---|---|---|
| newest | Yes | | |
| oldest | Yes | | |
| dry_run | No | | |
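The replace-a-plan workflow these descriptions imply (read the current plan, clear the window, write the replacement) can be sketched as follows. The `call_tool` function is a hypothetical stand-in for whatever MCP client invocation an agent framework provides; it is stubbed here so only the sequencing is shown.

```python
# Sketch of the replace-a-plan workflow the tool descriptions imply.
# call_tool is a hypothetical stand-in for an MCP client call, stubbed
# so the end-to-end ordering can be demonstrated.
calls = []

def call_tool(name, args):
    calls.append(name)  # record the order of tool invocations
    return {"ok": True}

window = {"oldest": "2025-04-07", "newest": "2025-04-13"}

# 1. Read the current plan first, as create_plan_event's description asks.
call_tool("get_planned_events", window)
# 2. Clear the window so old workouts do not remain in the calendar.
call_tool("delete_plan_events", {**window, "dry_run": False})
# 3. Write the replacement; external_id must start with "plan:".
call_tool("create_plan_event", {
    "events": [{"external_id": "plan:2025-04-08-tempo", "name": "Tempo 40min"}],
})

assert calls == ["get_planned_events", "delete_plan_events", "create_plan_event"]
```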
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly indicates this is a destructive operation ('Delete') and mentions the purpose of preventing old workouts from remaining. However, it doesn't specify authentication requirements, rate limits, error conditions, or what happens if no events exist in the date range. The mention of 'dry_run' parameter in the schema suggests a testing capability that isn't explained in the description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and follows with the usage context. Every word serves a purpose with no redundancy or unnecessary elaboration. It's appropriately sized for a tool with a clear, focused function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive operation with no annotations and no output schema, the description provides adequate purpose and usage context but lacks important behavioral details. It doesn't explain what constitutes success/failure, whether deletions are permanent or reversible, or what the tool returns. The parameter semantics gap is significant given the 0% schema coverage. The description is complete enough for basic understanding but insufficient for confident implementation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, so the description must compensate. While it mentions 'date window' which relates to 'oldest' and 'newest' parameters, it provides no details about date format, timezone handling, or boundary inclusion. The 'dry_run' parameter isn't mentioned at all in the description, leaving its purpose undocumented. The description adds minimal semantic value beyond what's implied by parameter names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Delete STAS plan events'), identifies the resource ('plan events'), and specifies the scope ('in a date window'). It distinguishes from siblings like 'delete_note_events' by focusing on plan events rather than note events, and from 'create_plan_event' by being a deletion rather than creation operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'before writing a replacement plan, so old workouts do not remain in the calendar.' This provides clear context about the intended workflow and distinguishes it from other tools like 'get_planned_events' (for reading) or 'create_plan_event' (for adding new events).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_planned_events (Grade: B)
Read planned events from the Intervals calendar in a date window. Use this before rewriting or replacing an existing STAS plan.
| Name | Required | Description | Default |
|---|---|---|---|
| newest | Yes | | |
| oldest | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a read operation and implies it's for preparatory use, but doesn't disclose important behavioral traits like authentication requirements, rate limits, pagination, error conditions, or what format the events are returned in. The description adds minimal behavioral context beyond the basic operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (two sentences) and front-loaded with the core purpose. The second sentence provides contextual guidance without unnecessary elaboration. However, the first sentence could be slightly more precise (e.g., specifying 'list' instead of 'read' for clarity).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read operation with 2 parameters, no annotations, and no output schema, the description is incomplete. It doesn't explain what data is returned, how events are structured, whether there are limitations on the date range, or authentication requirements. The preparatory context hint is helpful but insufficient for full understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for the 2 parameters, the description doesn't add any parameter-specific information beyond what's implied by 'date window'. It doesn't explain what 'oldest' and 'newest' represent, their format requirements (YYYY-MM-DD), or whether the window is inclusive/exclusive. The baseline is 3 since schema coverage is low, but the description doesn't adequately compensate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Read planned events') and resource ('from the Intervals calendar'), specifying the scope ('in a date window'). It distinguishes from some siblings like create/delete operations but doesn't explicitly differentiate from other read tools like get_trainings or get_user_summary.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some guidance ('Use this before rewriting or replacing an existing STAS plan'), which implies a preparatory context. However, it doesn't explicitly state when to use this tool versus alternatives like get_trainings or get_user_summary, nor does it provide exclusion criteria or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_trainings (Grade: A)
Load recent workouts with pace, heart rate, sport metrics, intervals, and athlete reports. Use after the summary when analyzing load, progress, fatigue, or consistency.
| Name | Required | Description | Default |
|---|---|---|---|
| full | No | | |
| limit | No | | |
| newest | No | | |
| oldest | No | | |
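Since the schema documents none of the four parameters, a call can only be sketched from their names; every meaning in the comments below is inferred, not confirmed.

```python
# Hypothetical get_trainings arguments. None of these parameters carry
# schema descriptions, so the meanings here are inferred from the names.
args = {
    "oldest": "2025-02-01",  # assumed window start, YYYY-MM-DD
    "newest": "2025-02-28",  # assumed window end
    "limit": 20,             # assumed cap on the number of workouts returned
    "full": False,           # assumed toggle for full interval-level detail
}

# All four are optional, so an agent analyzing recent load might simply
# omit the window and rely on the limit alone:
recent_only = {"limit": 10}
```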
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It implies a read operation ('Load') but doesn't disclose behavioral traits like authentication requirements, rate limits, pagination, or error handling. The description adds some context about the data returned but lacks operational details needed for a higher score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and concise, with two sentences that efficiently convey purpose and usage guidelines. Every sentence adds value without redundancy, making it appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, 0% schema coverage, and no output schema, the description is incomplete. It covers purpose and usage well but lacks parameter details and behavioral transparency. For a tool with 4 parameters and no structured support, this is a moderate gap, making it adequate but with clear deficiencies.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions no parameters at all, failing to explain the meaning or usage of 'full', 'limit', 'newest', or 'oldest'. This leaves significant gaps, scoring below the baseline of 3 for low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Load recent workouts with pace, heart rate, sport metrics, intervals, and athlete reports.' It specifies the verb ('Load') and resource ('recent workouts') with detailed content. However, it doesn't explicitly differentiate from sibling tools like 'get_planned_events' or 'get_user_summary', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage context: 'Use after the summary when analyzing load, progress, fatigue, or consistency.' This indicates when to use the tool relative to another tool ('get_user_summary'). However, it doesn't specify when NOT to use it or name alternatives among siblings, so it falls short of a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_user_summary (Grade: B)
Start here for most conversations. Load the athlete's profile, goals, rules, recent load, current fitness context, and planning guidance from STAS.
| Name | Required | Description | Default |
|---|---|---|---|
| section | No | | |
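The single optional `section` parameter is an undocumented enum, so a call can only be guessed at. The value below is a plausible member inferred from the data categories the description lists, flagged as such.

```python
# Hypothetical get_user_summary arguments. "section" is an undocumented
# enum per the review; "goals" is a guessed member, not a confirmed value.
full_summary = {}                   # no arguments: load everything
one_section = {"section": "goals"}  # assumed: restrict to one data section

assert "section" not in full_summary  # omitting it presumably returns it all
```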
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but offers minimal behavioral disclosure. It mentions loading data from 'STAS' but doesn't describe authentication requirements, rate limits, error conditions, or what happens if the athlete doesn't exist. For a data retrieval tool with zero annotation coverage, this leaves significant gaps in understanding its operational behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences that establish primary use case and scope. The first sentence clearly states the tool's role, and the second enumerates the data components loaded. There's minimal wasted verbiage, though it could be slightly more specific about the relationship between the parameter and data returned.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 1 parameter with 0% schema description coverage, no annotations, and no output schema, the description is insufficiently complete. It doesn't explain what data format is returned, how the optional 'section' parameter filters results, or what 'STAS' refers to. For a tool that appears to be a central data access point, more contextual information is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions loading multiple data sections but doesn't explain the 'section' parameter or its enum values. With 0% schema description coverage and 1 parameter documented only as an enum in the schema, the description fails to add meaningful semantic context about what each section value retrieves or how parameter usage affects the response.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool loads an athlete's comprehensive profile data including profile, goals, rules, recent load, fitness context, and planning guidance from STAS. It specifies the verb 'load' and resource 'athlete's profile data', though it doesn't explicitly distinguish from sibling tools like 'get_trainings' or 'get_planned_events' which might retrieve more specific subsets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context with 'Start here for most conversations', indicating this is the primary entry point for athlete data retrieval. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools for more targeted queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
save_strategy (Grade: C)
Save the athlete's long-term strategy so future planning uses the updated training foundation.
| Name | Required | Description | Default |
|---|---|---|---|
| strategy_md | Yes | | |
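The `_md` suffix suggests markdown content, as the review observes, though the format is never confirmed; the schema reportedly enforces only a minLength. A call might look like:

```python
# Hypothetical save_strategy payload. The "_md" suffix suggests markdown,
# as the review notes, but the schema only enforces a minimum length.
payload = {
    "strategy_md": (
        "# Long-term strategy\n"
        "- Build aerobic base through spring\n"
        "- Sharpen with tempo work in summer\n"
        "- Peak for an autumn marathon\n"
    )
}

# Mirror the schema's reported minLength constraint client-side.
assert len(payload["strategy_md"]) > 0
```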
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool saves data for future use, implying a write operation, but lacks details on permissions, side effects, error handling, or response format. This is inadequate for a mutation tool with zero annotation coverage, as critical behavioral traits are missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose. It avoids redundancy and wastes no words, though the usage context could be broken out more explicitly. Overall, it's appropriately concise for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity as a mutation operation with no annotations, no output schema, and low parameter coverage, the description is incomplete. It doesn't address behavioral aspects like side effects, return values, or error conditions, leaving significant gaps for the agent to operate safely and effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for the single undocumented parameter. It mentions 'strategy' and 'training foundation', which loosely relates to 'strategy_md', but doesn't explain the parameter's meaning, format (e.g., markdown as implied by '_md'), or constraints beyond the schema's minLength. The description adds minimal semantic value, meeting the baseline for low coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('save') and resource ('athlete's long-term strategy'), with a specific purpose ('so future planning uses the updated training foundation'). It distinguishes from siblings like create_note_event or create_plan_event by focusing on strategy rather than events. However, it doesn't explicitly contrast with all siblings, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for saving long-term strategies to influence future planning, but provides no explicit guidance on when to use this tool versus alternatives like create_plan_event or other siblings. There are no exclusions, prerequisites, or comparisons mentioned, leaving the agent to infer context without clear direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
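To make the gap concrete, here is a hypothetical MCP `tools/call` request for save_strategy, reconstructed from the parameter name discussed above ('strategy_md'); the argument name and markdown content are assumptions, not the server's published schema:

```python
import json

# Hypothetical JSON-RPC payload for the save_strategy tool.
# "strategy_md" and its markdown body are assumptions based on the
# review above, not on the server's actual input schema.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "save_strategy",
        "arguments": {
            # The "_md" suffix suggests markdown, as the review notes.
            "strategy_md": "# Long-term strategy\n\nBuild aerobic base "
                           "through winter; add threshold work in spring.",
        },
    },
}

print(json.dumps(payload, indent=2))
```

A richer description would tell the agent exactly which of these fields the server validates (e.g., the minLength on 'strategy_md') and what the call returns on success.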
whoami
Check which STAS user is currently authenticated. Use only for diagnostics or reconnect troubleshooting.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It effectively discloses the tool's purpose and usage context but lacks details on behavioral traits such as rate limits, authentication requirements, and error handling. With no annotations to contradict, the description is at least internally consistent, and the context it provides is useful for understanding the tool's diagnostic role.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and front-loaded, consisting of only two sentences that directly state the purpose and usage guidelines. Every sentence adds clear value without redundancy, making it efficient and easy to parse for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no output schema), the description is largely complete for its diagnostic purpose. It covers what the tool does and when to use it, but could benefit from additional context on output format or error cases. However, for a simple authentication check tool, this is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the absence of inputs. The description does not need to add parameter details, but it implicitly confirms this by not mentioning any inputs, aligning with the schema. A baseline of 4 is appropriate for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose: 'Check which STAS user is currently authenticated.' It uses a specific verb ('Check') and identifies the resource ('STAS user'), clearly distinguishing it from sibling tools like create_note_event or get_user_summary, which involve different operations and resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Use only for diagnostics or reconnect troubleshooting.' This clearly defines the intended context and excludes other use cases, helping the agent avoid misuse in favor of alternative tools for general user operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
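Before publishing, you can sanity-check the file's structure locally. This sketch mirrors only the fields shown in the snippet above; Glama's real verification procedure may check more, and the validation logic here is an assumption:

```python
import json

# Sample /.well-known/glama.json content, copied from the snippet above.
raw = """
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
"""

doc = json.loads(raw)

# Basic sanity checks on the documented structure: the schema URL,
# a non-empty maintainers list, and a plausible email in each entry.
assert doc["$schema"] == "https://glama.ai/mcp/schemas/connector.json"
assert isinstance(doc["maintainers"], list) and doc["maintainers"]
assert all("@" in m["email"] for m in doc["maintainers"])

print("glama.json structure looks valid")
```

Remember that passing these checks is not enough on its own: the email must also match the one on your Glama account for verification to succeed.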
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.