
Server Details

Stress recovery protocols via Lightning payments: post-concussion, post-COVID, burnout

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: reitsmasieto-wq/overcome-stress-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

5 tools
assess (A)
Read-only · Idempotent

Assess client symptoms and recommend Corpus Systemics therapy content for post-concussion, post-COVID, burnout or chronic stress.

Parameters (JSON Schema)
symptoms (required): Description of client symptoms, e.g. 'headaches, fatigue, brain fog, difficulty concentrating, anxiety after concussion or burnout'.
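For orientation, a minimal sketch of what an MCP `tools/call` request for `assess` might look like over JSON-RPC 2.0. The framing follows the MCP specification; the symptom text itself is purely illustrative.

```python
import json

# Hypothetical JSON-RPC 2.0 framing for an MCP tools/call request.
# Tool name and the `symptoms` argument come from the schema above;
# the example symptom text is illustrative, not from the server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "assess",
        "arguments": {
            "symptoms": "headaches, fatigue, brain fog after a concussion"
        },
    },
}
print(json.dumps(request, indent=2))
```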
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and idempotentHint=true, covering safety and side-effect profile. The description adds valuable domain context (the four specific conditions treated) and implies the return value (therapy recommendations), but does not specify response format, structure, or whether recommendations are persisted versus calculated on-the-fly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with zero waste. Front-loaded with action verbs ('Assess... and recommend'), every word earns its place by defining scope, domain, and target conditions without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter schema and presence of safety annotations, the description adequately covers the tool's purpose and domain. It implicitly addresses the missing output schema by stating the tool 'recommend[s]... therapy content', though it could explicitly describe the recommendation format or structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single 'symptoms' parameter, the schema carries the definitional burden. The description adds domain framing (linking symptoms to the specific conditions) but does not add parameter syntax, validation rules, or format requirements beyond the schema's example.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Assess', 'recommend') and identifies the exact resource ('Corpus Systemics therapy content'). It clearly distinguishes from transactional siblings like 'buy_skill' and 'buy_trajectory' by positioning this as a diagnostic/recommendation entry point rather than a purchase action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It specifies target conditions (post-concussion, post-COVID, burnout, chronic stress) providing implicit context for when to use the tool. However, it lacks explicit guidance on when NOT to use it or how it relates to sibling tools like 'catalog' (browse vs assess) or the 'buy' tools (assess first, then buy).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

buy_skill (A)

Purchase a single therapy skill via Bitcoin Lightning Network (25-100 satoshis). Returns a Lightning invoice for immediate payment.

Parameters (JSON Schema)
skill_id (required): Skill identifier from catalog, e.g. 'K01' for stress basics, 'K02' for autonomic nervous system regulation.
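The "JSON Schema" behind this table is not shown on the page. A plausible reconstruction from the row above, as a sketch: only the property name, requiredness, and description text are taken from the page, while the `type` and overall object shape are assumptions.

```python
import json

# Guessed reconstruction of buy_skill's input schema from the table above.
# The property name, required flag, and description come from the page;
# the string type and object wrapper are assumed.
input_schema = {
    "type": "object",
    "properties": {
        "skill_id": {
            "type": "string",
            "description": (
                "Skill identifier from catalog, e.g. 'K01' for stress basics, "
                "'K02' for autonomic nervous system regulation."
            ),
        }
    },
    "required": ["skill_id"],
}
print(json.dumps(input_schema, indent=2))
```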
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (idempotentHint: false, destructiveHint: false), the description adds substantial behavioral context: specific cost range (25-100 satoshis), payment network (Bitcoin Lightning), return type (Lightning invoice), and payment urgency (immediate). This effectively characterizes the transactional nature of the operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two information-dense sentences with zero redundancy. The first sentence captures the action, cost, and payment method; the second specifies the return value. Every word serves a purpose and critical constraints (satoshi range, Lightning Network specificity) are front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description adequately specifies the return value (Lightning invoice) and explains the payment flow. For a cryptocurrency purchasing operation, it covers essential elements (cost, mechanism, return), though it could clarify what happens to the skill content after payment confirmation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the skill_id parameter fully documented including examples (K01, K02). The description mentions 'therapy skill' which contextualizes the parameter, but adds no additional syntax guidance or validation rules beyond what the schema already provides. Baseline 3 is appropriate given complete schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (purchase), resource (single therapy skill), and mechanism (Bitcoin Lightning Network). The word 'single' effectively distinguishes this from the sibling 'buy_trajectory' tool, while contrasting with 'catalog' (listing) and 'assess' (evaluation).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies scope limitation ('single' vs presumably multi-skill trajectory), but provides no explicit guidance on when to use 'buy_trajectory' instead, or that 'catalog' should be called first to obtain valid skill_ids. Usage patterns must be inferred from naming conventions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

buy_trajectory (A)

Purchase a complete 12-week Corpus Systemics recovery trajectory via Bitcoin Lightning Network (150 satoshis). Returns a Lightning invoice.

Parameters (JSON Schema)
trajectory_id (required): Trajectory identifier from catalog, e.g. 'T1' for post-concussion recovery, 'T2' for burnout, 'T3' for post-COVID.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Strong supplemental disclosure beyond annotations: specifies payment mechanism (Bitcoin Lightning Network), exact cost (150 satoshis), duration (12-week), and return type (Lightning invoice). Annotations only indicate non-destructive/non-idempotent nature; description carries the financial transaction burden effectively.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence establishes action, resource, method, and cost; second specifies return value. Front-loaded with essential transaction details. No redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking output schema, description compensates by specifying return value (Lightning invoice). With 100% input schema coverage and clear behavioral scope, adequately complete for a payment tool. Minor gap regarding post-purchase activation flow or error states.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with excellent parameter documentation (including examples T1/T2/T3). Description mentions 'trajectory' generically but does not add syntax, format constraints, or semantic details beyond what the schema already provides. Baseline 3 appropriate given schema completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: verb 'Purchase' + resource '12-week Corpus Systemics recovery trajectory' + domain-specific details (Bitcoin Lightning Network, 150 satoshis). Clearly distinguishes from sibling 'buy_skill' by specifying trajectory-type purchase with duration and specific payment amount.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States what the tool does and returns, but lacks explicit when-to-use guidance versus siblings (e.g., when to use 'buy_trajectory' vs 'buy_skill' or whether to 'preview' first). Usage is implied by the naming and domain specificity rather than explicitly guided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

catalog (A)
Read-only · Idempotent

Browse the complete Corpus Systemics therapy content catalog. Returns available skills (25-100 sats) and 12-week recovery trajectories (150 sats) for post-concussion, post-COVID, burnout and chronic stress.

Parameters (JSON Schema)
category (optional): Filter by category: 'skills' for individual exercises or 'trajectories' for 12-week programs.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only/idempotent safety, while the description adds substantial domain context: pricing structure (25-100 sats vs 150 sats), categorical taxonomy (individual exercises vs 12-week programs), and therapeutic domains. This helps the agent understand the catalog's structure and cost logic, though it omits pagination or rate limit details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence with zero waste. Front-loaded with the action verb ('Browse'), followed by scope ('complete... catalog'), and specific return value details (pricing and content types). Every clause earns its place by conveying distinct catalog attributes.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a low-complexity tool (1 optional param, no nesting). Despite lacking a formal output schema, the description explicitly documents return values (skills/trajectories with pricing tiers), compensating adequately for the missing output specification.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds semantic value by explaining the practical distinction between 'skills' (25-100 sats) and 'trajectories' (150 sats), providing pricing context that helps the agent understand the significance of the category filter values beyond the schema's basic definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Browse') and resource ('Corpus Systemics therapy content catalog'), clearly distinguishing it from transactional siblings like buy_skill/buy_trajectory. It specifies the exact content types returned (skills vs. 12-week trajectories) and target medical conditions (post-concussion, post-COVID, burnout, chronic stress), leaving no ambiguity about scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context by stating what the tool returns (available skills and trajectories with pricing), which implicitly signals when to use it (for discovery/directory lookup). Distinguishes from 'buy' siblings by emphasizing browsing/returning available items rather than purchasing. Lacks explicit 'when not to use' guidance or direct comparison to 'preview' sibling, preventing a 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

preview (A)
Read-only · Idempotent

Retrieve a free preview of a therapy trajectory before purchase. Returns first session content and expected outcomes.

Parameters (JSON Schema)
trajectory_id (required): Trajectory identifier from catalog, e.g. 'T1' for post-concussion, 'T2' for burnout, 'T3' for post-COVID.
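The tool descriptions imply an ordering: discover IDs with `catalog`, sample for free with `preview`, then pay with `buy_trajectory`. That implied flow can be sketched as follows; `call_tool` is a hypothetical stand-in for a real MCP client call, and 'T1' is one of the example IDs from the schema above.

```python
# Sketch of the implied purchase flow: browse, preview for free, then buy.
# `call_tool` is a hypothetical stub; a real client would issue a
# tools/call request to the server and return its result.
def call_tool(name: str, arguments: dict) -> dict:
    return {"tool": name, "arguments": arguments}

steps = [
    call_tool("catalog", {"category": "trajectories"}),   # discover IDs like 'T1'
    call_tool("preview", {"trajectory_id": "T1"}),        # free first-session preview
    call_tool("buy_trajectory", {"trajectory_id": "T1"}), # returns a Lightning invoice
]
```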
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context not in annotations: explicitly states the preview is 'free' (cost behavior) and details what content is returned ('first session content and expected outcomes'). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. First sentence front-loads action, resource, cost ('free'), and workflow position ('before purchase'). Second sentence efficiently covers return value. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read tool with 100% schema coverage and good annotations, the description adequately covers purpose and return values (compensating for missing output schema). Could improve by mentioning error conditions (e.g., invalid trajectory_id) or authentication requirements, but sufficient for tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the trajectory_id fully documented including examples (T1, T2, T3). The description does not add additional parameter semantics, but with complete schema coverage, the baseline of 3 is appropriate as no compensation is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Uses specific verb 'Retrieve' with resource 'preview of a therapy trajectory' and scopes it with 'before purchase'. The phrase 'before purchase' effectively distinguishes this from the sibling 'buy_trajectory' tool by establishing its position in the user workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear temporal context 'before purchase' implying this is a pre-purchase sampling step. While it doesn't explicitly name 'buy_trajectory' as the alternative, the 'before purchase' framing creates an implicit when-to-use guideline. Lacks explicit 'when not to use' guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

