Server Details
Teen-first library on CBT, DBT, and finding the right therapist.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 6 of 6 tools scored.
All six tools have clearly distinct purposes: citations, full article retrieval, crisis resources, site info, listing, and search. No overlap or confusion between them.
All tool names follow the verb_noun snake_case pattern (e.g., cite_article, get_article, list_articles), providing a predictable and consistent naming convention.
With 6 tools, the set is well-scoped for a library microsite—covering essential operations (list, search, get) plus auxiliary needs (citations, crisis resources, site info) without being too sparse or overwhelming.
The tool surface completely covers the read-only domain of a therapy article library: listing, searching, retrieving full articles, formatting citations, providing crisis resources, and site context. No obvious gaps for its intended use.
Available Tools
6 tools

cite_article (A, Read-only)
Return formatted citation strings (AMA, APA, Chicago) for an article slug. Useful when an agent needs a verifiable source line.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Article slug. | |
| format | No | Citation format. | ama |
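For illustration, here is a minimal sketch of calling this tool over the listed Streamable HTTP transport with the official MCP TypeScript SDK. The endpoint URL is a placeholder (the URL field above is blank), and the slug reuses the example from get_article's schema below.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the URL field in this listing is blank.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));
const client = new Client({ name: "demo-client", version: "1.0.0" });
await client.connect(transport);

// format is optional; per the description it defaults to "ama",
// with "apa" and "chicago" as the other documented options.
const citation = await client.callTool({
  name: "cite_article",
  arguments: { slug: "what-an-evaluation-actually-looks-like", format: "apa" },
});
console.log(citation.content); // the formatted citation string(s)
```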
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark readOnlyHint=true, so the description's additional context about citation formats is sufficient. No contradictions or missing disclosures.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, no redundant words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description adequately covers purpose, usage, and parameter defaults. Could be improved by specifying return format (string), but still complete enough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds the default format value 'ama' which is not in the schema, enhancing parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Return' and resource 'formatted citation strings' for an article slug, and distinguishes it from sibling tools like get_article or search_articles.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says 'Useful when an agent needs a verifiable source line,' which implies context but does not explicitly state when not to use or mention alternatives like direct article retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_article (A, Read-only)
Fetch a single article by slug — full intro, body, FAQ, references, embedded reviewers + authors with credentials, and pre-formatted citation strings (AMA, APA, Chicago).
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Article slug, e.g. "what-an-evaluation-actually-looks-like". | |
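Continuing the sketch above (same connected `client`), a fetch of the full article; the slug value is the schema's own example.

```typescript
// Reuses the connected `client` from the cite_article sketch above.
const article = await client.callTool({
  name: "get_article",
  arguments: { slug: "what-an-evaluation-actually-looks-like" },
});
// The description promises intro, body, FAQ, references, reviewer/author
// credentials, and pre-formatted citations; no output schema pins down the shape.
console.log(article.content);
```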
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare readOnlyHint=true, and the description adds valuable behavioral context about the response content (full intro, body, FAQ, references, authors with credentials, citation strings), which helps the agent understand the rich data returned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence that front-loads the action and lists all response components efficiently without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description effectively enumerates the expected response fields. It is complete for its purpose, though it omits error handling or edge cases (e.g., slug not found).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage for the single parameter 'slug', with a clear example in the description. The description itself does not add additional semantic meaning beyond 'by slug'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Fetch' and the resource 'article by slug', clearly distinguishing it from siblings like 'list_articles' (multiple articles) and 'search_articles' (search-based retrieval).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for fetching a single article with full details, but it does not explicitly state when to use this tool versus alternatives like 'cite_article' or what prerequisites exist (e.g., knowing the slug).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_crisis_resources (A, Read-only)
Returns the canonical crisis-resource payload (911, 988 Suicide & Crisis Lifeline, Crisis Text Line). Call any time the user mentions self-harm, suicidal ideation, or someone else in danger. Hardcoded — does not vary by microsite.
No parameters.
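A sketch of the "call any time" guidance, reusing the connected `client` from the first sketch. The keyword check is a hypothetical stand-in for the agent's own judgment about when a crisis response is warranted.

```typescript
// Hypothetical trigger heuristic; a real agent would rely on the model's
// judgment rather than keyword matching.
const userMessage = "I've been thinking about hurting myself";
const crisisTerms = ["self-harm", "suicide", "hurt myself"];

if (crisisTerms.some((term) => userMessage.toLowerCase().includes(term))) {
  // No arguments: the payload is hardcoded and identical across microsites.
  const resources = await client.callTool({ name: "get_crisis_resources", arguments: {} });
  console.log(resources.content); // 911, 988 Lifeline, Crisis Text Line
}
```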
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, and the description confirms non-destructive behavior. It adds value by noting 'Hardcoded — does not vary by microsite,' which is extra behavioral context beyond the annotation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three focused sentences: purpose, usage, and behavioral note. No redundant words, front-loaded with core information. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, no output schema, and read-only annotation, description covers all necessary aspects: what it returns, when to use it, and that it's hardcoded. Fully complete for this simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters (0 required, 100% coverage). With zero parameters the baseline score is 4; the description adds no parameter info, but none is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description starts with 'Returns the canonical crisis-resource payload' listing specific resources (911, lifeline, text line), establishing a clear verb-resource pair. It is distinct from sibling tools like cite_article or get_article which handle articles.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Call any time the user mentions self-harm, suicidal ideation, or someone else in danger.' This provides direct guidance on usage context, effectively distinguishing from other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_microsite_info (A, Read-only)
Identity, audience, focus, sponsor relationship, crisis routing, and links for Therapy for Teens. Always safe to call when the agent needs site-level context.
No parameters.
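Because the tool takes no parameters and is "always safe to call," a session might fetch site context up front. A sketch, again reusing the connected `client`; treating the first content item as text is an assumption, since no output schema is published.

```typescript
const siteInfo = await client.callTool({ name: "get_microsite_info", arguments: {} });

// Assumes the result arrives as a text content block; the tool publishes
// no output schema, so the exact shape is unverified.
const first = siteInfo.content?.[0];
if (first?.type === "text") {
  console.log(first.text); // identity, audience, focus, sponsor, crisis routing, links
}
```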
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the description adds no safety insight beyond that. It lists the returned data types but does not disclose behaviors like caching, latency, or response structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences with front-loaded list of data types. Every word is informative; no wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Though there is no output schema, the description covers the main data categories. It does not specify the format (e.g., single object vs. list) but suffices for basic understanding given zero parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is 100% and the baseline score is 4. The description adds value by listing the categories of information returned, compensating for the lack of an output schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states the tool provides 'Identity, audience, focus, sponsor relationship, crisis routing, and links for Therapy for Teens,' which is specific and distinguishes it from sibling tools that handle article operations or crisis resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says 'Always safe to call when the agent needs site-level context,' providing clear when-to-use guidance. It does not explicitly state when not to use it or mention alternatives, but context from sibling tools implies this.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_articles (A, Read-only)
Paginated list of all native articles on this microsite (clinician-reviewed). Returns lightweight summaries — call get_article for full body.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number. | 1 |
| limit | No | Page size (max 100). | 30 |
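A sketch of the first page fetch, reusing the connected `client`. The iteration stop condition is an assumption: no output schema documents a total count or has-more flag.

```typescript
// First page of lightweight summaries; the schema caps limit at 100.
const pageOne = await client.callTool({
  name: "list_articles",
  arguments: { page: 1, limit: 100 },
});
console.log(pageOne.content);
// To walk the whole library, increment `page` until a page comes back
// empty; that stop condition is assumed, not documented.
```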
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds value beyond the 'readOnlyHint' annotation by specifying that results are paginated, return lightweight summaries, and are clinician-reviewed. There is no contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences, each adding essential information. The first sentence states the purpose and scope, the second clarifies the output and directs to a sibling tool. No extraneous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with comprehensive schema and annotations, the description is complete. It explains the nature of the results (lightweight summaries) and directs to the sibling for full details. No output schema is needed as the description covers the return value adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides descriptions for both parameters (page and limit) with 100% coverage. The description mentions 'paginated' which hints at these parameters but does not add new semantic information beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the action (list), the resource (articles), and the scope (native articles on this microsite, clinician-reviewed). It also distinguishes itself from the sibling tool 'get_article' by noting that it returns lightweight summaries, not full bodies.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates when to use this tool (to get a paginated list of summaries) and when to use the sibling tool ('call get_article for full body'). It implies context for usage but does not explicitly state when not to use it or provide alternative scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_articles (A, Read-only)
Full-text search of clinician-reviewed pediatric psychotherapy articles published on Therapy for Teens, ranked by relevance. Use to find guidance for teenagers and their parents.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Free-text query. Matches title and summary. | |
| limit | No | Max results (max 50). | 10 |
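A sketch of the search-then-fetch workflow the list/search descriptions point at, reusing the connected `client`. The query string is illustrative, and pulling a slug out of the results is an assumption, since the tool publishes no output schema.

```typescript
// Search first; limit defaults to 10 and caps at 50 per the schema.
const hits = await client.callTool({
  name: "search_articles",
  arguments: { query: "CBT for anxiety", limit: 5 },
});
console.log(hits.content);

// Hypothetical follow-up once a slug has been read out of the results;
// get_article returns the full body that search results omit.
const full = await client.callTool({
  name: "get_article",
  arguments: { slug: "what-an-evaluation-actually-looks-like" },
});
console.log(full.content);
```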
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark readOnlyHint=true, so the description adds value by specifying 'full-text search' and 'ranked by relevance,' as well as the context of clinician-reviewed articles on teenage psychotherapy. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise, front-loaded sentences with no redundancy. Every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema is provided, and the description only says 'ranked by relevance' without explaining the return format (e.g., list of article IDs, titles, or snippets). For a search tool, this is a gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both parameters. The description mentions 'full-text search' and 'relevance ranking' but does not add detail about parameter semantics beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a 'full-text search' of 'clinician-reviewed pediatric psychotherapy articles' ranked by relevance, which distinctively separates it from sibling tools like list_articles (likely for browsing) and get_article (single article retrieval).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says 'Use to find guidance for teenagers and their parents' but does not explicitly clarify when to use search over list_articles or other siblings, nor does it mention when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.