library
Server Details
Clinician-reviewed library on social anxiety, panic, and school refusal in adolescents.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
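Since the listing reports a Streamable HTTP transport, a minimal TypeScript sketch of connecting a client with the official MCP SDK might look like the following. The endpoint URL is a placeholder (the listing does not publish one), and the client name and version are illustrative:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Hypothetical endpoint: the listing above does not expose the server URL.
const transport = new StreamableHTTPClientTransport(
  new URL("https://gateway.example.com/mcp")
);

const client = new Client({ name: "example-client", version: "0.1.0" });
await client.connect(transport);

// Enumerate the six tools documented below.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```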
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 6 of 6 tools scored.
Each tool targets a distinct purpose: citation formatting, article retrieval, crisis resources, site metadata, article listing, and full-text search. No overlaps or ambiguity.
All tool names follow a consistent verb_noun pattern using snake_case (e.g., cite_article, get_article, list_articles), making them predictable and easy to understand.
With 6 tools, the set is well-scoped for a library/information server, covering core operations without unnecessary bloat or missing essential functions.
The tool surface covers the full read lifecycle for articles (list, search, get, cite) plus site context and crisis resources. No obvious gaps for the stated purpose of a pediatric anxiety resource.
Available Tools
6 tools

cite_article (A, Read-only)
Return formatted citation strings (AMA, APA, Chicago) for an article slug. Useful when an agent needs a verifiable source line.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Article slug. | |
| format | No | Citation format. | ama |
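As a hedged illustration, calling cite_article through the client from the connection sketch above might look like this; the slug is borrowed from get_article's documented example:

```typescript
// Assumes the connected `client` from the earlier connection sketch.
const citation = await client.callTool({
  name: "cite_article",
  arguments: {
    slug: "what-an-evaluation-actually-looks-like", // example slug from get_article's docs
    format: "apa", // optional; the tool defaults to "ama"
  },
});
console.log(citation);
```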
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint: true, and the description adds some behavioral context by stating that it returns formatted citation strings. It does not discuss error cases or permissions, which is acceptable given the annotation coverage, but overall the description adds limited value beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences: one functional statement and one usage hint. It is front-loaded, concise, and contains no unnecessary words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with only two parameters and no output schema, the description adequately explains the purpose and output type (formatted citation strings). It could be clearer about the return structure (e.g., a single string per format) and does not cover error scenarios, but for a tool this simple that is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds context by naming the 'article slug' and listing the citation formats, but the schema already describes both parameters adequately, so the description does not significantly enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that it returns 'formatted citation strings (AMA, APA, Chicago) for an article slug,' providing a specific verb and resource. This distinguishes it from sibling tools such as get_article and search_articles, which handle retrieval and search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'Useful when an agent needs a verifiable source line' gives clear context for when to use this tool. It implicitly distinguishes from siblings that retrieve article content or metadata, though it does not explicitly list when not to use alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_article (A, Read-only)
Fetch a single article by slug — full intro, body, FAQ, references, embedded reviewers + authors with credentials, and pre-formatted citation strings (AMA, APA, Chicago).
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Article slug, e.g. "what-an-evaluation-actually-looks-like". | |
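A sketch of fetching that documented example article, again assuming the connected client from the earlier sketch:

```typescript
// Assumes the connected `client` from the earlier connection sketch.
const article = await client.callTool({
  name: "get_article",
  arguments: { slug: "what-an-evaluation-actually-looks-like" },
});
console.log(article);
```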
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true. The description adds detail about response content, which is useful but does not disclose other behavioral traits like rate limits or error handling. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the action ('Fetch'), and efficiently lists included content. No unnecessary words; every part earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While there is no output schema, the description lists all expected content (intro, body, FAQ, etc.), making the return format clear. For a simple fetch tool this is largely complete, though neither error handling nor pagination is mentioned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers the 'slug' parameter 100%, but the description provides a concrete example ('what-an-evaluation-actually-looks-like'), adding clarity beyond the schema. For a single parameter, this is helpful.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it fetches a single article by slug, listing specific content (intro, body, FAQ, references, credentials, citations). This differentiates from sibling tools like list_articles (multiple) and search_articles (filtered search).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is implied: fetch a single article when you have its slug. However, there is no explicit guidance on when not to use it or what alternatives exist (e.g., cite_article for citations, search_articles for finding articles).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_crisis_resources (A, Read-only)
Returns the canonical crisis-resource payload (911, 988 Suicide & Crisis Lifeline, Crisis Text Line). Call any time the user mentions self-harm, suicidal ideation, or someone else in danger. Hardcoded — does not vary by microsite.
No parameters.
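Since the tool takes no input, a call reduces to an empty arguments object, assuming the client from the connection sketch:

```typescript
// Assumes the connected `client` from the earlier connection sketch.
// No arguments: the payload is hardcoded and does not vary by microsite.
const crisis = await client.callTool({
  name: "get_crisis_resources",
  arguments: {},
});
console.log(crisis);
```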
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that the payload is 'Hardcoded — does not vary by microsite,' adding behavioral context beyond the readOnlyHint annotation. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words: first sentence states purpose and content, second sentence gives usage trigger and behavior. Front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers purpose, usage, and behavior. However, it omits the return structure or format of the payload, which might be useful for the agent. Given no output schema, a slight gap exists, but the listed resources partially compensate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so the description need not add parameter information. The zero-parameter schema has full coverage, the baseline is 4, and the description does not mislead.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns a canonical crisis-resource payload with specific resources (911, 988, Crisis Text Line), and the action 'Returns' combined with the resource list distinguishes it from sibling tools like cite_article or search_articles.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly specifies when to call: 'any time the user mentions self-harm, suicidal ideation, or someone else in danger.' This provides clear context for selection, though it doesn't mention alternatives; the sibling tools are irrelevant for crisis scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_microsite_info (A, Read-only)
Identity, audience, focus, sponsor relationship, crisis routing, and links for Anxiety in Teens. Always safe to call when the agent needs site-level context.
No parameters.
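A call is equally minimal, assuming the same client as before:

```typescript
// Assumes the connected `client` from the earlier connection sketch.
// Read-only and parameterless, so it is safe to call whenever
// site-level context is needed.
const info = await client.callTool({
  name: "get_microsite_info",
  arguments: {},
});
console.log(info);
```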
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true. The description adds that it is 'always safe to call' and lists the information types, which provides some behavioral context beyond the annotation, but not extensively.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, each serving a purpose: first lists contents, second states safety and usage. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no parameters and no output schema, the description fully explains what the tool provides and when to use it. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so baseline 4 is appropriate. The description does not need to add parameter meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool provides identity, audience, focus, sponsor relationship, crisis routing, and links for Anxiety in Teens, and it distinguishes from sibling tools (articles, crisis resources) by being site-level context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states 'Always safe to call when the agent needs site-level context', providing clear guidance on when to use it. It does not explicitly mention alternatives, but the context makes it sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_articles (A, Read-only)
Paginated list of all native articles on this microsite (clinician-reviewed). Returns lightweight summaries — call get_article for full body.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number. | 1 |
| limit | No | Page size (max 100). | 30 |
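A hedged sketch of paging through summaries with the client from the connection sketch; the argument values simply restate the documented defaults:

```typescript
// Assumes the connected `client` from the earlier connection sketch.
// Fetch the first page of lightweight summaries; follow up with
// get_article for any full body.
const page = await client.callTool({
  name: "list_articles",
  arguments: { page: 1, limit: 30 }, // both optional; defaults shown explicitly
});
console.log(page);
```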
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark the tool readOnlyHint=true. The description adds that results are paginated, lightweight summaries, which is useful behavioral information beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences efficiently convey purpose, scope, and relationship to sibling. No redundancy or extraneous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple parameter set with good annotations, the description covers purpose, pagination, the summary-only nature of results, and the cross-reference to get_article. Details such as result ordering are missing, but that is sufficient given that the sibling search_articles handles filtering.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers both parameters with descriptions. The description mentions pagination but adds no new detail on usage or constraints beyond the schema, so the baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists all native articles (clinician-reviewed) with summaries, and directs to get_article for full body. It distinguishes from siblings get_article and search_articles.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly suggests using get_article for full content, but does not reference search_articles or cite_article for filtering or citations. Still provides clear when-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_articles (A, Read-only)
Full-text search of clinician-reviewed pediatric anxiety articles published on Anxiety in Teens, ranked by relevance. Use to find guidance for teenagers and their parents.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (max 50). | 10 |
| query | Yes | Free-text query. Matches title and summary. | |
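Assuming the client from the connection sketch, a search call might look like this; the query string is illustrative, drawn from the site's stated topics:

```typescript
// Assumes the connected `client` from the earlier connection sketch.
const results = await client.callTool({
  name: "search_articles",
  arguments: {
    query: "school refusal", // matched against title and summary
    limit: 10, // optional; default 10, max 50
  },
});
console.log(results);
```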
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, and the description confirms a read-only operation. It adds context about relevance ranking and clinician review, which is helpful beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with key action and context, no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
There is no output schema, and the description does not specify return fields or structure, leaving the agent without important information about the shape of results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds no extra meaning beyond what the parameter descriptions already state, so the baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb (search), resource (articles), scope (clinician-reviewed pediatric anxiety, ranked by relevance, for teens and parents). Distinguishes from siblings like list_articles.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage (finding guidance for teens and parents) but offers no explicit when-to-use guidance versus alternatives such as list_articles or get_article, and no exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.