library
Server Details
Clinician-reviewed library on anxiety, OCD, and phobias in children ages 5–12.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 4.2/5 across all 6 tools.
Each tool has a clear, distinct purpose: article retrieval, citation output, crisis resources, site info, listing, and search. No overlap.
All tool names follow the verb_noun snake_case pattern consistently (e.g., cite_article, get_article).
6 tools cover the library's scope well—enough for article management and site context without being excessive.
The set provides a complete read-only surface: search, list, get, cite, plus site and crisis info. No obvious gaps for a reference library.
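For orientation, the six tool definitions below can be enumerated with a standard MCP tools/list request, shown here as a raw JSON-RPC payload (the id is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}
```

The response carries each tool's name, description, input schema, and annotations (including the readOnlyHint noted throughout this review).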
Available Tools
6 tools

cite_article (A, Read-only)
Return formatted citation strings (AMA, APA, Chicago) for an article slug. Useful when an agent needs a verifiable source line.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Article slug. | |
| format | No | Citation format. | ama |
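As a sketch of the underlying MCP tools/call request, assuming the standard JSON-RPC transport (the slug is borrowed from get_article's schema example below; format overrides the ama default):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "cite_article",
    "arguments": {
      "slug": "what-an-evaluation-actually-looks-like",
      "format": "apa"
    }
  }
}
```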
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, and the description is consistent with a read operation. No additional behavioral details (like rate limits or auth) are provided, but none are needed given the tool's simplicity and the annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. The key action and purpose are front-loaded: 'Return formatted citation strings...' followed by usage context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with 2 params and no output schema, the description covers the core action and usage context. It could mention the return format (likely a single string), but it's mostly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters with descriptions. The description adds minimal extra meaning beyond the schema, only implying the slug is for an article. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns formatted citation strings in AMA, APA, or Chicago formats. It explicitly mentions the input 'article slug' and distinguishes from sibling tools like get_article or search_articles.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says 'Useful when an agent needs a verifiable source line,' which gives clear context for when to use it. It does not explicitly say when not to use the tool or point to alternatives, though that much guidance is sufficient for a simple tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_article (A, Read-only)
Fetch a single article by slug — full intro, body, FAQ, references, embedded reviewers + authors with credentials, and pre-formatted citation strings (AMA, APA, Chicago).
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Article slug, e.g. "what-an-evaluation-actually-looks-like". | |
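A minimal tools/call sketch using the slug example from the schema:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_article",
    "arguments": { "slug": "what-an-evaluation-actually-looks-like" }
  }
}
```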
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the read-only nature is known. The description adds value by detailing the response structure (full intro, body, FAQ, etc.), which aids the agent in understanding what to expect without needing an output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence that efficiently communicates the tool's purpose and output. It is front-loaded with the action and resource. Slightly verbose due to listing all returned elements, but every phrase carries meaning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple parameter set (a single slug) and the lack of an output schema, the description thoroughly covers what the tool does and what the response includes, leaving little for an agent to infer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter, slug, is well-documented in the schema with a clear example. The description does not add any extra semantic information beyond what the schema provides. Schema coverage is 100%, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches a single article by slug and enumerates specific content returned (intro, body, FAQ, references, authors, citations). It distinguishes from siblings like list_articles and search_articles which operate on collections, and cite_article which focuses on citations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (fetching a specific article by slug) but does not explicitly state when not to use or compare with alternatives. Sibling names provide context, but the description lacks direct guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_crisis_resources (A, Read-only)
Returns the canonical crisis-resource payload (911, 988 Suicide & Crisis Lifeline, Crisis Text Line). Call any time the user mentions self-harm, suicidal ideation, or someone else in danger. Hardcoded — does not vary by microsite.
No parameters.
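Since the tool takes no input, the invocation reduces to a tools/call with an empty arguments object (a sketch over the same JSON-RPC transport):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_crisis_resources",
    "arguments": {}
  }
}
```

The same shape applies to get_microsite_info below, which also takes no parameters.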
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
readOnlyHint is already set; the description adds that the result is hardcoded and lists the specific resources, enhancing transparency beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. Front-loaded with purpose and usage, then clarifying detail. Highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully covers purpose, usage context, and behavioral traits given zero parameters and no output schema. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters, so the baseline score of 4 applies. The description implicitly confirms that no input is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Directly states it returns canonical crisis resources (911, 988, Crisis Text Line). Distinct from sibling article tools, providing clear purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs when to call: 'any time the user mentions self-harm, suicidal ideation, or someone else in danger.' Also clarifies it is hardcoded.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_microsite_info (A, Read-only)
Identity, audience, focus, sponsor relationship, crisis routing, and links for Anxiety in Children. Always safe to call when the agent needs site-level context.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark it as readOnlyHint true. The description adds 'Always safe to call,' reinforcing that it has no side effects. No additional behavioral details (e.g., rate limits, auth) are provided, but the annotation covers the core safety aspect.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no wasted words. It front-loads the content and adds usage guidance efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with no parameters and no output schema, the description covers the purpose and context well. It lists specific information provided but does not describe the return format, which is acceptable given the low complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters (100% coverage), so baseline is 4. The description does not need to add parameter info since none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool provides 'Identity, audience, focus, sponsor relationship, crisis routing, and links for Anxiety in Children,' clearly identifying the resource and scope. The phrase 'Always safe to call when the agent needs site-level context' distinguishes it from sibling tools that deal with articles or crisis resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides a clear usage condition: 'Always safe to call when the agent needs site-level context.' However, it does not explicitly mention when not to use it or compare with alternatives, though the sibling names are distinct enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_articles (A, Read-only)
Paginated list of all native articles on this microsite (clinician-reviewed). Returns lightweight summaries — call get_article for full body.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number. | 1 |
| limit | No | Page size (max 100). | 30 |
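A paging sketch over the same JSON-RPC transport; both values are illustrative overrides of the defaults and stay within the max of 100:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "list_articles",
    "arguments": { "page": 2, "limit": 50 }
  }
}
```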
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already set readOnlyHint=true, and description adds value by mentioning 'clinician-reviewed', pagination, and lightweight summaries, setting expectations beyond the schema and annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with front-loaded purpose, no wasted words. Every sentence provides unique information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (2 params, 100% coverage, no output schema), description adequately covers purpose, behavior, and alternatives, leaving no obvious gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and description mentions pagination but does not add new meaning to page/limit parameters beyond what schema describes. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states 'Paginated list of all native articles on this microsite' with clear verb and resource, and distinguishes from sibling tool 'get_article' by noting it returns lightweight summaries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description provides context that this tool is for browsing and suggests 'call get_article for full body' as an alternative, guiding when to use each, though lacks explicit when-not-to-use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_articles (A, Read-only)
Full-text search of clinician-reviewed pediatric anxiety articles published on Anxiety in Children, ranked by relevance. Use to find guidance for parents and caregivers of children.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Free-text query. Matches title and summary. | |
| limit | No | Max results (max 50). | 10 |
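A search sketch; the query text is a hypothetical caregiver topic, and limit trims the default of 10:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "search_articles",
    "arguments": { "query": "school refusal", "limit": 5 }
  }
}
```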
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds behavioral context beyond the readOnlyHint annotation by stating the search is full-text and 'ranked by relevance,' which tells the agent how results are ordered. It does not contradict the annotations, so the score is high.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences that are directly relevant and front-loaded: first sentence states what it does, second states when to use. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with 2 well-documented parameters and no output schema, the description covers the source, scope, ranking, and purpose. It could mention the return format (list of articles), but it's implied. Nearly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description does not add meaning beyond the schema; it repeats 'full-text search' which aligns with the query parameter description. No additional parameter context is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies the verb 'search', the resource 'clinician-reviewed pediatric anxiety articles', and the context 'on Anxiety in Children' with a clear purpose 'to find guidance for parents and caregivers'. This clearly distinguishes it from sibling tools like get_article (single article retrieval) or list_articles (listing without search).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states 'Use to find guidance for parents and caregivers of children,' which provides a use case but does not explicitly contrast with sibling tools or mention when not to use it. The guidance is implicit, not explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is marked unhealthy when Glama cannot connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.