library
Server Details
Open-access rating scales and clinical references for adolescent psychiatry clinicians.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
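Because the server speaks MCP over Streamable HTTP, any MCP-capable client can connect directly. Below is a minimal connection sketch using the official TypeScript SDK (@modelcontextprotocol/sdk); the endpoint URL and client name are placeholders, not values taken from this listing.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the server's actual URL from this listing.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.com/mcp"),
);

const client = new Client({ name: "library-demo", version: "1.0.0" });
await client.connect(transport);

// Enumerate the six tools documented below.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```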
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across all 6 tools scored.

- Each tool targets a distinct function: citing, fetching, listing, searching, crisis resources, and site info. No functional overlap.
- All tool names follow a consistent verb_noun pattern (e.g., cite_article, list_articles), all lowercase with underscores.
- Six tools is a well-scoped set for a library resource, covering essential operations without excess.
- The tool surface covers browsing, searching, retrieving full articles, citing, and providing contextual resources, fully addressing the library domain.
Available Tools
6 tools

cite_article (A, Read-only)
Return formatted citation strings (AMA, APA, Chicago) for an article slug. Useful when an agent needs a verifiable source line.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Article slug. | |
| format | No | Citation format. | ama |
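A sketch of a call to this tool, reusing the connected `client` from the sketch under Server Details; the slug value is hypothetical:

```typescript
// Hypothetical slug; obtain real slugs from list_articles or search_articles.
const citation = await client.callTool({
  name: "cite_article",
  arguments: { slug: "example-article-slug", format: "apa" },
});
console.log(citation.content); // formatted APA citation string
```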
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations include readOnlyHint=true, and the description confirms it returns formatted strings, aligning with a read operation. No additional behavioral traits (e.g., side effects or rate limits) are needed given the tool's simplicity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences front-load the purpose and usage guidance, with no unnecessary words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With only two parameters, no output schema, and readOnlyHint provided, the description sufficiently covers the tool's purpose and usage. It is complete for this simple operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description reiterates the formats listed in the enum without adding new meaning. The baseline for high schema coverage is 3, and the description provides no extra insight beyond what the schema offers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns formatted citation strings for an article slug, specifying the formats (AMA, APA, Chicago). This distinguishes it from sibling tools like get_article (which retrieves article content) and search_articles (which finds articles).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says 'Useful when an agent needs a verifiable source line,' indicating the appropriate context. However, it does not explicitly state when not to use it or compare to alternatives, though the sibling list provides context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_article (A, Read-only)
Fetch a single article by slug — full intro, body, FAQ, references, embedded reviewers + authors with credentials, and pre-formatted citation strings (AMA, APA, Chicago).
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Article slug, e.g. "what-an-evaluation-actually-looks-like". | |
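A sketch of fetching a full article with the same connected `client`, using the example slug from the parameter table:

```typescript
// Returns intro, body, FAQ, references, authors/reviewers, and citation strings.
const article = await client.callTool({
  name: "get_article",
  arguments: { slug: "what-an-evaluation-actually-looks-like" },
});
console.log(article.content);
```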
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations give readOnlyHint: true, which aligns with fetch operation. Description adds detail on returned content (authors with credentials, citation formats) beyond annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action, includes all key response elements without redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description fully covers return fields (body, FAQ, references, authors, citations). For a simple fetch tool, it is complete.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only one parameter (slug) with full schema description (100% coverage). Description does not add new semantic info beyond schema, but baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Fetch a single article by slug' and enumerates included fields (intro, body, FAQ, references, reviewers, citation strings). This distinguishes it from siblings like list_articles and search_articles.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies use when you need a complete article given a slug. It does not explicitly state when not to use, but the context signals and sibling names provide implicit guidance.
get_crisis_resources (A, Read-only)
Returns the canonical crisis-resource payload (911, 988 Suicide & Crisis Lifeline, Crisis Text Line). Call any time the user mentions self-harm, suicidal ideation, or someone else in danger. Hardcoded — does not vary by microsite.
No parameters.
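A sketch of retrieving the payload with the connected `client`; the tool takes no arguments:

```typescript
// No arguments: the payload is hardcoded and identical across microsites.
const crisis = await client.callTool({
  name: "get_crisis_resources",
  arguments: {},
});
console.log(crisis.content); // 911, 988 Suicide & Crisis Lifeline, Crisis Text Line
```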
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds that payload is hardcoded and does not vary by microsite, providing behavioral insight beyond the readOnlyHint annotation.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences: first states purpose, second gives usage and property. No redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully describes input (none), behavior (hardcoded), and output content. No output schema needed given simplicity.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist in schema, so baseline 4 applies. Description doesn't need to add parameter info.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns the canonical crisis-resource payload with specific resources (911, 988, Crisis Text Line). Distinguishes from siblings which are content/article-related.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs when to call: when user mentions self-harm, suicidal ideation, or someone else in danger. No alternative tools needed.
get_microsite_info (A, Read-only)
Identity, audience, focus, sponsor relationship, crisis routing, and links for Psychiatry for Teens. Always safe to call when the agent needs site-level context.
No parameters.
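As with the crisis tool, a call is a single no-argument invocation on the connected `client`:

```typescript
// Site-level context: identity, audience, focus, sponsor, crisis routing, links.
const site = await client.callTool({
  name: "get_microsite_info",
  arguments: {},
});
console.log(site.content);
```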
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true; the description adds 'Always safe to call' but does not provide additional behavioral details. No contradiction.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first specifies content, second gives usage rule. No wasted words, front-loaded with key information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description adequately explains what the tool returns and when to use it. Could mention return format but not necessary for this simple tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist (0 params, 100% schema coverage), so the baseline for a no-parameter tool is 4. Description does not need to add parameter info.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns site-level context (identity, audience, focus, etc.) for 'Psychiatry for Teens', which distinguishes it from siblings that deal with articles or crisis resources.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Always safe to call when the agent needs site-level context', providing clear when-to-use guidance, though it does not explicitly name alternatives for article or crisis needs.
list_articles (A, Read-only)
Paginated list of all native articles on this microsite (clinician-reviewed). Returns lightweight summaries — call get_article for full body.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number. | 1 |
| limit | No | Page size (max 100). | 30 |
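A sketch of fetching the first page of summaries with the connected `client`; both arguments are optional, and the values below simply restate the defaults:

```typescript
// First page of lightweight summaries; increment `page` to walk further pages.
const summaries = await client.callTool({
  name: "list_articles",
  arguments: { page: 1, limit: 30 },
});
console.log(summaries.content); // pass a summary's slug to get_article for the full body
```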
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true. Description adds that results are paginated and summaries are lightweight, which is helpful context. No contradictions.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. Purpose front-loaded, very concise.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, the readOnly annotation, and no output schema, the description covers the key aspects: listing, pagination, summaries. Could mention ordering, but that is not critical.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage of parameters with descriptions. Description does not add further semantics beyond what the schema provides, so baseline 3.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb 'list', resource 'articles', and scope 'native on this microsite (clinician-reviewed)'. Differentiates from get_article by noting it returns lightweight summaries.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly directs to 'call get_article for full body', providing when-not-to-use guidance. Implies use for browsing summaries.
search_articles (A, Read-only)
Full-text search of clinician-reviewed pediatric psychiatry articles published on Psychiatry for Teens, ranked by relevance. Use to find guidance for teenagers and their parents.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Free-text query. Matches title and summary. | |
| limit | No | Max results (max 50). | 10 |
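A sketch of a search call on the connected `client`; the query string is a hypothetical example:

```typescript
// Relevance-ranked, full-text match over title and summary.
const hits = await client.callTool({
  name: "search_articles",
  arguments: { query: "adolescent depression screening", limit: 10 },
});
console.log(hits.content); // feed a result's slug to get_article or cite_article
```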
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true, and description adds that results are 'ranked by relevance' and content is 'clinician-reviewed', which provides useful behavioral context beyond the annotation.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences covering purpose and usage context with no redundant information. Front-loaded with the action and resource.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so description could specify return format (e.g., titles, summaries). However, as a search tool, the result nature is somewhat implied and annotations clarify safety.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers both parameters (query, limit) with descriptions. Description adds minor value by noting relevance ranking, but does not significantly enhance understanding beyond schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it performs full-text search of clinician-reviewed articles, specifying the resource and context. Distinguishes from siblings like list_articles (which likely lists all) and get_article (retrieves single article).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Suggests use for 'finding guidance for teenagers and their parents', but does not explicitly list when to avoid this tool or mention alternatives like list_articles for browsing without search.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:

- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:

- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama cannot connect to the server. This can happen for several reasons:

- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.