Psychiatry for Teens Library
Server Details
Adolescent psychiatry library focused on the medication decisions parents wrestle with.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging – every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control – enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials – Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics – see which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 4.3/5, with all 6 tools scored.
Each tool has a distinct purpose: citations, full articles, crisis resources, site info, article summaries, and search. No overlap exists.
All tool names follow a consistent verb_noun snake_case pattern (e.g., list_articles, search_articles), making them predictable.
Six tools cover the essential operations for a library server without being excessive or insufficient.
The tool set covers retrieval, search, citation generation, and site context. The absence of create/update/delete operations is acceptable for a read-only library; the only minor gap is the lack of a browse-by-category tool. A sketch of a typical call chain follows below.
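To make that composition concrete, here is a minimal sketch of the chain using MCP's standard JSON-RPC `tools/call` framing. The query string is illustrative, and in practice the slug passed to get_article and cite_article would come from the search results; the slug shown is the documented example from get_article's schema.

```json
[
  { "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": { "name": "search_articles",
                "arguments": { "query": "medication side effects", "limit": 5 } } },
  { "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": { "name": "get_article",
                "arguments": { "slug": "what-an-evaluation-actually-looks-like" } } },
  { "jsonrpc": "2.0", "id": 3, "method": "tools/call",
    "params": { "name": "cite_article",
                "arguments": { "slug": "what-an-evaluation-actually-looks-like", "format": "ama" } } }
]
```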
Available Tools
6 tools

cite_article (A, Read-only)
Return formatted citation strings (AMA, APA, Chicago) for an article slug. Useful when an agent needs a verifiable source line.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Article slug. | |
| format | No | Citation format (ama, apa, or chicago). | ama |
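A minimal `tools/call` request, reusing the example slug from get_article's schema below. The server publishes no output schema, so the response sketch assumes the standard MCP text-content envelope with the citation string as the text payload:

```json
{
  "jsonrpc": "2.0", "id": 1, "method": "tools/call",
  "params": {
    "name": "cite_article",
    "arguments": { "slug": "what-an-evaluation-actually-looks-like", "format": "apa" }
  }
}
```

A successful response would then resemble:

```json
{
  "jsonrpc": "2.0", "id": 1,
  "result": { "content": [ { "type": "text", "text": "..." } ] }
}
```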
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, and the description's verb 'Return' is consistent with that hint. No additional behavioral details (e.g., error handling, rate limits) beyond what the annotations provide. Adequate but not enriched.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences front-loading the action and usage context. No wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with 2 parameters and no output schema, the description covers purpose and usage adequately. It could note that the return value is a string, but that is not necessary given the clear context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%. Description mentions 'article slug' and formats, but adds minimal meaning beyond schema (enum labels are already listed). Meets baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Return' and specific resource 'formatted citation strings' with explicit formats (AMA, APA, Chicago). Distinguishes from sibling tools like get_article which returns article content.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States 'Useful when an agent needs a verifiable source line', providing clear context. Lacks explicit exclusions or alternatives (e.g., 'use get_article for full content'), but enough to guide selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_article (A, Read-only)
Fetch a single article by slug — full intro, body, FAQ, references, embedded reviewers + authors with credentials, and pre-formatted citation strings (AMA, APA, Chicago).
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Article slug, e.g. "what-an-evaluation-actually-looks-like". | |
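No output schema is published, so the payload shape below is a hypothetical sketch inferred purely from the components the description enumerates; every field name is an assumption, not the server's confirmed format:

```json
{
  "slug": "what-an-evaluation-actually-looks-like",
  "intro": "...",
  "body": "...",
  "faq": [ { "question": "...", "answer": "..." } ],
  "references": [ "..." ],
  "reviewers": [ { "name": "...", "credentials": "..." } ],
  "authors": [ { "name": "...", "credentials": "..." } ],
  "citations": { "ama": "...", "apa": "...", "chicago": "..." }
}
```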
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true. The description adds behavioral context about the returned data (e.g., pre-formatted citation strings, embedded authors), which goes beyond the annotations without contradicting them.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, compact, front-loaded sentence listing all key contents without extraneous words. Every part earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description comprehensively lists what the tool returns (intro, body, FAQ, references, authors, citations), giving enough context for an agent to understand the output shape. Minor gap: no mention of pagination or error handling.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the schema already describes the slug parameter clearly. The description only reaffirms 'by slug' without adding new constraints or format details, so baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Fetch a single article by slug' and enumerates all included components (intro, body, FAQ, references, authors with credentials, citation strings), distinguishing it from siblings like list_articles and search_articles.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by detailing the comprehensive content, but does not explicitly state when to use this tool over alternatives like cite_article or list_articles. No exclusionary criteria are provided.
get_crisis_resources (A, Read-only)
Returns the canonical crisis-resource payload (911, 988 Suicide & Crisis Lifeline, Crisis Text Line). Call any time the user mentions self-harm, suicidal ideation, or someone else in danger. Hardcoded — does not vary by microsite.
No parameters.
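With no parameters, the call is just the tool name and an empty arguments object in MCP's `tools/call` framing; get_microsite_info below is invoked the same way:

```json
{
  "jsonrpc": "2.0", "id": 1, "method": "tools/call",
  "params": { "name": "get_crisis_resources", "arguments": {} }
}
```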
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations mark readOnlyHint=true. Description adds that the payload is 'hardcoded — does not vary by microsite,' which is useful behavioral context beyond annotations. No contradiction.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: the first describes the payload, the second gives usage guidance and notes the hardcoded behavior. No wasted words, very concise.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description lists the included resources (911, 988, Crisis Text Line), which is complete for a simple read tool. Usage guidance covers all needed context.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, so description doesn't need to explain them. It focuses on output content, which is sufficient.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it returns specific crisis resources (911, 988, Crisis Text Line) and provides exact use cases (self-harm, suicidal ideation, danger). Distinct from sibling tools about articles and microsite info.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to call: 'any time the user mentions self-harm, suicidal ideation, or someone else in danger.' Also notes it's hardcoded, implying no microsite variation. Could benefit from mentioning when not to use, but domain makes it clear.
get_microsite_info (A, Read-only)
Identity, audience, focus, sponsor relationship, crisis routing, and links for Psychiatry for Teens. Always safe to call when the agent needs site-level context.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint: true; the description adds reassurance by stating 'Always safe to call', aligning with the annotation without adding contradictory or misleading information.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (two sentences) and front-loaded with key information, efficiently conveying tool purpose and safety.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description adequately covers tool type and content for context, but given the absence of an output schema, additional detail on response structure would improve completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With no parameters, the description adds significant value by detailing the content categories returned (identity, audience, etc.), exceeding the empty schema's minimal information.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly lists the information categories (Identity, audience, focus, sponsor relationship, crisis routing, links) and scopes it to 'Psychiatry for Teens', clearly distinguishing it from sibling tools like get_article or list_articles which serve different content.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states it is 'Always safe to call when the agent needs site-level context', providing clear use context but lacks explicit when-not-to-use or direct comparison to sibling tools.
list_articles (A, Read-only)
Paginated list of all native articles on this microsite (clinician-reviewed). Returns lightweight summaries — call get_article for full body.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number. | 1 |
| limit | No | Page size (max 100). | 30 |
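A request for the first page, with argument values mirroring the documented defaults:

```json
{
  "jsonrpc": "2.0", "id": 1, "method": "tools/call",
  "params": { "name": "list_articles", "arguments": { "page": 1, "limit": 30 } }
}
```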
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide 'readOnlyHint'. The description adds pagination behavior and the return type (lightweight summaries), which go beyond the annotations. No contradictions.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with key purpose, no redundant words. Every sentence adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description adequately explains return type (lightweight summaries). Could mention default ordering, but not essential for a simple list tool. Contextually complete.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters described. Description does not add additional meaning beyond what the schema already provides for 'page' and 'limit'. Baseline score of 3 applies.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb 'list', resource 'native articles on this microsite', and specifies they are clinician-reviewed. Distinguishes from sibling 'get_article' by noting it returns lightweight summaries.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises calling 'get_article' for full body, implying this is for overview. Does not mention when to use 'search_articles' instead, but the purpose is clear enough.
search_articles (A, Read-only)
Full-text search of clinician-reviewed pediatric psychiatry articles published on Psychiatry for Teens, ranked by relevance. Use to find guidance for teenagers and their parents.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Free-text query. Matches title and summary. | |
| limit | No | Max results (max 50). | 10 |
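A typical search request; the query string is illustrative:

```json
{
  "jsonrpc": "2.0", "id": 1, "method": "tools/call",
  "params": { "name": "search_articles",
              "arguments": { "query": "ssri side effects in teens", "limit": 10 } }
}
```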
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true; the description adds context about the corpus (clinician-reviewed, published on Psychiatry for Teens) and relevance ranking, which goes beyond the annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences: first defines functionality, second suggests usage. No unnecessary words, front-loaded with key action.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description mentions ranking which hints at output order. Could elaborate on return format, but given simplicity (2 params), it's largely sufficient.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and parameter descriptions are comprehensive. Description doesn't add meaningful new semantics beyond what schema provides.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states full-text search of clinician-reviewed pediatric psychiatry articles from a specific site, ranked by relevance. It differentiates from sibling tools like list_articles (list without search) and get_article (retrieve specific article).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use to find guidance for teenagers and their parents,' indicating when to use. Lacks explicit when-not-to-use or alternative tools, but the sibling list provides context.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.