
Server Details

Adolescent psychiatry library focused on the medication decisions parents wrestle with.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.3/5 across 6 of 6 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a distinct purpose: citations, full articles, crisis resources, site info, article summaries, and search. No overlap exists.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun snake_case pattern (e.g., list_articles, search_articles), making them predictable.

Tool Count: 5/5

Six tools cover the essential operations for a library server without being excessive or insufficient.

Completeness: 4/5

The tool set covers retrieval, search, citation generation, and site context. Missing create/update/delete operations are acceptable for a read-only library, but a browse-by-category tool could be a minor gap.

Available Tools

6 tools
cite_article: A
Read-only

Return formatted citation strings (AMA, APA, Chicago) for an article slug. Useful when an agent needs a verifiable source line.

Parameters (JSON Schema)

slug (required): Article slug.
format (optional): Citation format. Default: ama.
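Under MCP's JSON-RPC 2.0 framing, an agent invokes a tool by sending a tools/call request with the tool name and an arguments object. A minimal sketch for cite_article (the slug value is illustrative, taken from the get_article schema example below; format falls back to "ama" when omitted):

```python
import json

# Sketch of the JSON-RPC 2.0 payload an MCP client sends for tools/call.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "cite_article",
        "arguments": {
            "slug": "what-an-evaluation-actually-looks-like",
            "format": "apa",  # one of: ama (default), apa, chicago
        },
    },
}
print(json.dumps(request, indent=2))
```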
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, and the description's leading verb 'Return' is consistent with that hint. No additional behavioral details (e.g., error handling, rate limits) beyond what the annotations provide. Adequate but not enriched.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences front-loading the action and usage context. No wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read tool with 2 parameters and no output schema, the description covers purpose and usage adequately. Could mention that return value is a string, but not necessary given the clear context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%. Description mentions 'article slug' and formats, but adds minimal meaning beyond schema (enum labels are already listed). Meets baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Return' and specific resource 'formatted citation strings' with explicit formats (AMA, APA, Chicago). Distinguishes from sibling tools like get_article which returns article content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States 'Useful when an agent needs a verifiable source line', providing clear context. Lacks explicit exclusions or alternatives (e.g., 'use get_article for full content'), but enough to guide selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_article: A
Read-only

Fetch a single article by slug — full intro, body, FAQ, references, embedded reviewers + authors with credentials, and pre-formatted citation strings (AMA, APA, Chicago).

Parameters (JSON Schema)

slug (required): Article slug, e.g. "what-an-evaluation-actually-looks-like".
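Since every tool on this server is invoked the same way, a small client-side helper can build the tools/call envelope generically. A hypothetical sketch, shown here with get_article and the example slug from the schema:

```python
import json

# Hypothetical helper that wraps any of this server's tools in a
# JSON-RPC 2.0 tools/call request. Transport is out of scope here.
def build_tool_call(request_id: int, name: str, arguments: dict) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

payload = build_tool_call(
    2, "get_article", {"slug": "what-an-evaluation-actually-looks-like"}
)
print(payload)
```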
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true. The description adds behavioral context about the returned data (e.g., pre-formatted citation strings, embedded authors), which goes beyond the annotations without contradicting them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, compact, front-loaded sentence listing all key contents without extraneous words. Every part earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description comprehensively lists what the tool returns (intro, body, FAQ, references, authors, citations), giving enough context for an agent to understand the output shape. Minor gap: no mention of pagination or error handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the schema already describes the slug parameter clearly. The description only reaffirms 'by slug' without adding new constraints or format details, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Fetch a single article by slug' and enumerates all included components (intro, body, FAQ, references, authors with credentials, citation strings), distinguishing it from siblings like list_articles and search_articles.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by detailing the comprehensive content, but does not explicitly state when to use this tool over alternatives like cite_article or list_articles. No exclusionary criteria are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_crisis_resources: A
Read-only

Returns the canonical crisis-resource payload (911, 988 Suicide & Crisis Lifeline, Crisis Text Line). Call any time the user mentions self-harm, suicidal ideation, or someone else in danger. Hardcoded — does not vary by microsite.

Parameters (JSON Schema)

No parameters
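A tool with an empty input schema is still called with an arguments object; it is simply empty. A sketch of the resulting request:

```python
import json

# For a zero-parameter MCP tool, "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "get_crisis_resources", "arguments": {}},
}
print(json.dumps(request))
```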

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations mark readOnlyHint=true. Description adds that the payload is 'hardcoded — does not vary by microsite,' which is useful behavioral context beyond annotations. No contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first describes payload, second gives usage and property. No wasted words, very concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, description lists the included resources (911, 988, Crisis Text Line), which is complete for a simple read tool. Usage guidance covers all needed context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters, so description doesn't need to explain them. It focuses on output content, which is sufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it returns specific crisis resources (911, 988, Crisis Text Line) and provides exact use cases (self-harm, suicidal ideation, danger). Distinct from sibling tools about articles and microsite info.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to call: 'any time the user mentions self-harm, suicidal ideation, or someone else in danger.' Also notes it's hardcoded, implying no microsite variation. Could benefit from mentioning when not to use, but domain makes it clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_microsite_info: A
Read-only

Identity, audience, focus, sponsor relationship, crisis routing, and links for Psychiatry for Teens. Always safe to call when the agent needs site-level context.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint: true; the description adds reassurance by stating 'Always safe to call', aligning with the annotation without adding contradictory or misleading information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (two sentences) and front-loaded with key information, efficiently conveying tool purpose and safety.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description adequately covers tool type and content for context, but given the absence of an output schema, additional detail on response structure would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With no parameters, the description adds significant value by detailing the content categories returned (identity, audience, etc.), exceeding the empty schema's minimal information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly lists the information categories (Identity, audience, focus, sponsor relationship, crisis routing, links) and scopes it to 'Psychiatry for Teens', clearly distinguishing it from sibling tools like get_article or list_articles which serve different content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states it is 'Always safe to call when the agent needs site-level context', providing clear use context but lacks explicit when-not-to-use or direct comparison to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_articles: A
Read-only

Paginated list of all native articles on this microsite (clinician-reviewed). Returns lightweight summaries — call get_article for full body.

Parameters (JSON Schema)

page (optional): Page number (default 1).
limit (optional): Page size (default 30, max 100).
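Because no output schema is published, a client that wants every summary has to page defensively. A sketch under two stated assumptions: fetch_page is a placeholder for whatever transport the client uses, and a page shorter than the requested limit is taken to mean the last page:

```python
# Page through list_articles within its documented bounds
# (default 30 per page, max 100). fetch_page(page, limit) is a
# hypothetical callable returning that page's summaries as a list.
def collect_all(fetch_page, limit: int = 100) -> list:
    articles, page = [], 1
    while True:
        batch = fetch_page(page=page, limit=limit)
        articles.extend(batch)
        if len(batch) < limit:  # short page => assume no further pages
            break
        page += 1
    return articles
```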
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide 'readOnlyHint'. Description adds pagination behavior and return type (lightweight summaries), which are beyond the annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with key purpose, no redundant words. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description adequately explains return type (lightweight summaries). Could mention default ordering, but not essential for a simple list tool. Contextually complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with both parameters described. Description does not add additional meaning beyond what the schema already provides for 'page' and 'limit'. Baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb 'list', resource 'native articles on this microsite', and specifies they are clinician-reviewed. Distinguishes from sibling 'get_article' by noting it returns lightweight summaries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises calling 'get_article' for full body, implying this is for overview. Does not mention when to use 'search_articles' instead, but the purpose is clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_articles: A
Read-only

Full-text search of clinician-reviewed pediatric psychiatry articles published on Psychiatry for Teens, ranked by relevance. Use to find guidance for teenagers and their parents.

Parameters (JSON Schema)

limit (optional): Max results (default 10, max 50).
query (required): Free-text query. Matches title and summary.
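A client-side wrapper can enforce the documented limit cap before the request goes out. A hypothetical sketch (the query string is illustrative):

```python
import json

# Build a search_articles tools/call request, clamping limit to the
# documented maximum of 50 on the client side as a courtesy.
def build_search(query: str, limit: int = 10) -> dict:
    return {
        "jsonrpc": "2.0",
        "id": 4,
        "method": "tools/call",
        "params": {
            "name": "search_articles",
            "arguments": {"query": query, "limit": min(limit, 50)},
        },
    }

print(json.dumps(build_search("ssri side effects in teens", limit=80)))
```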
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true; the description adds context about the corpus (clinician-reviewed, Psychiatry for Teens) and ranking by relevance, which goes beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences: first defines functionality, second suggests usage. No unnecessary words, front-loaded with key action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description mentions ranking which hints at output order. Could elaborate on return format, but given simplicity (2 params), it's largely sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and parameter descriptions are comprehensive. Description doesn't add meaningful new semantics beyond what schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states full-text search of clinician-reviewed pediatric psychiatry articles from a specific site, ranked by relevance. It differentiates from sibling tools like list_articles (list without search) and get_article (retrieve specific article).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use to find guidance for teenagers and their parents,' indicating when to use. Lacks explicit when-not-to-use or alternative tools, but the sibling list provides context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
