Glama

Server Details

Clinician-reviewed library on child psychiatric evaluation and medication decision-making.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (A)

Average 4.4/5 across 6 of 6 tools scored.

Server Coherence (A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose: citations, article retrieval, crisis resources, microsite info, article listing, and search. No overlap or ambiguity.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun snake_case pattern (e.g., cite_article, get_article, list_articles), making them predictable and easy to differentiate.

Tool Count: 5/5

With 6 tools, the set is well-scoped for a library server covering essential operations without being overly sparse or bloated.

Completeness: 5/5

The tool set covers core article lifecycle (list, search, get), citations, site context, and crisis support. No obvious gaps for a read-only repository.

Available Tools

6 tools
cite_article (A)
Read-only

Return formatted citation strings (AMA, APA, Chicago) for an article slug. Useful when an agent needs a verifiable source line.

Parameters
Name | Required | Description
slug | Yes | Article slug.
format | No | Citation format. Default: ama.
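
As a rough illustration, the sketch below shows the shape of the tools/call request an MCP client could send for this tool, using the standard MCP JSON-RPC 2.0 envelope. The slug value and request id are hypothetical; session setup over the Streamable HTTP transport is omitted.

```python
import json

# Hypothetical tools/call request for cite_article (standard MCP JSON-RPC 2.0 envelope).
# The slug is a placeholder; "format" is optional and, per the schema, defaults to "ama"
# when omitted (accepted values: ama, apa, chicago).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "cite_article",
        "arguments": {
            "slug": "example-article-slug",  # placeholder slug
            "format": "apa",                 # omit to fall back to the default, ama
        },
    },
}

print(json.dumps(request, indent=2))
```

In an actual session this would be sent only after the usual MCP initialize handshake, and the formatted citation string would come back in the tool result.
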
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, and the description does not contradict them. The description adds that it returns formatted citation strings, but no additional behavioral traits (e.g., error handling, rate limits) are disclosed, so it is adequate but not exceptional.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the core purpose. Every word adds value, and there is no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple parameters (two in total, one required, one an enum) and no output schema, the description sufficiently covers what the tool does and when to use it. It is complete for the tool's complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters. The description adds no extra meaning beyond the schema, achieving the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns formatted citation strings for an article slug, specifying the formats (AMA, APA, Chicago). This distinctively differentiates it from sibling tools like get_article or search_articles.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description says 'useful when an agent needs a verifiable source line,' which provides clear usage context. However, it does not specify when not to use or mention alternatives, so it lacks explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_article (A)
Read-only

Fetch a single article by slug — full intro, body, FAQ, references, embedded reviewers + authors with credentials, and pre-formatted citation strings (AMA, APA, Chicago).

Parameters
Name | Required | Description
slug | Yes | Article slug, e.g. "what-an-evaluation-actually-looks-like".
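
A comparable sketch for get_article, reusing the example slug from the parameter schema; everything else about the envelope is the same assumed JSON-RPC 2.0 structure.

```python
import json

# Hypothetical tools/call request for get_article. The slug is the example
# given in the parameter schema; the request id is arbitrary.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_article",
        "arguments": {"slug": "what-an-evaluation-actually-looks-like"},
    },
}

print(json.dumps(request, indent=2))
```
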
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, and the description adds value by detailing exactly what the response contains (full intro, body, FAQ, references, embedded authors, citation strings), providing complete behavioral transparency beyond the annotation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One sentence that front-loads the core action and lists all components efficiently, with no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter read tool with no output schema, the description fully covers what the tool does, what it returns, and how it differs from siblings, leaving no gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for the slug parameter. The description echoes 'by slug' but adds no additional semantic value beyond the schema's own description and example.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Fetch' and resource 'single article by slug', and lists the included components (intro, body, FAQ, etc.), distinguishing it from sibling tools like list_articles or search_articles.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when retrieving a single article by slug, but does not explicitly state when not to use it or compare to alternatives like list_articles for multiple articles.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_crisis_resources (A)
Read-only

Returns the canonical crisis-resource payload (911, 988 Suicide & Crisis Lifeline, Crisis Text Line). Call any time the user mentions self-harm, suicidal ideation, or someone else in danger. Hardcoded — does not vary by microsite.

Parameters

No parameters

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, and the description adds that the payload is hardcoded and does not vary by microsite, fully disclosing behavior beyond what the annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose and usage. No extraneous words; every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with zero parameters and no output schema, the description completely covers what the tool does, its return value, and when to invoke it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, so schema coverage is 100%. The description adds context about the return value (a list of resources), and no parameter explanation is needed. The baseline score of 4 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it returns a canonical crisis-resource payload listing specific resources (911, 988, Crisis Text Line). Distinct from sibling tools that handle articles, so purpose is unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'Call any time the user mentions self-harm, suicidal ideation, or someone else in danger.' Provides clear trigger conditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_microsite_info (A)
Read-only

Identity, audience, focus, sponsor relationship, crisis routing, and links for Psychiatry for Children. Always safe to call when the agent needs site-level context.

Parameters

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true. The description supplements by confirming safety and listing the types of information returned, adding context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: the first front-loads concrete details and the second reinforces safety. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no parameters, no output schema, and read-only annotation, the description fully informs the agent about what the tool returns and when to use it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so schema coverage is 100%. The description is not required to add parameter details since there are none.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides 'Identity, audience, focus, sponsor relationship, crisis routing, and links for Psychiatry for Children' and that it is for site-level context. This distinguishes it from sibling tools that handle articles or crisis resources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'Always safe to call when the agent needs site-level context,' indicating appropriate usage. While it does not list when not to use the tool, the context signals (no parameters, read-only annotation) and sibling tool names imply the alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_articles (A)
Read-only

Paginated list of all native articles on this microsite (clinician-reviewed). Returns lightweight summaries — call get_article for full body.

Parameters
Name | Required | Description
page | No | Page number (default 1).
limit | No | Page size (default 30, max 100).
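
Because both parameters are optional, a small helper makes the pagination explicit. The sketch below builds hypothetical tools/call payloads page by page, mirroring the schema defaults (page 1, limit 30, max 100); the request ids are arbitrary.

```python
import json

def list_articles_request(page: int = 1, limit: int = 30, request_id: int = 1) -> str:
    """Build a hypothetical tools/call payload for list_articles.

    Defaults mirror the schema: page defaults to 1, limit to 30 (capped at 100).
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "list_articles",
            "arguments": {"page": page, "limit": min(limit, 100)},
        },
    }, indent=2)

# Requests for the first two pages of lightweight summaries.
print(list_articles_request(page=1, request_id=3))
print(list_articles_request(page=2, request_id=4))
```
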
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark the tool read-only; the description adds context about pagination, lightweight summaries, clinician review, and microsite scope. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero wasted words, front-loaded with the main action. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks details about the output structure (the lightweight summaries are not described), sorting, or default filters. Acceptable for a simple list tool, but it could be more complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers 100% of parameters with descriptions. The description adds no new parameter-specific meaning beyond mentioning pagination, so the baseline score of 3 applies given the high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists all native articles with pagination, and distinguishes it from siblings like get_article (full body) and search_articles.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly states when to use this tool (for a paginated list) and recommends calling get_article for the full body. It implicitly contrasts with search_articles but does not explicitly state when to search instead.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_articles (A)
Read-only

Full-text search of clinician-reviewed pediatric psychiatry articles published on Psychiatry for Children, ranked by relevance. Use to find guidance for parents and caregivers of children.

Parameters
Name | Required | Description
limit | No | Max results (default 10, max 50).
query | Yes | Free-text query. Matches title and summary.
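
A final sketch for search_articles, with a hypothetical free-text query; the limit mirrors the documented default of 10 (max 50), and the envelope is again the assumed JSON-RPC 2.0 structure.

```python
import json

# Hypothetical tools/call request for search_articles. "query" is required and is
# matched against title and summary; "limit" is optional (default 10, max 50).
request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
        "name": "search_articles",
        "arguments": {
            "query": "how a child psychiatric evaluation works",  # hypothetical query
            "limit": 10,
        },
    },
}

print(json.dumps(request, indent=2))
```
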
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds that results are ranked by relevance and that it performs full-text search, which goes beyond the readOnlyHint annotation. It does not contradict annotations and provides useful behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no redundant words. The first sentence states the core functionality, and the second provides the use case. Ideal front-loading.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple search tool with 2 parameters and no output schema, the description covers what the tool does and why to use it. It could mention that the results are a list of articles, but it is sufficiently complete for the given complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds 'full-text search' and 'ranked by relevance,' which clarifies behavior beyond the schema's 'matches title and summary.' The word 'full-text' might cause slight confusion relative to the schema, but overall it adds value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (full-text search), the resource (clinician-reviewed pediatric psychiatry articles), the source (Psychiatry for Children), the ranking (by relevance), and the intended use (guidance for parents/caregivers). It effectively distinguishes this tool from siblings like list_articles or get_article.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use to find guidance for parents and caregivers of children,' which indicates when to use it. However, it does not specify when not to use it or mention alternatives like list_articles for browsing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
