
oral-heritage-index

Server Details

Authoritative cited answers about saving family stories. Published by InkTree.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.4/5 across 6 of 6 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: get_citations focuses on external sources, get_occasion_guide on occasion-specific advice, get_pillar_overview on pillar details, list_pillars on taxonomy navigation, search_family_story_content on general corpus search, and search_interview_questions on curated prompts. The descriptions explicitly differentiate use cases, eliminating ambiguity.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern (e.g., get_citations, list_pillars, search_family_story_content), with verbs like 'get', 'list', and 'search' applied predictably. There are no deviations in style or convention, making the naming highly readable and systematic.

Tool Count: 5/5

With 6 tools, the count is well-scoped for the server's purpose of navigating an oral heritage index. Each tool serves a unique function in the domain, such as searching, guiding, and listing, without being too sparse or bloated, ensuring efficient coverage.

Completeness: 4/5

The tool set provides comprehensive coverage for searching, navigating pillars, accessing guides, and retrieving citations and questions, covering core workflows like research and interview preparation. A minor gap exists in update or management operations (e.g., adding or modifying content), but this is reasonable for a read-only index server.

Available Tools

6 tools
get_citations: Get research citations on a topic (Grade: C)

Returns items with authoritative external sources (StoryCorps, LOC, NEDCC, APA, etc.) related to a topic. Use when the user asks for research or evidence.

Parameters (JSON Schema):
- topic (required)
- pillar (optional)
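For illustration, an MCP client would invoke this tool with a JSON-RPC `tools/call` request. The sketch below assembles such a payload; the topic string is made up, and passing a pillar slug from the eight documented pillars is an assumption, since the schema does not document this parameter's accepted values.

```python
import json

# Hypothetical "tools/call" request for get_citations.
# The pillar value is an assumption: the input schema does not
# document its enum, so a documented pillar slug is reused here.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_citations",
        "arguments": {
            "topic": "digitizing cassette tapes",
            "pillar": "artifact-preservation",  # optional filter
        },
    },
}
print(json.dumps(request, indent=2))
```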
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool 'returns items' but lacks details on what 'items' entail (e.g., format, structure), whether there are rate limits, authentication needs, or how results are ordered/limited. The description is minimal and doesn't compensate for the absence of annotations, leaving key behavioral traits unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded with two sentences that directly address purpose and usage. There's no wasted text, and it efficiently communicates core information. However, it could be slightly more structured by explicitly listing parameters or outcomes.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (2 parameters, no annotations, no output schema), the description is incomplete. It doesn't explain return values, parameter interactions, or behavioral constraints. While it states the purpose, it lacks details needed for an agent to use the tool effectively, especially with 0% schema coverage and no output information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'topic' implicitly but doesn't explain the 'pillar' parameter or its enum values. The description adds no meaningful semantics beyond what the schema provides, failing to clarify how parameters affect the search or what 'pillar' represents in context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Returns items with authoritative external sources... related to a topic.' It specifies the verb ('returns'), resource ('items'), and scope ('authoritative external sources'), though it doesn't explicitly differentiate from sibling tools like 'search_family_story_content' or 'search_interview_questions' which might also return content. The mention of specific sources (StoryCorps, LOC, etc.) adds specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage guidance: 'Use when the user asks for research or evidence.' This implies context but doesn't explicitly state when not to use it or name alternatives among sibling tools. For example, it doesn't clarify if this is for general research vs. specific content searches handled by other tools, leaving room for ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_occasion_guide: Get gifting/occasion playbook (Grade: B)

Returns the guide for a specific occasion (Mother's/Father's Day, milestone birthday, anniversary, new baby, memorial) including lead-time recommendations.

Parameters (JSON Schema):
- event (required)
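A call to this tool only needs the single `event` argument. The sketch below shows a hypothetical `tools/call` payload; the exact slug format ("memorial") is an assumption, since the schema does not document the accepted event values.

```python
import json

# Hypothetical "tools/call" request for get_occasion_guide.
# The "memorial" slug is an assumption; the schema leaves the
# event parameter's accepted values undocumented.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_occasion_guide",
        "arguments": {"event": "memorial"},
    },
}
print(json.dumps(request, indent=2))
```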
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns a guide with lead-time recommendations, but it doesn't cover critical aspects such as whether this is a read-only operation, potential rate limits, authentication needs, error handling, or the format of the returned guide. For a tool with no annotations, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core functionality ('Returns the guide for a specific occasion') and includes key details (occasion types and lead-time recommendations) without any wasted words. Every part earns its place, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is adequate but incomplete. It covers the purpose and parameter semantics but lacks usage guidelines and behavioral transparency. For a simple lookup tool, this is minimally viable, but it could benefit from more context on when to use it and what the output entails.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaning beyond the input schema by specifying that the 'event' parameter corresponds to occasions like Mother's/Father's Day, milestone birthday, etc., and mentions 'lead-time recommendations' as part of the output. With 0% schema description coverage and only 1 parameter, this compensates well, though it could detail the enum values more explicitly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Returns') and resource ('guide for a specific occasion'), and it lists the types of occasions covered. However, it doesn't explicitly differentiate from sibling tools like 'get_citations' or 'list_pillars', which might also return guides or related content, so it doesn't fully distinguish from alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions the types of occasions but doesn't specify prerequisites, exclusions, or compare it to sibling tools like 'search_family_story_content' or 'search_interview_questions', leaving the agent without clear usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pillar_overview: Get pillar overview (Grade: B)

Returns the guide content and top Q&A for one of the eight pillars (interview-craft, question-banks, capture-methods, artifact-preservation, end-of-life, gifting, publishing, digital-legacy).

Parameters (JSON Schema):
- pillar (required)
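Here the `pillar` argument can be taken directly from the eight slugs listed in the description. A sketch of the corresponding `tools/call` payload (the JSON-RPC framing itself is illustrative, not taken from this server's docs):

```python
import json

# Hypothetical "tools/call" request for get_pillar_overview.
# "digital-legacy" is one of the eight pillar slugs the tool
# description documents explicitly.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "get_pillar_overview",
        "arguments": {"pillar": "digital-legacy"},
    },
}
print(json.dumps(request, indent=2))
```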
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states it 'Returns' content, implying a read-only operation, but doesn't specify if it's safe, if it requires authentication, what the return format is (e.g., structured data, text), or any rate limits. For a tool with no annotations, this leaves significant gaps in understanding its behavior beyond the basic action.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key action ('Returns') and resource. It includes the list of pillars, which is necessary for clarity, and avoids any redundant or wasted words. Every part of the sentence earns its place, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no nested objects, no output schema), the description is somewhat complete but has gaps. It explains what the tool does and the parameter values, but without annotations or output schema, it lacks details on behavior, return format, and usage context. This makes it adequate but not fully informative for an AI agent to use correctly in all scenarios.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaning beyond the input schema by listing all eight possible pillar values ('interview-craft', 'question-banks', etc.), which the schema only indicates via an enum without explanation. This clarifies what 'pillar' represents. With 0% schema description coverage and 1 parameter, the description compensates well, though it doesn't explain the semantics of 'guide content' or 'top Q&A' in relation to the parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Returns') and resource ('guide content and top Q&A for one of the eight pillars'), making the purpose specific and understandable. It lists the eight possible pillars, which helps distinguish it from siblings like 'list_pillars' (which likely lists pillars) or 'search_interview_questions' (which searches within content). However, it doesn't explicitly differentiate from 'get_occasion_guide' or 'search_family_story_content', which might overlap in providing content, so it's not a perfect 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose it over 'list_pillars' (to get an overview vs. a list) or 'search_family_story_content' (to get structured pillar content vs. searching stories). There's no context on prerequisites, exclusions, or comparisons with sibling tools, leaving usage unclear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_pillars: List the 8 pillars (Grade: B)

Returns the full pillar taxonomy. Useful for navigating the corpus.

Parameters (JSON Schema): none
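Because the tool takes no parameters, a call is just the bare envelope with an empty arguments object. A minimal sketch, assuming the standard MCP JSON-RPC framing:

```python
import json

# Hypothetical "tools/call" request for list_pillars: no arguments,
# so the arguments object is simply empty.
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {"name": "list_pillars", "arguments": {}},
}
print(json.dumps(request, indent=2))
```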

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns data, implying a read-only operation, but lacks details on permissions, rate limits, error handling, or response format. For a tool with zero annotation coverage, this is insufficient to inform the agent adequately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two short sentences that directly address purpose and utility. It's front-loaded with the core function and avoids unnecessary details. However, the second sentence ('Useful for navigating the corpus') is somewhat vague and could be more specific, slightly reducing efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is minimally adequate. It states what the tool does but lacks depth on behavior, output format, or integration with siblings. Without annotations or output schema, more context on return values or usage scenarios would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100%, so there are no parameters to document. The description doesn't need to add parameter semantics, and it doesn't introduce any confusion. A baseline score of 4 is appropriate as it avoids redundancy and doesn't mislead about inputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Returns the full pillar taxonomy.' This specifies the verb ('returns') and resource ('full pillar taxonomy'), making it understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_pillar_overview' or explain what 'pillar taxonomy' entails, keeping it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance with 'Useful for navigating the corpus,' which implies a context but is vague. It doesn't specify when to use this tool versus alternatives like 'get_pillar_overview' or 'search_family_story_content,' nor does it mention prerequisites or exclusions, leaving significant gaps in usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_family_story_content: Search family-story preservation content (Grade: A)

Main entry point. Natural-language search across the Oral Heritage Index corpus. Use for any question about interviewing relatives, recording methods, preserving artifacts, end-of-life capture, gifting occasions, publishing family books, or digital legacy. Returns top hits with citations.

Parameters (JSON Schema):
- query (required): Natural-language question or topic.
- top_k (optional): Number of hits to return (default 5).
- pillar (optional): Restrict to one pillar; omit for full-corpus search.
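Since `pillar` should be omitted entirely for a full-corpus search, a client would assemble the arguments conditionally. A hypothetical helper (the function name and query string are illustrative, not part of this server):

```python
import json

def build_search_call(query, top_k=5, pillar=None, request_id=5):
    """Assemble a hypothetical tools/call request for
    search_family_story_content. Per the schema notes, pillar is
    left out of the arguments entirely for a full-corpus search."""
    arguments = {"query": query, "top_k": top_k}
    if pillar is not None:
        arguments["pillar"] = pillar
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "search_family_story_content",
            "arguments": arguments,
        },
    }

request = build_search_call(
    "how do I record an elderly relative's stories",
    pillar="interview-craft",
)
print(json.dumps(request, indent=2))
```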
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It discloses the search behavior ('returns top hits with citations') and the natural-language query capability, but lacks details about rate limits, authentication needs, result format beyond 'top hits', or error conditions. It adds some behavioral context but leaves gaps for a search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with three sentences: purpose statement, usage scope, and return behavior. Every sentence adds value without repetition. It's appropriately sized and front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 3 parameters, 100% schema coverage, but no annotations or output schema, the description provides adequate purpose and usage context. However, it lacks details about result structure, pagination, error handling, or performance characteristics that would be helpful given the absence of output schema and annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all three parameters. The description doesn't add any parameter-specific information beyond what's in the schema (natural-language query, top_k default, pillar restriction). Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as the 'main entry point' for 'natural-language search across the Oral Heritage Index corpus' with specific domains listed (interviewing relatives, recording methods, etc.). It distinguishes from siblings by being the primary search tool rather than specialized functions like get_citations or search_interview_questions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('for any question about interviewing relatives, recording methods...'), but doesn't explicitly state when NOT to use it or when to prefer sibling tools like search_interview_questions for specific question types. The 'main entry point' phrasing suggests broad applicability.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_interview_questions: Find curated interview questions (Grade: A)

Returns curated interview prompts filtered by relationship, life stage, or theme. Use when the user asks 'what questions should I ask...' about a relative.

Parameters (JSON Schema):
- count (optional): Max questions to return (default 10).
- theme (optional)
- life_stage (optional)
- relationship (optional)
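A fully filtered call might look like the sketch below. The theme and life-stage values ('immigration', 'early-adult') are enum values the Parameters review for this tool cites as examples; the 'grandparent' relationship value is an assumption, since that enum is undocumented.

```python
import json

# Hypothetical filtered "tools/call" request for search_interview_questions.
# "immigration" and "early-adult" are enum values cited in the review;
# "grandparent" is an assumed relationship value.
request = {
    "jsonrpc": "2.0",
    "id": 6,
    "method": "tools/call",
    "params": {
        "name": "search_interview_questions",
        "arguments": {
            "count": 10,  # max questions to return (schema default)
            "theme": "immigration",
            "life_stage": "early-adult",
            "relationship": "grandparent",
        },
    },
}
print(json.dumps(request, indent=2))
```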
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool 'Returns curated interview prompts' but doesn't describe the return format, pagination, error conditions, or any rate limits. For a search tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with zero waste. The first sentence states the purpose and filtering criteria, and the second provides clear usage guidance. Every word earns its place, making it efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description adequately covers purpose and usage but lacks details on return values, error handling, and behavioral constraints. For a search tool with 4 parameters and no structured output documentation, it's minimally complete but has clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 25% (only the 'count' parameter has a description), but the description adds value by explaining that filtering occurs 'by relationship, life stage, or theme'—matching the three enum parameters. However, it doesn't explain the meaning of enum values like 'immigration' or 'early-adult,' leaving semantic gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Returns') and resource ('curated interview prompts') with specific filtering criteria ('filtered by relationship, life stage, or theme'). It distinguishes this tool from siblings like 'get_citations' or 'search_family_story_content' by focusing on interview questions rather than citations, guides, or story content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Use when the user asks 'what questions should I ask...' about a relative.' This provides clear context for invocation and differentiates it from siblings that handle different types of content retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
