Remember Me Collections

Server Details

Browse Bible verse collections from Remember Me, a free memorization app in 48 languages

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.3/5 across 3 of 3 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose: browse_collections for searching/filtering collections, get_collection_detail for retrieving full content, and get_collection_metrics for engagement analytics. There is no overlap in functionality, and an agent can easily distinguish between them based on their descriptions.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (browse_collections, get_collection_detail, get_collection_metrics) with clear, descriptive verbs. The naming is uniform and predictable throughout the set.

Tool Count: 3/5

With only 3 tools, the server feels thin for managing Bible verse collections. While the tools cover browsing, detail retrieval, and metrics, the absence of CRUD operations (e.g., create, update, delete) or user-specific actions suggests the scope is limited, making the count borderline appropriate.

Completeness: 2/5

The tool set is severely incomplete for a collections management domain. It lacks essential operations like creating, updating, or deleting collections, as well as user-centric tools (e.g., subscribe, track progress). The surface only supports read-only access to published data, which will cause agent failures in broader workflows.

Available Tools (3 tools)
browse_collections (Grade: A)
Read-only, Idempotent

Browse published Bible verse collections. Search by keyword, filter by language, sort by popularity.

Args:
  search: Search term to filter by name, description, or publisher name.
  language: Language code prefix (e.g. "en", "de", "ja", "zh").
  ordering: Sort order: -downloads (default), -created, name.
  limit: Number of results (1-100, default 20).
  offset: Starting position for pagination.

Parameters (JSON Schema)

Name      Required  Description  Default
limit     No
offset    No
search    No
language  No
ordering  No

Output Schema (JSON Schema)

Name    Required  Description
result  Yes
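To make the parameter semantics concrete, here is a minimal Python sketch that assembles the argument dict for a browse_collections call. The helper name and the validation logic are assumptions for illustration only; the parameter names, defaults, and ranges come from the description above.

```python
def build_browse_args(search=None, language=None,
                      ordering="-downloads", limit=20, offset=0):
    """Assemble the argument dict for a browse_collections call.

    Defaults mirror the tool description: ordering "-downloads",
    limit 20, offset 0. Optional filters are omitted when unset.
    """
    if not 1 <= limit <= 100:
        raise ValueError("limit must be between 1 and 100")
    if ordering not in ("-downloads", "-created", "name"):
        raise ValueError(f"unsupported ordering: {ordering}")
    args = {"ordering": ordering, "limit": limit, "offset": offset}
    if search is not None:
        args["search"] = search
    if language is not None:
        args["language"] = language
    return args

# Page through German-language results 20 at a time by advancing offset:
page_two = build_browse_args(language="de", offset=20)
```

Pagination works by keeping limit fixed and advancing offset, as in the page_two example.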
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond annotations by specifying that it browses 'published' collections (implying there may be unpublished ones), and mentions pagination behavior through limit/offset parameters. While annotations cover safety (readOnlyHint=true, destructiveHint=false, idempotentHint=true), the description adds practical usage context about what kind of data is accessible.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by detailed parameter explanations. Every sentence earns its place: the first establishes purpose and capabilities, and the Args section efficiently documents all parameters with clear examples and constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

This is a read-only browsing tool with comprehensive annotations; the 0% schema description coverage is compensated by excellent parameter documentation in the description, and the presence of an output schema means return values don't need explanation. The description is therefore complete enough: it covers purpose, usage context, and all parameter semantics thoroughly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description carries the full burden of explaining parameters and does so comprehensively. It provides clear semantics for all 5 parameters: search term scope, language code format, ordering options with defaults, limit range, and offset purpose for pagination. This adds significant value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('browse') and resource ('published Bible verse collections'), and distinguishes this tool from its siblings by specifying it's for browsing/searching collections rather than getting detailed information (get_collection_detail) or metrics (get_collection_metrics). The description explicitly mentions search, filter, and sort capabilities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool ('browse published Bible verse collections'), but doesn't explicitly state when NOT to use it or directly compare it to the sibling tools. The mention of search/filter/sort capabilities implies usage scenarios, but lacks explicit 'use this when X, use sibling Y when Z' guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_collection_detail (Grade: A)
Read-only, Idempotent

Get full details of a published collection including all verse text, references, and topics.

Args:
  collection_id: The collection ID (from browse_collections results).

Parameters (JSON Schema)

Name           Required  Description  Default
collection_id  Yes

Output Schema (JSON Schema)

Name    Required  Description
result  Yes
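The two-step workflow implied here (browse first, then fetch details) can be sketched in Python. Note that the shape of a browse result item, including an "id" field, is a hypothetical assumption; this page does not document the fields inside the output schema's result.

```python
def detail_args_from_browse_item(item):
    """Build the get_collection_detail argument dict from one browse result item.

    Assumes (hypothetically) that each browse_collections item carries
    its collection identifier in an "id" field.
    """
    if "id" not in item:
        raise KeyError("browse item is missing an 'id' field")
    return {"collection_id": item["id"]}

# Hypothetical item as it might appear in a browse_collections result:
sample_item = {"id": "abc123", "name": "Psalms of Comfort"}
detail_args = detail_args_from_browse_item(sample_item)  # {"collection_id": "abc123"}
```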
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide safety and behavior hints (readOnly, non-destructive, idempotent, closed-world). The description adds valuable context by specifying it retrieves 'full details' including specific content types (verse text, references, topics), which helps the agent understand the depth and nature of the returned data beyond what annotations indicate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a clear purpose statement followed by a concise parameter explanation. Every sentence adds value: the first defines the tool's scope, and the second clarifies the parameter's usage. No wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter), rich annotations covering safety and behavior, and the presence of an output schema (which handles return values), the description is complete. It provides purpose, parameter guidance, and content details, leaving no significant gaps for the agent to operate effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by clearly explaining the single parameter's purpose and source ('collection_id: The collection ID (from browse_collections results)'). This adds essential meaning beyond the bare schema, making the parameter's role and origin clear.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get full details') and resource ('published collection'), with explicit scope ('including all verse text, references, and topics'). It distinguishes from sibling tools (browse_collections, get_collection_metrics) by focusing on detailed content rather than listing or metrics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context by specifying it's for 'published collection' details and referencing the source of collection_id ('from browse_collections results'), which helps guide usage. However, it doesn't explicitly state when not to use it or name alternatives like get_collection_metrics for different purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_collection_metrics (Grade: A)
Read-only, Idempotent

Get community engagement metrics: memorization progress, verse mastery, difficult verses, and activity stats.

Args:
  collection_id: The collection ID (from browse_collections results).

Parameters (JSON Schema)

Name           Required  Description  Default
collection_id  Yes

Output Schema (JSON Schema)

Name    Required  Description
result  Yes
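As an illustration of consuming this tool's output, the Python sketch below ranks the hardest verses from a metrics payload. The payload shape (a "difficult_verses" list with "reference" and "mastery" fields) is entirely hypothetical, since the actual fields of the output schema are not documented on this page.

```python
def hardest_verses(metrics, top_n=3):
    """Return the top_n verses with the lowest mastery rate.

    Assumes a hypothetical payload shape:
    {"difficult_verses": [{"reference": str, "mastery": float}, ...]}
    """
    verses = metrics.get("difficult_verses", [])
    return sorted(verses, key=lambda v: v["mastery"])[:top_n]

# Hypothetical get_collection_metrics response fragment:
sample = {"difficult_verses": [
    {"reference": "Rom 8:28", "mastery": 0.91},
    {"reference": "Isa 40:31", "mastery": 0.47},
    {"reference": "Phil 4:6", "mastery": 0.63},
]}
# hardest_verses(sample, 2) returns the Isa 40:31 and Phil 4:6 entries.
```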
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide key behavioral hints (readOnlyHint: true, destructiveHint: false, etc.), so the description's burden is lower. It adds useful context by specifying the types of metrics retrieved (e.g., memorization progress, activity stats), which isn't covered by annotations. No contradictions with annotations are present, and it offers additional operational insight.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose in the first sentence and following with parameter details. Every sentence adds value, with no wasted words, though the structure could be slightly more polished (e.g., bullet points for metrics).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple read operation with one parameter), rich annotations (covering safety and idempotency), and the presence of an output schema (which handles return values), the description is sufficiently complete. It explains what metrics are retrieved and parameter semantics, addressing key gaps without overloading with redundant information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It explains that 'collection_id' is 'The collection ID (from browse_collections results)', adding semantic meaning about the parameter's source and purpose beyond the schema's basic type and title. This effectively clarifies the parameter, though it could be more detailed (e.g., format or constraints).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get community engagement metrics') and resources ('memorization progress, verse mastery, difficult verses, and activity stats'), making it easy to understand what data is retrieved. However, it doesn't explicitly differentiate from sibling tools like 'get_collection_detail', which might also provide collection-related information, so it falls short of a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by referencing 'collection_id (from browse_collections results)', suggesting a workflow where this tool follows browsing, but it doesn't explicitly state when to use this tool versus alternatives like 'get_collection_detail' or provide clear exclusions. This gives some context but lacks detailed guidance on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

