Remember Me Collections
Server Details
Browse published Bible verse collections for memorization — multilingual, free, spaced repetition
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools (3)

browse_collections (Grade A, Read-only, Idempotent)
Browse published Bible verse collections. Search by keyword, filter by language, sort by popularity.
Args:
- search: Search term to filter by name, description, or publisher name.
- language: Language code prefix (e.g. "en", "de", "ja", "zh").
- ordering: Sort order: -downloads (default), -created, name.
- limit: Number of results (1-100, default 20).
- offset: Starting position for pagination.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results (1-100). | 20 |
| offset | No | Starting position for pagination. | |
| search | No | Search term to filter by name, description, or publisher name. | |
| language | No | Language code prefix (e.g. "en", "de", "ja", "zh"). | |
| ordering | No | Sort order: -downloads, -created, or name. | -downloads |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
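To make the calling convention concrete, here is a minimal sketch of invoking browse_collections through MCP's standard JSON-RPC tools/call envelope. The tool and argument names come from the listing above; the request id and argument values are illustrative assumptions.

```typescript
// Hypothetical MCP tools/call payload for browse_collections.
// Argument names and constraints come from the Args section above;
// the search term, language, and request id are illustrative only.
const browseRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "browse_collections",
    arguments: {
      search: "psalms",       // matches name, description, or publisher name
      language: "en",         // language code prefix, e.g. "en", "de", "ja", "zh"
      ordering: "-downloads", // default; alternatives: "-created", "name"
      limit: 20,              // 1-100, default 20
      offset: 0,              // starting position; add `limit` to page forward
    },
  },
};

console.log(JSON.stringify(browseRequest, null, 2));
```

To fetch the next page, repeat the call with offset: 20 and the other arguments unchanged.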
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only, idempotent, non-destructive behavior. The description adds valuable behavioral context not in annotations: it specifies sort options ('-downloads (default), -created, name'), explains pagination mechanics, and clarifies that 'published' collections are browsed. It does not mention rate limits or authentication requirements, preventing a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a single-sentence purpose statement followed by a clear Args block. Every line provides specific constraints or examples (e.g., language prefixes, ordering options). No filler text or redundant explanations are present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists (so return values need not be described), the description adequately covers the tool's functionality. It documents all parameters and basic behavior. It could be improved by mentioning the pagination pattern or result set characteristics, but it is sufficient for agent operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage (parameters have only titles), the description fully compensates by documenting all 5 parameters in the Args section: it explains that 'search' covers name/description/publisher, provides language code examples ('en', 'de', 'ja'), specifies the ordering enum values and default, and details the limit range (1-100) and offset purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with 'Browse published Bible verse collections' which provides a specific verb (browse) and resource (Bible verse collections). This clearly distinguishes it from sibling tools 'get_collection_detail' and 'get_collection_metrics' by implying this is for listing/searching rather than retrieving specific records or analytics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the verb 'browse' implicitly suggests this is for discovery and listing versus the sibling 'get_collection_detail' (which implies retrieval by ID), there is no explicit guidance on when to use this tool versus alternatives. No 'when not to use' or prerequisite conditions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_collection_detail (Grade A, Read-only, Idempotent)
Get full details of a published collection including all verse text, references, and topics.
Args:
- collection_id: The collection ID (from browse_collections results).
| Name | Required | Description | Default |
|---|---|---|---|
| collection_id | Yes | The collection ID (from browse_collections results). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
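Chaining from a browse result, a hypothetical get_collection_detail call could look like the sketch below. The placeholder ID stands in for a real value from browse_collections results; its exact type (string vs. number) is not documented on this page.

```typescript
// Hypothetical follow-up call. "COLLECTION_ID_FROM_BROWSE" is a placeholder;
// real IDs come from browse_collections results and may be numeric.
const detailRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "get_collection_detail",
    arguments: {
      collection_id: "COLLECTION_ID_FROM_BROWSE", // placeholder value
    },
  },
};

console.log(JSON.stringify(detailRequest, null, 2));
```

Note that responses can be large, since the tool returns all verse text for the collection.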
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare read-only/idempotent status, the description adds critical behavioral context: it specifies 'published' collections (indicating a filter), and warns about payload size by mentioning 'all verse text' (suggesting large responses). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently front-loaded with no wasted words. The main sentence covers purpose and content, while the Args section provides necessary parameter context. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter lookup tool with output schema available, the description is complete. It covers the resource constraints (published), return content structure, and parameter source without needing to describe return format (handled by output schema).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage (only 'title': 'Collection Id'), the description fully compensates by explaining the parameter semantics: it defines collection_id and specifies its provenance '(from browse_collections results),' which is essential for an ID parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the specific verb (Get), resource (published collection), and detailed scope (full details including verse text, references, and topics). It clearly distinguishes from browse_collections (list vs. detail) and get_collection_metrics (content vs. metrics).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The Args section implies a workflow by stating the collection_id comes 'from browse_collections results,' suggesting prerequisite usage. However, it lacks explicit when-to-use guidance contrasting with get_collection_metrics or explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_collection_metrics (Grade A, Read-only, Idempotent)
Get community engagement metrics: memorization progress, verse mastery, difficult verses, and activity stats.
Args:
- collection_id: The collection ID (from browse_collections results).
| Name | Required | Description | Default |
|---|---|---|---|
| collection_id | Yes | The collection ID (from browse_collections results). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
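The metrics tool shares the same single-parameter shape, so the call differs only in the tool name; again, the placeholder ID is an illustrative assumption.

```typescript
// Hypothetical get_collection_metrics call; same envelope and placeholder
// convention as the get_collection_detail sketch above.
const metricsRequest = {
  jsonrpc: "2.0",
  id: 3,
  method: "tools/call",
  params: {
    name: "get_collection_metrics",
    arguments: {
      collection_id: "COLLECTION_ID_FROM_BROWSE", // placeholder value
    },
  },
};

console.log(JSON.stringify(metricsRequest, null, 2));
```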
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations already cover read-only, idempotent, and non-destructive traits, the description adds valuable behavioral context by enumerating the specific metric categories returned (memorization progress, verse mastery, etc.), helping the agent understand the data scope without needing to inspect the output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with zero waste: the first sentence defines the tool's purpose and output, while the Args section provides the single required parameter's semantics. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one simple parameter and an output schema exists (reducing the need to describe return values in prose), the description is complete. It could be improved by briefly distinguishing its metrics focus from 'get_collection_detail'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage (the 'collection_id' property lacks a description field), the description fully compensates by explaining the parameter's purpose and origin ('from browse_collections results'), giving the agent necessary context to provide the correct value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and clearly defines the resource (community engagement metrics). It distinguishes itself from sibling tools 'browse_collections' and 'get_collection_detail' by specifying the exact data types returned: memorization progress, verse mastery, difficult verses, and activity stats.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a useful prerequisite hint that the collection_id comes 'from browse_collections results,' implying a workflow sequence. However, it lacks explicit guidance on when to use this versus the sibling 'get_collection_detail' or exclusions for when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming a connector lets you:

- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:

- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:

- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:

- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.