
Server Details

Bible translations, books, chapters, verses, and search

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Tool Descriptions: B

Average 3.3/5 across 5 of 5 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: bible_books lists books, bible_chapter retrieves chapter content, bible_search performs keyword searches, bible_translations lists translations, and bible_verse gets specific verses. The descriptions make it easy to differentiate between them, eliminating any risk of misselection.

Naming Consistency: 5/5

All tool names follow a consistent 'bible_' prefix with descriptive nouns (books, chapter, search, translations, verse), creating a predictable and readable pattern. There are no deviations in style or convention, making the set highly uniform.

Tool Count: 5/5

With 5 tools, this server is well-scoped for a Bible API, covering core operations like listing translations/books, retrieving chapters/verses, and searching. Each tool earns its place without feeling thin or overloaded, fitting typical expectations for such a domain.

Completeness: 4/5

The toolset provides strong coverage for reading and searching the Bible. A minor gap exists in create/update/delete functionality (e.g., for user notes or bookmarks), but this is reasonable for a read-focused API, and agents can work around it effectively.

Available Tools

5 tools
bible_books: List books in a translation (A)
Read-only, Idempotent

Return books and chapter counts for a translation.

Parameters (JSON Schema)
  translation (required)
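As a rough sketch of how an agent would invoke this tool, the MCP `tools/call` method takes the tool name and an arguments object over JSON-RPC 2.0. The translation id "kjv" below is an assumption; valid ids would come from bible_translations.

```python
import json

# Hypothetical MCP tools/call request for bible_books (JSON-RPC 2.0).
# The "kjv" translation id is an assumed example, not documented by the server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "bible_books",
        "arguments": {"translation": "kjv"},
    },
}
print(json.dumps(request, indent=2))
```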
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), so the description's burden is lower. It adds value by specifying the output includes 'chapter counts', which is not indicated in annotations or schema, providing useful context beyond the structured data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no wasted words, front-loading the key action and resource. It efficiently conveys the core functionality without unnecessary elaboration, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter, no output schema) and rich annotations, the description is mostly complete. It specifies what is returned ('books and chapter counts'), which addresses the lack of output schema. However, it could improve by hinting at the output structure or usage context relative to siblings, slightly limiting completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description mentions 'for a translation', aligning with the single required parameter 'translation', but adds no further semantics (e.g., format, examples, constraints). With 0% schema description coverage, it partially compensates by linking the parameter to the tool's purpose, but does not fully address the lack of schema details, meeting the baseline for minimal parameter info.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
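The 0% schema description coverage noted here means the `translation` parameter carries no `description` field in the input schema. A hypothetical schema that would close the gap (the id format and "kjv" example are assumptions, not taken from the server):

```python
import json

# Hypothetical input schema for bible_books with the missing
# "description" field filled in; the "kjv" example id is an assumption.
input_schema = {
    "type": "object",
    "properties": {
        "translation": {
            "type": "string",
            "description": "Translation identifier as returned by "
                           "bible_translations, e.g. 'kjv'.",
        },
    },
    "required": ["translation"],
}
print(json.dumps(input_schema, indent=2))
```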

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Return') and the resource ('books and chapter counts for a translation'), making the purpose understandable. However, it does not explicitly differentiate from sibling tools like 'bible_translations' (which might list translations rather than books within one), leaving room for slight ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. For example, it does not specify if this is for listing all books in a translation as opposed to searching for specific content with 'bible_search' or fetching details with 'bible_chapter'/'bible_verse', leaving the agent to infer usage from context alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bible_chapter: Fetch a chapter (B)
Read-only, Idempotent

Return all verses, groupings, and red letter data for a chapter.

Parameters (JSON Schema)
  book (required)
  chapter (required)
  translation (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a read-only, non-destructive, idempotent operation with a closed world, so the description doesn't need to repeat safety aspects. It adds value by specifying the return content ('verses, groupings, and red letter data'), which isn't covered by annotations, but lacks details on behavior like error handling or response format. No contradiction with annotations is present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key action and resource. It wastes no words and is appropriately sized for a simple fetch operation, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a fetch tool with three required parameters, 0% schema coverage, and no output schema, the description is incomplete. It doesn't explain parameter semantics, return values beyond a vague list, or usage context, failing to compensate for the lack of structured data. This leaves significant gaps for an agent to invoke the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning parameters are undocumented in the schema. The description doesn't compensate by explaining what 'translation', 'book', or 'chapter' mean (e.g., valid formats, examples, or constraints), leaving semantics unclear. This is inadequate given the low coverage and three required parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Return') and resource ('verses, groupings, and red letter data for a chapter'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'bible_verse' (which likely returns a single verse) or 'bible_search' (which likely searches across texts), leaving room for ambiguity in tool selection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'bible_verse' or 'bible_search'. It lacks context about prerequisites, such as valid values for 'translation' or 'book', or any exclusions, leaving the agent to infer usage from the tool name and parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bible_translations: List Bible translations (B)
Read-only, Idempotent

Return all available Bible translations.

Parameters (JSON Schema)
  No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this as a read-only, non-destructive, idempotent operation with a closed-world assumption. The description adds no behavioral context beyond this, such as rate limits, authentication needs, or response format details. Since annotations cover the safety profile adequately, a baseline score of 3 is appropriate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core purpose and appropriately sized for a simple tool, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema) and rich annotations, the description is minimally adequate. However, it lacks details about the return format (e.g., list structure, translation metadata) that would help an agent use the output effectively, especially without an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the schema fully documents the input requirements. The description doesn't need to compensate for any gaps, so it meets the baseline for a parameterless tool. No additional parameter semantics are required or provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Return') and resource ('all available Bible translations'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'bible_books' or 'bible_search' beyond the resource type, which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'bible_search' or 'bible_books'. It lacks context about use cases, prerequisites, or exclusions, leaving the agent to infer usage from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bible_verse: Fetch a verse (B)
Read-only, Idempotent

Return a specific verse, with red letter data if available.

Parameters (JSON Schema)
  book (required)
  verse (required)
  chapter (required)
  translation (required)
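Since none of the four parameters are documented, an agent has to guess at value formats. A sketch of a plausible `tools/call` request, where every argument value (the translation id, the book-name format, and whether chapter and verse are numbers or strings) is an assumption:

```python
import json

# Hypothetical MCP tools/call request for bible_verse (JSON-RPC 2.0).
# All argument values and types below are assumptions; the server does
# not document its accepted formats.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "bible_verse",
        "arguments": {
            "translation": "kjv",  # assumed id
            "book": "John",        # assumed book-name format
            "chapter": 3,          # assumed numeric type
            "verse": 16,           # assumed numeric type
        },
    },
}
print(json.dumps(request, indent=2))
```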
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds 'with red letter data if available' which provides useful behavioral context about optional formatting output. However, it doesn't disclose other behavioral traits like rate limits, authentication needs, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
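The annotations cited above are the standard MCP tool-annotation fields, as they would appear alongside the tool in a `tools/list` response (field names per the MCP spec; values are those reported for bible_verse):

```python
# MCP tool annotations for bible_verse as reported by this server.
# Field names come from the MCP specification.
annotations = {
    "readOnlyHint": True,     # does not modify state
    "destructiveHint": False, # no destructive updates
    "idempotentHint": True,   # repeated calls have no extra effect
    "openWorldHint": False,   # operates on a closed domain
}
print(annotations)
```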

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at 10 words, with zero wasted language. It's front-loaded with the core purpose ('Return a specific verse') and adds only essential additional context ('with red letter data if available'). Every word earns its place in this minimal but complete statement.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with good annotations (covering safety and idempotency) but no output schema and 0% parameter documentation, the description is minimally adequate. It states what the tool does and adds some output formatting context, but leaves parameters completely unexplained and provides no usage guidance. The annotations carry significant weight, but parameter understanding remains a gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the schema provides no parameter descriptions. The description mentions 'red letter data' which relates to output formatting, not input parameters. It doesn't explain what the four required parameters (translation, book, chapter, verse) mean or provide examples. With 0% coverage and 4 parameters, the description fails to compensate for the schema gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Return a specific verse' specifies the verb (return) and resource (verse). It distinguishes from sibling tools like bible_chapter (returns a chapter) and bible_search (searches across verses), but doesn't explicitly differentiate from bible_books or bible_translations. The addition of 'with red letter data if available' provides useful context about optional formatting.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like bible_chapter (for full chapters) or bible_search (for searching content), nor does it specify prerequisites or constraints beyond what's implied by the parameters. The agent must infer usage from the tool name and parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

