Shepherd Bible API
Server Details
Bible translations, books, chapters, verses, and search
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.3/5 across 5 of 5 tools scored.
Each tool has a clearly distinct purpose with no overlap: bible_books lists books, bible_chapter retrieves chapter content, bible_search performs keyword searches, bible_translations lists translations, and bible_verse gets specific verses. The descriptions make it easy to differentiate between them, eliminating any risk of misselection.
All tool names follow a consistent 'bible_' prefix with descriptive nouns (books, chapter, search, translations, verse), creating a predictable and readable pattern. There are no deviations in style or convention, making the set highly uniform.
With 5 tools, this server is well-scoped for a Bible API, covering core operations like listing translations/books, retrieving chapters/verses, and searching. Each tool earns its place without feeling thin or overloaded, fitting typical expectations for such a domain.
The toolset provides strong coverage for reading and searching the Bible, covering the retrieval side of CRUD. A minor gap exists in create/update/delete functionality (e.g., for user notes or bookmarks), but this is reasonable for a read-focused API, and agents can work around it effectively.
Available Tools
5 tools

bible_books: List books in a translation (Grade A · Read-only · Idempotent)
Return books and chapter counts for a translation.
| Name | Required | Description | Default |
|---|---|---|---|
| translation | Yes | | |
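Since the schema documents none of the parameters, a minimal sketch of how an agent might shape a bible_books call (the value "KJV" is an assumption; the server does not document which translation identifiers it accepts):

```python
# Hypothetical arguments for a bible_books call. 'translation' is the only
# parameter and is required per the table above; "KJV" is an assumed value.
args = {"translation": "KJV"}

# A quick pre-flight check an agent might run before invoking the tool.
missing = {"translation"} - args.keys()
print(sorted(missing))  # []
```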
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), so the description's burden is lower. It adds value by specifying the output includes 'chapter counts', which is not indicated in annotations or schema, providing useful context beyond the structured data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words, front-loading the key action and resource. It efficiently conveys the core functionality without unnecessary elaboration, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, no output schema) and rich annotations, the description is mostly complete. It specifies what is returned ('books and chapter counts'), which addresses the lack of output schema. However, it could improve by hinting at the output structure or usage context relative to siblings, slightly limiting completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions 'for a translation', aligning with the single required parameter 'translation', but adds no further semantics (e.g., format, examples, constraints). With 0% schema description coverage, it partially compensates by linking the parameter to the tool's purpose, but does not fully address the lack of schema details, meeting the baseline for minimal parameter info.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Return') and the resource ('books and chapter counts for a translation'), making the purpose understandable. However, it does not explicitly differentiate from sibling tools like 'bible_translations' (which might list translations rather than books within one), leaving room for slight ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. For example, it does not specify if this is for listing all books in a translation as opposed to searching for specific content with 'bible_search' or fetching details with 'bible_chapter'/'bible_verse', leaving the agent to infer usage from context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bible_chapter: Fetch a chapter (Grade B · Read-only · Idempotent)
Return all verses, groupings, and red letter data for a chapter.
| Name | Required | Description | Default |
|---|---|---|---|
| book | Yes | ||
| chapter | Yes | ||
| translation | Yes |
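With 0% schema description coverage, the expected value formats for bible_chapter are undocumented; a sketch of one plausible argument shape (book name as a string, chapter as an integer, both assumptions):

```python
# Hypothetical arguments for a bible_chapter call. All three parameters are
# required per the table above; the value formats are assumptions.
args = {"translation": "KJV", "book": "John", "chapter": 3}

missing = {"translation", "book", "chapter"} - args.keys()
print(sorted(missing))  # []
```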
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a read-only, non-destructive, idempotent operation with a closed world, so the description doesn't need to repeat safety aspects. It adds value by specifying the return content ('verses, groupings, and red letter data'), which isn't covered by annotations, but lacks details on behavior like error handling or response format. No contradiction with annotations is present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the key action and resource. It wastes no words and is appropriately sized for a simple fetch operation, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a fetch tool with three required parameters, 0% schema coverage, and no output schema, the description is incomplete. It doesn't explain parameter semantics, return values beyond a vague list, or usage context, failing to compensate for the lack of structured data. This leaves significant gaps for an agent to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning parameters are undocumented in the schema. The description doesn't compensate by explaining what 'translation', 'book', or 'chapter' mean (e.g., valid formats, examples, or constraints), leaving semantics unclear. This is inadequate given the low coverage and three required parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Return') and resource ('verses, groupings, and red letter data for a chapter'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'bible_verse' (which likely returns a single verse) or 'bible_search' (which likely searches across texts), leaving room for ambiguity in tool selection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'bible_verse' or 'bible_search'. It lacks context about prerequisites, such as valid values for 'translation' or 'book', or any exclusions, leaving the agent to infer usage from the tool name and parameters alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bible_search: Search Bible verses (Grade B · Read-only · Idempotent)
Search for verses by keyword or exact match.
| Name | Required | Description | Default |
|---|---|---|---|
| q | Yes | ||
| page | No | ||
| type | No |
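A sketch of a bible_search invocation under stated assumptions: only 'q' is required, and the "exact" value for 'type' is a guess based on the description's "keyword or exact match" wording, not documented behavior:

```python
# Hypothetical arguments for a bible_search call. 'page' and 'type' are
# optional per the table; their values here are assumptions.
args = {"q": "love one another", "page": 1, "type": "exact"}

missing = {"q"} - args.keys()
print(sorted(missing))  # []
```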
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), so the description doesn't need to repeat these. It adds value by specifying the search types ('keyword or exact match'), which isn't in the annotations. However, it lacks details on rate limits, authentication needs, or result format, keeping it at a baseline level.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence: 'Search for verses by keyword or exact match.' It's front-loaded with the core purpose, has zero wasted words, and is appropriately sized for a simple search tool, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no output schema) and rich annotations, the description is minimally adequate. It states the purpose but lacks details on parameter usage, result format, or sibling tool differentiation. With annotations handling safety and behavior, it meets a basic threshold but has clear gaps in guidance and semantics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate but only mentions 'keyword or exact match' for 'q', leaving 'page' and 'type' undocumented. It adds minimal meaning beyond the schema, failing to explain parameter purposes or usage, which is insufficient given the low coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search for verses by keyword or exact match.' It specifies the verb ('search') and resource ('Bible verses'), making the function immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'bible_verse' or 'bible_chapter' that might also retrieve verses, leaving room for ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'bible_verse' or 'bible_chapter'. It mentions 'keyword or exact match' but doesn't clarify scenarios where this is preferable over direct verse lookup or other search methods, offering minimal usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bible_translations: List Bible translations (Grade B · Read-only · Idempotent)
Return all available Bible translations.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare this as a read-only, non-destructive, idempotent operation with a closed-world assumption. The description adds no behavioral context beyond this, such as rate limits, authentication needs, or response format details. Since annotations cover the safety profile adequately, a baseline score of 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core purpose and appropriately sized for a simple tool, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema) and rich annotations, the description is minimally adequate. However, it lacks details about the return format (e.g., list structure, translation metadata) that would help an agent use the output effectively, especially without an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the schema fully documents the input requirements. The description doesn't need to compensate for any gaps, so it meets the baseline for a parameterless tool. No additional parameter semantics are required or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Return') and resource ('all available Bible translations'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'bible_books' or 'bible_search' beyond the resource type, which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'bible_search' or 'bible_books'. It lacks context about use cases, prerequisites, or exclusions, leaving the agent to infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bible_verse: Fetch a verse (Grade B · Read-only · Idempotent)
Return a specific verse, with red letter data if available.
| Name | Required | Description | Default |
|---|---|---|---|
| book | Yes | ||
| verse | Yes | ||
| chapter | Yes | ||
| translation | Yes |
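Since all four bible_verse parameters are undocumented in the schema, a sketch of one plausible argument shape (the concrete values are assumptions about the expected formats):

```python
# Hypothetical arguments for a bible_verse call. All four parameters are
# required per the table above; the values shown are assumed formats.
args = {"translation": "KJV", "book": "John", "chapter": 3, "verse": 16}

missing = {"translation", "book", "chapter", "verse"} - args.keys()
print(sorted(missing))  # []
```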
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds 'with red letter data if available' which provides useful behavioral context about optional formatting output. However, it doesn't disclose other behavioral traits like rate limits, authentication needs, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at 10 words, with zero wasted language. It's front-loaded with the core purpose ('Return a specific verse') and adds only essential additional context ('with red letter data if available'). Every word earns its place in this minimal but complete statement.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with good annotations (covering safety and idempotency) but no output schema and 0% parameter documentation, the description is minimally adequate. It states what the tool does and adds some output formatting context, but leaves parameters completely unexplained and provides no usage guidance. The annotations carry significant weight, but parameter understanding remains a gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description mentions 'red letter data' which relates to output formatting, not input parameters. It doesn't explain what the four required parameters (translation, book, chapter, verse) mean or provide examples. With 0% coverage and 4 parameters, the description fails to compensate for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Return a specific verse' specifies the verb (return) and resource (verse). It distinguishes from sibling tools like bible_chapter (returns a chapter) and bible_search (searches across verses), but doesn't explicitly differentiate from bible_books or bible_translations. The addition of 'with red letter data if available' provides useful context about optional formatting.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like bible_chapter (for full chapters) or bible_search (for searching content), nor does it specify prerequisites or constraints beyond what's implied by the parameters. The agent must infer usage from the tool name and parameters alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!