BEREAN.AI
Server Details
Biblical and theological research MCP server. Ask pastoral questions, run academic-grade queries across 2M+ scholarly passages (lexicons, commentaries, church fathers, Dead Sea Scrolls, Talmud), or search raw sources directly. Free, no API key required.
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP (see the connection sketch below)
- URL:
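Since the transport is Streamable HTTP, any standard MCP client can connect. Below is a minimal sketch using the official TypeScript SDK (@modelcontextprotocol/sdk); the endpoint URL is not shown in this listing, so the one used here is a placeholder.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Hypothetical endpoint: the listing above does not show the real URL.
const endpoint = new URL("https://berean.example/mcp");

const client = new Client({ name: "berean-demo", version: "1.0.0" });

// Streamable HTTP carries requests and streamed responses over a single HTTP endpoint.
const transport = new StreamableHTTPClientTransport(endpoint);
await client.connect(transport);

// Should list the three tools documented below.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name)); // ["ask_question", "scholar_query", "search_sources"]
```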
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 3 of 3 tools scored.
Each tool serves a distinct purpose: ask_question for pastoral answers, scholar_query for academic research with citations, and search_sources for raw database retrieval. No overlap exists.
Two tools (ask_question, search_sources) follow a clear verb_noun pattern, while scholar_query is a noun_noun compound, introducing a minor inconsistency. Overall, names are still clear and readable.
With 3 tools, the server is slightly small but well-scoped for biblical research. Each tool earns its place, covering pastoral, academic, and raw search needs.
The tool set covers pastoral Q&A, academic research, and raw source retrieval. While additional tools like verse lookups could be included, the current set handles core workflows without obvious gaps.
Available Tools
3 tools

ask_question
Ask a biblical or theological question and get a concise pastoral answer (250-400 words) grounded in Reformed theology with Scripture references. Best for practical faith questions, doctrine overviews, and life application.
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | The biblical or theological question to answer | |
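For illustration, a hedged sketch of calling this tool, reusing the `client` from the connection sketch above; the question text is an arbitrary example:

```typescript
// ask_question takes a single required parameter.
const result = await client.callTool({
  name: "ask_question",
  arguments: { question: "What does the Bible teach about forgiveness?" },
});

// Per the description, expect a 250-400 word pastoral answer with Scripture references.
for (const block of result.content ?? []) {
  if (block.type === "text") console.log(block.text);
}
```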
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description adds behavioral context: answer length (250-400 words), theological perspective (Reformed), and inclusion of Scripture references. This goes beyond the input schema, though it omits potential constraints like response time or auth requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no wasted words. The first sentence states purpose and output, the second provides usage guidance. Every element contributes value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one param, no output schema), the description fully covers purpose, output characteristics (length, style, references), and appropriate use cases. Sibling tools help complete the context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the schema already fully describes the parameter. The description adds no meaning beyond the schema's 'The biblical or theological question to answer', so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies a clear verb ('Ask') and resource ('biblical or theological question') with a defined output ('concise pastoral answer grounded in Reformed theology'). It distinguishes from siblings by specifying 'pastoral' and 'Reformed theology', implying different focus than 'scholar_query' or 'search_sources'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use: 'Best for practical faith questions, doctrine overviews, and life application.' This provides clear context but does not explicitly exclude other uses or mention alternatives, though siblings exist.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scholar_query
Academic biblical research query powered by two-stage retrieval (dense vector search + cross-encoder reranking) across 2M+ indexed scholarly passages. Searches Greek/Hebrew lexicons, Bible translations, morphological data, commentaries from 15+ traditions (Reformed, Catholic, Orthodox, Jewish, etc.), the Babylonian Talmud, Mishnah, Aquinas, Josephus, church fathers, Dead Sea Scrolls, and creeds/confessions. Returns detailed academic answers with source citations.
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | The academic biblical or theological research question | |
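A parallel sketch for scholar_query, again assuming the connected `client` from the connection sketch above; the research question is illustrative only:

```typescript
// Same single-parameter shape as ask_question, but routed to the academic
// corpus (lexicons, commentaries, Talmud, church fathers, Dead Sea Scrolls).
const scholarly = await client.callTool({
  name: "scholar_query",
  arguments: {
    question: "How do Second Temple sources interpret Isaiah 53?",
  },
});
```

Per the tool descriptions, academic questions that need citations belong here, while practical faith questions fit ask_question.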
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full behavioral-transparency burden. It details the RAG search mechanism, the scope of sources (2M+ indexed passages, multiple traditions), and the return format (detailed academic answers with citations). It does not mention limitations, rate limits, or destructive actions, but the read-only nature is implied.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, efficiently front-loaded with the tool's core purpose. Every sentence adds value, listing key capabilities and sources without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of the tool (2M+ passages, multiple sources, RAG), the description provides a solid overview of capabilities and output. It lacks details on pagination, rate limits, or specific limitations, but for a query tool the coverage is adequate for an agent to succeed on a first attempt.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with the single parameter 'question' described. The tool description adds context about the type of questions (academic biblical research) but does not significantly augment the schema's description. A baseline of 3 is appropriate, as the schema already provides clear semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs an academic biblical research query using RAG search across a vast indexed collection. It specifies the verb 'query' and the resource 'scholarly passages', and distinguishes itself from siblings like 'ask_question' by focusing on academic biblical research with multiple sources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for academic biblical research questions, but does not explicitly state when to use it vs. alternatives like 'ask_question' or 'search_sources'. It provides a comprehensive list of sources, offering clear context, but lacks direct exclusionary guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_sources
Search the BEREAN.AI knowledge base directly for relevant passages without generating an AI answer. Uses dense vector retrieval with cross-encoder reranking on interpretive sources. Returns raw source passages from lexicons, commentaries, Bible texts, etc. Useful for getting primary source data.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | The search query to find relevant biblical/theological passages | |
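And a final sketch for search_sources, again assuming the connected `client`; note that the parameter is named query here, not question:

```typescript
// search_sources returns raw passages rather than a generated answer,
// leaving interpretation to the caller.
const sources = await client.callTool({
  name: "search_sources",
  arguments: { query: "agape in Greek lexicons" }, // 'query', not 'question'
});
```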
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It conveys the read-only nature and return type but omits details like result limits, pagination, or authentication. Acceptable for a simple search tool, but not fully transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences front-loading purpose, mechanism, and return type. No filler; every sentence adds value. Highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 1-parameter schema and no output schema, the description adequately explains what the tool does and returns. Minor gaps (e.g., result count) but sufficient for a straightforward search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with the single parameter 'query'. The schema already describes it as the 'search query to find relevant biblical/theological passages'; the description adds minimal value beyond repeating the idea of raw passages. A baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Search', the resource 'the BEREAN.AI knowledge base', and specifies that it returns 'raw source passages' without generating an AI answer. This distinguishes it from siblings ask_question and scholar_query.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It implies when not to use the tool (when an AI-generated answer is desired) and states its usefulness for primary source data. It lacks explicit alternative names or conditions, but provides sufficient context for an informed choice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.