Urantia Papers
Server Details
Free, open MCP server for The Urantia Papers. 197 papers, 14,500+ paragraphs, 4,400+ entities.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: urantia-hub/urantia-dev-api
- GitHub Stars: 0
- Server Listing: urantia-papers
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across all 13 tools scored.
Each tool targets a distinct resource (audio, entities, papers, paragraphs, search, toc) and action. There is no overlap in functionality, and descriptions clearly differentiate them.
All tools follow a consistent `domain.action` naming pattern (e.g., audio.get, entities.list, papers.get). The convention is uniform across the entire set.
13 tools cover the Urantia Book domain comprehensively without excess: listing, retrieval, search (full-text and semantic), entity exploration, audio, and table of contents. The scope matches the server's purpose.
The tool surface is complete for studying the Urantia Book: read access to papers and paragraphs at every granularity, entity browsing, multiple search modalities, random access, and structural navigation. No obvious gaps.
Available Tools
13 tools

audio.get — Get Paragraph Audio (Read-only, Idempotent)
Get the audio file URL for a specific paragraph. Accepts any paragraph reference format (globalId "1:2.0.1", standardReferenceId "2:0.1", or paperSectionParagraphId "2.0.1").
| Name | Required | Description | Default |
|---|---|---|---|
| paragraph_ref | Yes | Paragraph reference. Example: "2:0.1" | |
Output Schema
| Name | Required | Description |
|---|---|---|
| audio | Yes | |
| paragraphId | Yes | |
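For illustration, here is a minimal sketch of calling this tool with the official TypeScript MCP SDK (`@modelcontextprotocol/sdk`) over Streamable HTTP. The endpoint URL is a placeholder, since the listing above does not publish one, and the client name is arbitrary:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the listing does not show the server URL.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));
const client = new Client({ name: "urantia-demo", version: "1.0.0" });
await client.connect(transport);

// Any of the three documented reference formats is accepted:
// globalId "1:2.0.1", standardReferenceId "2:0.1", paperSectionParagraphId "2.0.1".
const result = await client.callTool({
  name: "audio.get",
  arguments: { paragraph_ref: "2:0.1" },
});
console.log(result.content); // should carry the audio URL and paragraphId
```

The same connect-then-callTool pattern applies to every tool below; later sketches repeat the setup in abbreviated form.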
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only and idempotent behavior. The description adds that it returns a URL and accepts multiple input formats, clarifying the tool's behavior beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, direct and front-loaded with the core action. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple retrieval nature, presence of an output schema, and clear input format explanation, the description is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema covers the parameter with an example, the description lists three distinct accepted formats with specific examples, adding meaningful context beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The verb 'Get' and resource 'audio file URL for a specific paragraph' are clearly stated. It uniquely identifies the tool among siblings, as no other audio tool exists.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description details the paragraph reference formats accepted, but does not provide explicit guidance on when to use this tool versus alternatives or exclusionary conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entities.get — Get Entity (Read-only, Idempotent)
Get detailed information about a specific entity by its slug ID. Returns name, type, aliases, description, related entities, and citation count.
| Name | Required | Description | Default |
|---|---|---|---|
| entity_id | Yes | Entity slug ID. Example: "god-the-father" | |
Output Schema
| Name | Required | Description |
|---|---|---|
| entity | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. Description adds that it returns specific fields, but no behavioral traits beyond what annotations provide. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, directly stating purpose and return fields. No redundancy; all information is pertinent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple get-by-ID tool with an output schema and clear annotations, the description is complete. It explains what it does and what it returns without extraneous detail.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter, which includes a description and example. The description does not add additional semantics beyond the schema's 'entity_id' description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'get detailed information about a specific entity by its slug ID', with explicit verb, resource, and identifier. Differentiates from sibling 'entities.list' which presumably returns multiple entities without detail.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like 'entities.list'. The purpose is clear, but absence of usage context leaves the agent to infer applicability.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entities.list — List Entities (Read-only, Idempotent)
Browse the entity catalog: beings, places, orders, races, religions, and concepts mentioned in the Urantia Book. Supports filtering by type and searching by name.
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | Alias for `query` (REST compatibility). | |
| page | No | Page number (0-indexed) | |
| type | No | Filter by entity type | |
| limit | No | Results per page (1-100) | |
| query | No | Search entities by name or alias | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | |
| meta | Yes | |
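A sketch of a filtered, paginated listing call, reusing the client setup from the audio.get example. The type value "being" and the query string are assumptions; the listing does not enumerate valid entity types:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "urantia-demo", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://example.com/mcp"))); // placeholder URL

// Search by name, filter by type, and request the first page.
// "being" is an assumed type value -- valid types are not listed here.
const entities = await client.callTool({
  name: "entities.list",
  arguments: { query: "Melchizedek", type: "being", page: 0, limit: 25 },
});
console.log(entities.content);
```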
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false. Description adds context about the Urantia Book catalog and filtering/searching capabilities. No contradictions. Could mention pagination but not required.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with purpose. No redundant words. Every part is informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Output schema exists, so return values are documented. Description covers the tool's purpose and key features (filtering, searching). Not overly detailed but sufficient for a listing tool with good annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed parameter descriptions. The description adds only general context (browse, filter, search) without enriching individual parameter meaning beyond schema. Baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it lists entities (beings, places, etc.) and supports filtering by type and searching by name. Distinguishes from siblings like entities.get (single entity) and entities.paragraphs (paragraphs for an entity).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for browsing the entity catalog, but does not explicitly contrast with sibling tools or state when not to use it. Context signals from sibling names provide differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entities.paragraphs — Get Entity Paragraphs (Read-only, Idempotent)
Get all paragraphs that mention a specific entity, ordered by position in the text. Useful for studying everything said about a particular being, place, or concept.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (0-indexed) | |
| limit | No | Results per page (1-100) | |
| entity_id | Yes | Entity slug ID. Example: "god-the-father" | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | |
| meta | Yes | |
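Because results are paginated, collecting everything said about an entity takes a loop. A hedged sketch, assuming a `data` array inside the tool's structured output holds one page of paragraphs (the listing documents `data` and `meta` but not their shapes):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "urantia-demo", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://example.com/mcp"))); // placeholder URL

// Fetch every page of paragraphs mentioning one entity. The data/meta
// shapes are undocumented here, so a short page is used as the stop signal.
const limit = 100;
const all: unknown[] = [];
for (let page = 0; ; page++) {
  const res = await client.callTool({
    name: "entities.paragraphs",
    arguments: { entity_id: "god-the-father", page, limit },
  });
  const data = (res.structuredContent as { data?: unknown[] } | undefined)?.data ?? [];
  all.push(...data);
  if (data.length < limit) break;
}
console.log(`collected ${all.length} paragraphs`);
```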
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the description's role in behavioral transparency is reduced. The description adds that results are ordered by position, which is useful but not a major behavioral disclosure beyond what annotations imply.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the main action and purpose, and contains no extraneous information. Every word is justified.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, the description does not need to explain return values. It covers the core purpose and ordering. It could optionally mention pagination, but the parameter schema already covers that. Overall, it is adequate for a simple retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description does not add significant meaning beyond the schema; it only restates that the tool gets paragraphs for an entity. A baseline of 3 is appropriate given high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves all paragraphs mentioning a specific entity, ordered by position. It uses a specific verb ('Get') and resource ('paragraphs that mention a specific entity'), and the mention of 'ordered by position in the text' provides additional detail that distinguishes it from sibling tools like paragraphs.get or paragraphs.random.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates the tool is 'useful for studying everything said about a particular being, place, or concept,' which gives context for when to use it. However, it does not explicitly exclude scenarios or contrast with related sibling tools like paragraphs.context or search.fulltext, which could help an agent decide more precisely.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
papers.get — Get Paper (Read-only, Idempotent)
Get a single paper with all its paragraphs. Paper IDs range from 0 (Foreword) to 196. Optionally include entity mentions.
| Name | Required | Description | Default |
|---|---|---|---|
| paper_id | Yes | Paper ID (0-196). Example: '1' | |
| include_entities | No | Include entity mentions | |
Output Schema
| Name | Required | Description |
|---|---|---|
| paper | Yes | |
| paragraphs | Yes | |
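A sketch of fetching one paper with entity mentions enabled. Passing `paper_id` as a string follows the schema's quoted example ('1'), which is an assumption about the expected type:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "urantia-demo", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://example.com/mcp"))); // placeholder URL

// Paper 0 is the Foreword; IDs run 0-196. Entity mentions are opt-in.
const paper = await client.callTool({
  name: "papers.get",
  arguments: { paper_id: "1", include_entities: true },
});
console.log(paper.content);
```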
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, destructiveHint, idempotentHint. The description adds that the tool returns paragraphs and optional entity mentions, providing specific behavioral context beyond annotations without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core action, and every word adds value. No extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple get-by-ID function, the description covers the purpose, parameter constraint (ID range), and optional feature. The output schema exists to document return values, so no further detail needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents both parameters. The description slightly adds by specifying the ID range (0-196) and the optionality of entities, but this mostly repeats schema information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get', the resource 'single paper', and includes details about paragraphs and optional entities. It differentiates from siblings like papers.list (list all) and papers.sections by specifying it returns a single paper with all paragraphs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly guides usage: use when needing a single paper by ID. It does not explicitly mention when to avoid or compare with alternatives, but the context of sibling tools makes the use case clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
papers.list — List Papers (Read-only, Idempotent)
List all 197 papers in the Urantia Book with their metadata (id, title, partId, labels). Use toc.get for a hierarchical view instead.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| papers | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true. Description adds value by stating the tool returns all 197 papers with specific metadata fields, which is concrete behavioral detail beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no wasted words. Every part earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and an output schema, the description fully covers what the tool does, the data returned, and how it differs from related tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters with 100% coverage, so no parameter explanation needed. Baseline score of 4 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the verb 'List' and resource 'papers', specifies the exact count (197) and returned fields (id, title, partId, labels). It also distinguishes from sibling 'toc.get' by noting the alternative for hierarchical view.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when not to use this tool by suggesting 'toc.get' for hierarchical view, providing clear guidance on selection among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
papers.sections — Get Paper Sections (Read-only, Idempotent)
Get all sections within a paper, ordered by section number. Useful for understanding paper structure before reading specific sections.
| Name | Required | Description | Default |
|---|---|---|---|
| paper_id | Yes | Paper ID (0-196). Example: '1' | |
Output Schema
| Name | Required | Description |
|---|---|---|
| sections | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true. The description adds behavioral detail (ordered output, structural use) without contradiction. It does not elaborate on error handling or edge cases, but with strong annotations this is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two short sentences with no redundancy. The first sentence states the action and ordering; the second provides usage context. Every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no nested objects, output schema exists), the description adequately covers purpose, ordering, and usage context. An output schema is provided, so return values do not need description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the sole parameter, and the schema description ('Paper ID (0-196). Example: 1') is clear. The description adds no extra semantic information about parameters, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'all sections within a paper' with ordering by section number. It uses a specific verb and resource, distinct from sibling tools like 'papers.get' (paper metadata) and 'papers.list' (list papers).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context ('useful for understanding paper structure before reading specific sections') that guides the agent on when to use this tool. However, it does not explicitly state when not to use it or name alternatives, slightly reducing clarity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
paragraphs.context — Get Paragraph with Context (Read-only, Idempotent)
Get a paragraph with surrounding context (N paragraphs before and after within the same paper). Useful for understanding passages in context.
| Name | Required | Description | Default |
|---|---|---|---|
| ref | Yes | Paragraph reference. Examples: "1:2.0.1", "2:0.1", "2.0.1" | |
| window | No | Number of paragraphs before and after (1-10, default 2) | |
| include_entities | No | Include entity mentions | |
Output Schema
| Name | Required | Description |
|---|---|---|
| after | Yes | |
| before | Yes | |
| target | Yes | |
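A sketch widening the context window from the default of 2 to 3 paragraphs on each side (placeholder URL as before):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "urantia-demo", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://example.com/mcp"))); // placeholder URL

// window must be 1-10 (default 2); request 3 before and 3 after the target.
const ctx = await client.callTool({
  name: "paragraphs.context",
  arguments: { ref: "2:0.1", window: 3 },
});
console.log(ctx.content); // output schema: before, target, after
```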
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare readOnlyHint, idempotentHint, and destructiveHint, so the description does not need to repeat that. It adds value by clarifying the scope ('within the same paper') and the context window behavior, effectively supplementing the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences, first stating the action and then the purpose. It is front-loaded with the key verb and resource, and every part is essential with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has an output schema, the description does not need to detail return values. It covers the core functionality, context window, and usage hint. However, it could be slightly more explicit about the output format or that it returns a list of paragraphs, but overall it is complete enough for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides full descriptions for all three parameters (100% coverage). The description adds minimal extra meaning, only reiterating the 'N paragraphs before and after' aspect of the window parameter. This is sufficient given the schema's clarity, but the description does not significantly enhance understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a paragraph with surrounding context, specifying the verb 'get', the resource 'paragraph with context', and the purpose 'useful for understanding passages in context'. It distinguishes from sibling tools like paragraphs.get, which likely returns a single paragraph without context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions getting context 'within the same paper' and the parameter 'N paragraphs before and after', giving implicit usage context. However, it does not explicitly state when to use this tool versus alternatives like paragraphs.get or search tools, which would be helpful.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
paragraphs.get — Get Paragraph (Read-only, Idempotent)
Look up a specific paragraph by reference. Supports three formats: globalId ("1:2.0.1"), standardReferenceId ("2:0.1"), or paperSectionParagraphId ("2.0.1"). The format is auto-detected.
| Name | Required | Description | Default |
|---|---|---|---|
| ref | Yes | Paragraph reference in any format. Examples: "1:2.0.1", "2:0.1", "2.0.1" | |
| include_entities | No | Include entity mentions | |
Output Schema
| Name | Required | Description |
|---|---|---|
| paragraph | Yes | |
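A sketch exercising the auto-detection: the schema's three example references appear to denote the same paragraph in the three different formats, so all three calls should return the same result (an inference from the examples, not confirmed by the listing):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "urantia-demo", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://example.com/mcp"))); // placeholder URL

// globalId, standardReferenceId, and paperSectionParagraphId forms;
// the server auto-detects which format was supplied.
for (const ref of ["1:2.0.1", "2:0.1", "2.0.1"]) {
  const res = await client.callTool({ name: "paragraphs.get", arguments: { ref } });
  console.log(ref, "->", res.content);
}
```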
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, idempotentHint=true, destructiveHint=false. The description adds valuable behavioral context about auto-detection of three reference formats, but lacks details on error handling if a reference is not found.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with the core action, no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with 2 parameters and an output schema, the description covers the essential behavior of lookup by reference with format auto-detection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaning beyond the input schema by explaining the three reference formats and auto-detection, complementing the schema's examples and default values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool looks up a specific paragraph by reference and names the supported formats, distinguishing it from sibling tools like paragraphs.context or paragraphs.random.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a reference is available but does not explicitly guide when to use this tool vs alternatives like paragraphs.context for surrounding context or paragraphs.random for arbitrary selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
paragraphs.random — Get Random Paragraph (Read-only)
Get a random paragraph from the Urantia Book. Great for daily quotes, exploration, or discovering new passages.
| Name | Required | Description | Default |
|---|---|---|---|
| include_entities | No | Include entity mentions | |
Output Schema
| Name | Required | Description |
|---|---|---|
| paragraph | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the tool's safety profile is clear. The description adds context about the source (Urantia Book) and use cases but doesn't elaborate on behavior like pagination or rate limits. It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose, and contains no extraneous words. It earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one optional parameter, output schema exists), the description sufficiently covers the source, use cases, and context. No obvious gaps for a random-retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter, include_entities, is fully described in the input schema (100% coverage). The tool description adds no additional meaning beyond what the schema provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The name 'Get Random Paragraph' and description specify the verb 'Get' and resource 'random paragraph from the Urantia Book'. It clearly distinguishes from sibling tools like paragraphs.get (for specific paragraphs) and search.fulltext (for targeted queries).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description suggests use cases like 'daily quotes, exploration, or discovering new passages', which implies when to use. However, it does not explicitly mention when not to use or list alternatives, such as using search.semantic for topic-based queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search.fulltext — Full-Text Search (Read-only, Idempotent)
Full-text search across all Urantia Book paragraphs. Supports three modes: "and" (all words must appear, default), "or" (any word), "phrase" (exact phrase). Results ranked by relevance.
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | Alias for `query` (REST compatibility). | |
| page | No | Page number (0-indexed) | |
| type | No | Search mode: phrase, and, or | and |
| limit | No | Results per page (1-100) | |
| query | No | Search query. Example: "nature of God" | |
| part_id | No | Filter to a specific part ID (1-4) | |
| paper_id | No | Filter to a specific paper ID | |
| include_entities | No | Include entity mentions | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | |
| meta | Yes | |
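A sketch contrasting the three search modes. Passing `part_id` as a number rests on the schema's "(1-4)" hint, which is an assumption about the expected type:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "urantia-demo", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://example.com/mcp"))); // placeholder URL

// "and" (default): all words must appear; "or": any word; "phrase": exact match.
for (const type of ["and", "or", "phrase"] as const) {
  const hits = await client.callTool({
    name: "search.fulltext",
    arguments: { query: "nature of God", type, part_id: 1, limit: 5 },
  });
  console.log(type, hits.content);
}
```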
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, idempotentHint, destructiveHint. Description adds the three search modes but lacks details on pagination behavior, result limits (beyond the schema), or empty-result handling. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no filler. Front-loaded with purpose and followed by essential usage details. Every word serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 8 parameters and an output schema, the description sufficiently covers the core search functionality. Could mention pagination or result scoring briefly, but output schema likely handles return structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, so baseline is 3. Description repeats the three modes already defined in the type parameter's enum. No additional semantic value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly specifies full-text search across Urantia Book paragraphs with three explicit modes (and, or, phrase). It distinguishes from sibling tool 'search.semantic' implicitly by focusing on keyword matching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes the three search modes and when each applies (default 'and', 'or' for any word, 'phrase' for exact). Does not explicitly contrast with alternative tools like 'search.semantic' but the context implies appropriate use for keyword-based search.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search.semantic — Semantic Search (Read-only, Idempotent)
Search the Urantia Book using semantic similarity (vector embeddings). Returns conceptually related results even without exact keyword matches. Requires OPENAI_API_KEY.
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | Alias for `query` (REST compatibility). | |
| page | No | Page number (0-indexed) | |
| limit | No | Results per page (1-100) | |
| query | No | Natural language query. Example: "What is the meaning of life?" | |
| part_id | No | Filter to a specific part ID (1-4) | |
| paper_id | No | Filter to a specific paper ID | |
| include_entities | No | Include entity mentions | |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | Yes | |
| meta | Yes | |
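A sketch of a natural-language semantic query. OPENAI_API_KEY reads like a server-side environment variable, so the client presumably sends no key of its own; that interpretation is an assumption:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "urantia-demo", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://example.com/mcp"))); // placeholder URL

// Conceptual matches, no exact keywords required. Embeddings are computed
// with OPENAI_API_KEY, presumably configured on the server side.
const related = await client.callTool({
  name: "search.semantic",
  arguments: { query: "What is the meaning of life?", limit: 5 },
});
console.log(related.content);
```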
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, non-destructive behavior. Description adds that it uses vector embeddings and requires an API key, providing useful behavioral context without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff: first states purpose and method, second states requirement. Perfectly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and full parameter descriptions, the description covers the key behavioral aspect (semantic similarity) and a critical requirement (API key). Missing pagination details but schema covers that.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so each parameter is already explained. Description adds no extra parameter details beyond the schema; baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it performs semantic search on the Urantia Book using vector embeddings, distinguishing it from sibling tools like search.fulltext which likely use exact keyword matching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly mentions requirement for OPENAI_API_KEY, which is critical for use. Does not directly contrast with alternatives, but the purpose implies when to use (conceptual matches vs exact).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
toc.get — Get Table of Contents (Read-only, Idempotent)
Get the full table of contents of the Urantia Book. Returns all 4 parts and 197 papers with their titles. This is the best starting point to understand the book structure.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| parts | Yes | |
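A sketch of the suggested workflow: start from the table of contents, then drill into one paper's structure before fetching text (placeholder URL as before):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "urantia-demo", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(new URL("https://example.com/mcp"))); // placeholder URL

// 1. The whole structure: 4 parts, 197 papers.
const toc = await client.callTool({ name: "toc.get", arguments: {} });
console.log(toc.content);

// 2. Then one paper's sections, before pulling its text with papers.get.
const sections = await client.callTool({
  name: "papers.sections",
  arguments: { paper_id: "1" },
});
console.log(sections.content);
```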
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint, so the safety profile is clear. The description adds value by specifying the exact content (parts and papers with titles) and stating it returns the full TOC, which enhances transparency beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, consisting of two sentences that convey all necessary information without any wasted words. It is well-structured and front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given there are no parameters and an output schema exists, the description fully covers what the tool does and when to use it. It is complete for a simple retrieval tool with rich annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, and description coverage is 100%. The description does not need to add parameter details, and the baseline of 4 applies as the schema handles everything.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves the full table of contents of the Urantia Book, detailing that it returns all 4 parts and 197 papers with titles. This is specific and distinguishes it from sibling tools like search or entity retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates this is 'the best starting point to understand the book structure,' providing clear context for when to use it. It does not explicitly state when not to use or mention alternatives, but the context is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
    {
      "$schema": "https://glama.ai/mcp/schemas/connector.json",
      "maintainers": [{ "email": "your-email@example.com" }]
    }

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
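Any static hosting of that exact path works. As a hypothetical illustration, an Express route serving the claim file might look like this (route handler and port are illustrative, not prescribed by Glama):

```typescript
import express from "express";

const app = express();

// Serve the claim file at the well-known path on the server's domain.
// Glama fetches this URL and verifies the maintainer email against your account.
app.get("/.well-known/glama.json", (_req, res) => {
  res.json({
    $schema: "https://glama.ai/mcp/schemas/connector.json",
    maintainers: [{ email: "your-email@example.com" }],
  });
});

app.listen(3000);
```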
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is marked unhealthy when Glama cannot successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.