Server Configuration

Describes the environment variables required to run the server.

No arguments

Tools

Functions exposed to the LLM to take actions

ingest_document

Load, segment, and index a document for search.

Supports txt, md, pdf, epub, and html formats. Automatically detects chapters and sections.

Args:

  • path: Absolute path to the document file.

  • title: Optional title for the document (defaults to filename).

  • chunk_size: Target size in words for each chunk (default: 2000).

  • overlap: Number of words to overlap between chunks (default: 100).

  • force: Force re-indexing even if the document already exists.

Returns: Ingestion result with document ID and structure.
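
A minimal client-side sketch of an ingestion call, assuming the official mcp Python SDK and a hypothetical launch command for this server; the file path and title are illustrative, and only the argument names come from the description above.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Hypothetical launch command; adjust to your installation of the server.
    params = StdioServerParameters(command="python", args=["-m", "bigcontext_mcp"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Index a document using the argument names documented above.
            result = await session.call_tool(
                "ingest_document",
                {
                    "path": "/data/nehemiah.txt",  # illustrative path
                    "title": "Nehemiah",           # optional; defaults to the filename
                    "chunk_size": 2000,
                    "overlap": 100,
                    "force": False,
                },
            )
            print(result.content)  # ingestion result with document ID and structure


asyncio.run(main())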

search_segment

Search for relevant segments using TF-IDF.

Returns snippets with matched terms highlighted.

Args:

  • query: Search query (keywords or phrases).

  • document_id: Optional: limit search to a specific document.

  • segment_id: Optional: search within a specific segment only.

  • limit: Maximum number of results to return (default: 5).

  • context_words: Number of words around matches in snippets (default: 50).

Returns: Search results with scores and snippets.
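
A sketch of a scoped search, reusing an initialized session from the ingestion sketch above; the query string and document ID are placeholders.

async def search(session, document_id: str) -> None:
    # session: an initialized mcp ClientSession, as in the ingestion sketch.
    result = await session.call_tool(
        "search_segment",
        {
            "query": "pillar of cloud",  # illustrative query
            "document_id": document_id,  # limit the search to one document
            "limit": 5,
            "context_words": 50,
        },
    )
    print(result.content)  # scored results with highlighted snippets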

get_metadata

Get metadata, structure, and statistics for a document or segment.

Includes top terms by TF-IDF.

Args:

  • document_id: ID of the document to get metadata for.

  • segment_id: ID of the segment to get metadata for.

  • include_structure: Include document structure in response.

  • top_terms: Number of top terms to return (default: 10).

Returns: Metadata including structure and top terms.

list_documents

List all indexed documents with their metadata.

Args: limit: Maximum number of documents to return (default: 20). offset: Number of documents to skip (for pagination).

Returns: List of documents with metadata.
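
A sketch that pairs list_documents with get_metadata; the page size and the document ID handling are assumptions, and the session is the same initialized mcp ClientSession as above.

async def inspect_library(session, document_id: str) -> None:
    # First page of indexed documents; raise the offset to paginate.
    listing = await session.call_tool("list_documents", {"limit": 20, "offset": 0})
    print(listing.content)

    # Metadata, structure, and top TF-IDF terms for a known document ID.
    meta = await session.call_tool(
        "get_metadata",
        {"document_id": document_id, "include_structure": True, "top_terms": 10},
    )
    print(meta.content)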

compare_segments

Compare two segments to find shared themes, unique terms, and similarity.

Useful for understanding relationships between chapters.

Args:

  • segment_id_a: ID of the first segment to compare.

  • segment_id_b: ID of the second segment to compare.

  • find_bridges: Find intermediate segments that connect the two.

  • max_bridges: Maximum number of bridge segments to return.

Returns: Comparison result with similarity and themes.
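
A sketch of a chapter comparison with bridge detection; both segment IDs are placeholders and the bridge limit is arbitrary.

async def compare_chapters(session, seg_a: str, seg_b: str) -> None:
    # session: an initialized mcp ClientSession.
    result = await session.call_tool(
        "compare_segments",
        {
            "segment_id_a": seg_a,
            "segment_id_b": seg_b,
            "find_bridges": True,  # also return intermediate segments
            "max_bridges": 3,
        },
    )
    print(result.content)  # similarity, shared themes, unique terms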

get_source_capabilities

CRITICAL: Analyze what a document CAN and CANNOT support.

Returns detected languages, whether original Hebrew/Greek/Aramaic is present, textual variant availability, and epistemological limitations. MUST be called before making claims about morphology, etymology, or textual criticism.

Args: document_id: ID of the document to analyze.

Returns: Source capabilities analysis.

validate_claim

Check if a specific claim can be grounded in the source document.

Returns whether the claim requires capabilities the document lacks. Use this BEFORE making scholarly assertions.

Args: document_id: ID of the document to validate against. claim: The claim or assertion to validate.

Returns: Claim validation result.
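
A sketch of the recommended order: check capabilities first, then validate a specific claim; the claim text is illustrative and the session is assumed as above.

async def ground_claim(session, document_id: str) -> None:
    # What the source can and cannot support (languages, variants, limitations).
    caps = await session.call_tool("get_source_capabilities", {"document_id": document_id})
    print(caps.content)

    # Whether this particular assertion exceeds those capabilities.
    verdict = await session.call_tool(
        "validate_claim",
        {
            "document_id": document_id,
            "claim": "The Hebrew root of this verb implies continuous action.",  # illustrative
        },
    )
    print(verdict.content)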

get_epistemological_report

Generate complete epistemological analysis before making scholarly claims.

Returns: language hard stops, canonical frame detection, auto-critique, confidence decay calculation, and recommendations. Use BEFORE any complex textual analysis.

Args: document_id: ID of the document to analyze. query: The research question or claim being investigated.

Returns: Epistemological report.

check_language_operation

Check if a specific linguistic operation is allowed.

Use before performing morphological, etymological, or text-critical analysis.

Args:

  • document_id: ID of the document.

  • operation: The operation to check (e.g., "root analysis").

  • language: The language involved (hebrew, greek, aramaic).

Returns: Language operation permission result.
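
A sketch chaining the epistemological report with a language-operation check before any linguistic analysis; the query, operation, and language values are illustrative.

async def preflight_analysis(session, document_id: str) -> None:
    # Hard stops, frame detection, confidence decay, and recommendations.
    report = await session.call_tool(
        "get_epistemological_report",
        {"document_id": document_id, "query": "What does the pillar of cloud signify?"},
    )
    print(report.content)

    # Is this specific operation permitted for this source?
    permission = await session.call_tool(
        "check_language_operation",
        {"document_id": document_id, "operation": "root analysis", "language": "hebrew"},
    )
    print(permission.content)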

detect_semantic_frames

Detect conceptual frameworks in a text segment.

Identifies causal, revelational, performative, and invocative frames. Prevents reductive analysis by identifying non-causal categories.

Args: segment_id: ID of the segment to analyze. query: The research question being investigated.

Returns: Semantic frame detection result.

analyze_subdetermination

Analyze whether textual ambiguity is total indeterminacy or directed subdetermination.

Returns what the text CLOSES (excludes) vs. what it LEAVES OPEN, and detects asymmetric relations.

Args: segment_id: ID of the segment to analyze.

Returns: Subdetermination analysis result.

detect_performatives

Detect performative speech acts where divine speech IS the creative act.

Identifies "And God said... and it was so" patterns that resist causal analysis.

Args: segment_id: ID of the segment to analyze.

Returns: Performative detection result.

check_anachronisms

Check if a research question imports post-biblical conceptual categories.

Detects Aristotelian causes, Neoplatonic emanation, Trinitarian doctrine.

Args: query: The research question or claim to check.

Returns: Anachronism check result.

audit_cognitive_operations

CRITICAL: Run before ANY response. Validates cognitive constraint compliance.

Detects unauthorized operations (synthesis, explanation, causality inference). Returns compliance status and safe fallback if needed.

Args:

  • document_id: ID of the document being queried.

  • query: The user query to analyze.

  • planned_output: The planned response text to validate.

Returns: Cognitive audit result.

detect_inference_violations

Scan text for inferential connectors and prohibited abstract nouns.

Detects: therefore, thus, implies, means that, ontology, mechanism, structure. These signal unauthorized cognitive operations.

Args: text: The text to scan for inference violations.

Returns: Inference violation detection result.

get_permitted_operations

Get permitted cognitive operations based on text genre.

Different genres allow different operations (narrative, poetry, wisdom, etc.).

Args: segment_id: ID of the segment to check.

Returns: Permitted operations result.

generate_safe_fallback

Generate a safe, compliant response when query requires unauthorized operations.

Use when audit_cognitive_operations returns violations.

Args: question_type: Type of unauthorized operation (synthesis, explanation, etc.). document_title: Title of the document for the fallback message.

Returns: Safe fallback response.
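
A sketch of the audit-then-fallback loop described above; the draft text and fallback values are placeholders, and the decision point is left as a comment because the exact shape of the audit result is not documented here.

async def audited_response(session, document_id: str, query: str, draft: str) -> None:
    # Check the planned response for unauthorized cognitive operations.
    audit = await session.call_tool(
        "audit_cognitive_operations",
        {"document_id": document_id, "query": query, "planned_output": draft},
    )
    print(audit.content)  # compliance status and any detected violations

    # If the audit reports violations, return a compliant fallback instead of the draft.
    fallback = await session.call_tool(
        "generate_safe_fallback",
        {"question_type": "synthesis", "document_title": "Nehemiah"},  # illustrative values
    )
    print(fallback.content)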

build_document_vocabulary

Build closed vocabulary from document.

Creates lexicon of all tokens. Required before using validate_output_vocabulary.

Args: document_id: ID of the document to build vocabulary from.

Returns: Vocabulary build result.

validate_output_vocabulary

Check if output uses only vocabulary present in the source document.

Detects terms imported from outside the text.

Args: document_id: ID of the document. output: The output text to validate against document vocabulary.

Returns: Vocabulary validation result.
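
A sketch of the two-step vocabulary check, since build_document_vocabulary must run before validate_output_vocabulary; the draft text is a placeholder.

async def check_vocabulary(session, document_id: str, draft: str) -> None:
    # Build the closed lexicon of tokens that actually occur in the document.
    await session.call_tool("build_document_vocabulary", {"document_id": document_id})

    # Flag any terms in the draft that were imported from outside the text.
    result = await session.call_tool(
        "validate_output_vocabulary",
        {"document_id": document_id, "output": draft},
    )
    print(result.content)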

validate_literal_quote

Verify that a quoted string exists EXACTLY in a segment or document.

Use BEFORE claiming any text appears in the source. Returns confidence: "textual" (exact match), "partial" (similar), "not_found". Prevents pattern completion hallucination.

Args:

  • quote: The exact quote to validate.

  • document_id: Optional: document to search.

  • segment_id: Optional: specific segment to check.

  • fuzzy_threshold: Similarity threshold for partial matches (0-1).

Returns: Literal quote validation result.
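
A sketch of a quote check before citing the source; the quote is a placeholder and the fuzzy threshold is an arbitrary choice.

async def verify_quote(session, document_id: str, quote: str) -> None:
    # Confirm the quotation exists verbatim before attributing it to the source.
    result = await session.call_tool(
        "validate_literal_quote",
        {
            "document_id": document_id,
            "quote": quote,
            "fuzzy_threshold": 0.9,  # arbitrary cutoff for "partial" matches
        },
    )
    print(result.content)  # confidence: "textual", "partial", or "not_found"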

validate_proximity

Check if two segments are adjacent (within allowed distance).

Use to enforce "same verse or verse+1" constraints. Prevents narrative jump violations.

Args:

  • base_segment_id: The anchor segment ID.

  • target_segment_id: The segment ID being referenced.

  • max_distance: Maximum allowed segment distance (0 = same, 1 = adjacent).

Returns: Proximity validation result.

get_adjacent_segments

Get list of segment IDs within proximity constraint.

Use for extraction queries that require adjacency.

Args: base_segment_id: The anchor segment ID. max_distance: Maximum distance from base (default: 1).

Returns: Adjacent segment IDs.
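
A sketch that enforces a "same verse or verse+1" constraint by combining the two proximity tools; the segment IDs are placeholders.

async def enforce_adjacency(session, base_id: str, target_id: str) -> None:
    # Is the referenced segment within one segment of the anchor?
    check = await session.call_tool(
        "validate_proximity",
        {"base_segment_id": base_id, "target_segment_id": target_id, "max_distance": 1},
    )
    print(check.content)

    # Every segment a proximity-constrained extraction may draw on.
    neighbours = await session.call_tool(
        "get_adjacent_segments",
        {"base_segment_id": base_id, "max_distance": 1},
    )
    print(neighbours.content)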

identify_speaker

Identify who is speaking in a text segment.

Returns speaker name, confidence level, and evidence. Domain-agnostic: works for any document type.

Args:

  • segment_id: ID of the segment to analyze.

  • priority_patterns: Optional: Speaker names to prioritize.

  • exclude_patterns: Optional: Speaker patterns to flag as ambiguous.

  • expected_speaker: Optional: verify this specific speaker.

Returns: Speaker identification result.

detect_pattern_contamination

Detect when output may be completing a known pattern not in source.

Domain-agnostic: works for any genre (religious, fairy tales, legal, etc.). Agent provides patterns dynamically based on document genre.

Args:

  • claimed_output: What the agent claims is in the text.

  • segment_id: ID of the segment to check against.

  • patterns: Optional: Pattern definitions with trigger/expectedCompletion.

Returns: Pattern contamination detection result.

validate_extraction_schema

Validate that extraction output follows a strict schema.

Detects parenthetical comments, notes sections, evaluative language. Use when user requests pure data extraction.

Args:

  • output: The extraction output to validate.

  • fields: Expected field names in output.

  • allow_commentary: Whether commentary is allowed (default: False).

Returns: Extraction schema validation result.

detect_narrative_voice

CRITICAL: Detect the narrative voice type of a text segment.

Distinguishes:

  • primary_narration ("The Lord did X") = action executed in-scene

  • human_to_divine ("You led them...") = human prayer/praise, RETROSPECTIVE

  • divine_direct_speech ("I am the Lord") = God speaking

  • human_about_divine ("The Lord is my shepherd") = descriptive

Use BEFORE extracting "divine actions" to avoid confusing retrospective prayer with primary divine agency.

Args: segment_id: ID of the segment to analyze. domain_vocabulary: Optional DomainVocabulary for enhanced detection.

Returns: Narrative voice detection result.

validate_agency_execution

Validates whether a divine action is EXECUTED in-scene vs merely REFERENCED.

Key distinction:

  • EXECUTED = "Fire came up from the rock" (Judges 6:21)

  • REFERENCED = "You led them with a pillar" (Nehemiah 9:12) - retrospective

The second describes the same action, but as human memory, NOT as primary execution.

Args: segment_id: ID of the segment to analyze. divine_agent_patterns: Optional: Patterns to identify divine agent.

Returns: Agency execution validation result.
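
A sketch that pairs the voice check with the execution check before extracting divine actions, in the order recommended above; the segment ID and agent patterns are placeholders.

async def classify_divine_action(session, segment_id: str) -> None:
    # Primary narration, retrospective prayer, direct speech, or description?
    voice = await session.call_tool("detect_narrative_voice", {"segment_id": segment_id})
    print(voice.content)

    # Is the action executed in-scene, or merely referenced in retrospect?
    agency = await session.call_tool(
        "validate_agency_execution",
        {"segment_id": segment_id, "divine_agent_patterns": ["God", "the Lord"]},  # illustrative
    )
    print(agency.content)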

detect_text_genre

Detect text genre to apply correct extraction rules.

Genres: historical_narrative, narrative_poetry, prayer_praise, recapitulation, prophetic.

DOMAIN-AGNOSTIC: Uses structural patterns by default. Provide domainVocabulary for domain-specific enhanced detection.

Args: segment_id: ID of the segment to analyze. domain_vocabulary: Optional DomainVocabulary for enhanced detection.

Returns: Text genre detection result.

detect_divine_agency_without_speech

CRITICAL: Detect when an agent acts WITHOUT speaking.

DOMAIN-AGNOSTIC: Agent provides agentPatterns dynamically. Separates SPEECH verbs (said, spoke) from ACTION verbs (caused, made, remembered).

Examples:

  • Biblical: with agent_patterns ["God", "Lord"], finds "God remembered Noah"

  • Legal: with agent_patterns ["the Court"], finds "the Court ruled"

Args:

  • segment_id: ID of the segment to analyze.

  • agent_patterns: Agent names to search for.

  • domain_vocabulary: Optional DomainVocabulary for genre detection.

Returns: Divine agency without speech detection result.

detect_weak_quantifiers

Detects weak quantifiers that require statistical evidence.

Quantifiers like "frequently", "typically", "always", "never" imply statistical claims that should not be made without counting evidence.

Returns recommendation: "allow", "require_count", or "block". Use on agent output BEFORE returning to user.

Args: text: Text to analyze (typically agent output).

Returns: Weak quantifier detection result.

validate_existential_response

CRITICAL: Validates response to existential question ("Does X exist in text?").

VALID: "YES" + textual evidence, OR "NO" + explicit denial. INVALID: meta-discourse, hedging, questions, or introducing categories that were not asked about.

Use AFTER generating response to existential questions to catch evasion.

Args: response: The agent response to validate.

Returns: Existential response validation result.
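
A sketch of the output-side gate: scan a draft answer for weak quantifiers, then validate it as a response to an existential question; the draft text is a placeholder.

async def gate_output(session, draft_answer: str) -> None:
    # Recommendation: "allow", "require_count", or "block".
    quantifiers = await session.call_tool("detect_weak_quantifiers", {"text": draft_answer})
    print(quantifiers.content)

    # Flags hedging, meta-discourse, or evasion in a yes/no answer.
    existential = await session.call_tool(
        "validate_existential_response",
        {"response": draft_answer},
    )
    print(existential.content)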

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Rixmerz/bigcontext_mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server