
Server Configuration

Describes the environment variables required to run the server.

Name | Required | Description | Default

No arguments

Capabilities

Features and capabilities supported by this server

Capability | Details
tools | { "listChanged": false }
prompts | { "listChanged": false }
resources | { "subscribe": false, "listChanged": false }
experimental | {}
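
As a sketch of how a client could confirm these flags at runtime, the snippet below reads the advertised capabilities from the MCP initialize handshake. It assumes the official mcp Python SDK and a stdio launch; the command and args are placeholders, not this server's documented entry point.

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def show_capabilities() -> None:
    # Placeholder launch command; substitute however you run this server.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            init = await session.initialize()
            # Mirrors the table above: no listChanged notifications,
            # no resource subscriptions.
            print(init.capabilities.tools)
            print(init.capabilities.prompts)
            print(init.capabilities.resources)

asyncio.run(show_capabilities())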

Tools

Functions exposed to the LLM to take actions

Name | Description
understand_question

Produce a protocol shell to decompose a user question.

    Args:
        question: The raw user ask to unpack.
        context: Optional background knowledge or situational frame.
        constraints: Explicit limits or success criteria.

    Returns:
        A structured prompt guiding the model to restate intent, surface
        constraints, and prepare clarifying questions before acting.
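
As an illustration, a client holding an initialized ClientSession (see the handshake sketch under Capabilities) could invoke this tool as follows; the argument values are invented for the example.

from mcp import ClientSession

async def decompose(session: ClientSession) -> str:
    # Invented example arguments; see the Args listed above.
    result = await session.call_tool(
        "understand_question",
        arguments={
            "question": "How should we shard a 2 TB Postgres table?",
            "constraints": "Zero-downtime migration",
        },
    )
    # Tool output arrives as content blocks; assume the first is text.
    return result.content[0].text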
    
verify_logic

Generate a verification protocol for a reasoning trace.

    Args:
        claim: The headline answer or assertion to validate.
        reasoning_trace: The supporting chain-of-thought or proof steps.
        constraints: Optional guardrails (requirements, risk limits).

    Returns:
        Structured prompt that audits assumptions, inference steps, and
        evidence, then proposes patches for any defects.
    
backtracking

Produce a recursive backtracking scaffold for error correction.

    Args:
        objective: Overall goal to satisfy.
        failed_step: The step or subgoal that failed.
        trace: Optional reasoning trace leading to the failure.
        constraints: Guardrails or requirements to respect.

    Returns:
        Structured prompt that rewinds to the last stable state, explores
        alternatives, and proposes a patched plan.
    
symbolic_abstract

Convert a concrete expression into abstract variables for reasoning.

    Args:
        expression: The raw text or equation to abstract.
        mapping_hint: Optional guidance for token-to-symbol mapping.
        goal: Optional downstream task (e.g., simplify, prove, generalize).

    Returns:
        Structured prompt that maps tokens to symbols, restates the problem
        abstractly, and provides a reversible mapping table.
    
design_context_architecture
Architects a custom context system based on a high-level goal (The Architect).
Returns a blueprint of Sutra components (Molecules, Cells, Organs, Thinking Models).

Use this when the user wants to build a persistent agent or complex workflow
rather than solving a single immediate task.

Args:
    goal: The user's objective (e.g., "Build a writing assistant that learns my style").
    constraints: Optional limits (e.g., "Must be lightweight").
get_technique_guide
Returns a guide to available Context Engineering techniques (The Librarian).
Use this to discover the best tool for a given task.

Args:
    category: Filter by 'reasoning', 'workflow', 'code', 'project', or 'all'.
analyze_task_complexity
Analyzes a task to recommend the most efficient tool (The Router).

Args:
    task_description: The user's prompt or task.
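
Given its role as a router, one plausible client-side pattern is to call it first and dispatch on its recommendation, as sketched below (initialized ClientSession assumed). The substring check is an assumption; the response format is not documented on this page.

from mcp import ClientSession

async def route(session: ClientSession, task: str):
    verdict = await session.call_tool(
        "analyze_task_complexity", arguments={"task_description": task}
    )
    recommendation = verdict.content[0].text
    # Assumption: the recommendation names a tool in its text; the actual
    # response shape is not documented here.
    if "understand_question" in recommendation:
        return await session.call_tool(
            "understand_question", arguments={"question": task}
        )
    return recommendation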
get_protocol_shell
Returns a Protocol Shell. Can return a specific pre-defined template or a blank shell.

Args:
    name: The name of the protocol (e.g., 'reasoning.systematic') OR a custom name.
    intent: (Optional) The intent if creating a custom shell.
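
Both modes might look like this from a client (ClientSession assumed initialized); 'reasoning.systematic' is the example name given above, while the custom name and intent are hypothetical.

from mcp import ClientSession

async def fetch_shells(session: ClientSession) -> None:
    # Pre-defined template, using the example name from the description above.
    predefined = await session.call_tool(
        "get_protocol_shell", arguments={"name": "reasoning.systematic"}
    )
    # Blank shell under a hypothetical custom name, with an intent supplied.
    custom = await session.call_tool(
        "get_protocol_shell",
        arguments={
            "name": "incident.triage",  # hypothetical custom protocol name
            "intent": "Classify and route production incidents",
        },
    )
    print(predefined.content[0].text)
    print(custom.content[0].text)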
get_molecular_template

Returns the Python function for creating molecular contexts (Module 02). Use this to programmatically construct few-shot prompts.

get_prompt_program
Returns a functional pseudo-code prompt template (Module 07).

Args:
    program_type: The type of program ('math', 'debate').
get_cell_protocol
Returns a cell protocol template describing memory behaviors.

Args:
    name: Identifier of the cell protocol (key_value, windowed, episodic).
get_organ
Returns an organ template for multi-agent orchestration (Layer 4).

Organs combine programs and cells into cohesive workflows for complex tasks
requiring multi-perspective analysis or collaborative reasoning.

Args:
    name: Identifier of the organ ('debate_council' for multi-perspective debate).

Prompts

Interactive templates invoked by user choice

Name | Description

No prompts

Resources

Contextual data attached and managed by the client

Name | Description
get_cot_molecules | Returns Chain-of-Thought templates (Module 02).
get_reference_layers | Returns the Context Engineering Layer definitions.
get_neural_fields | Returns Neural Field primitives (Modules 08-10).
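
Resources are read by URI rather than invoked like tools. This page lists their names but not their URIs, so the sketch below (official mcp Python SDK, initialized session assumed) discovers them with list_resources before reading.

from mcp import ClientSession

async def dump_resources(session: ClientSession) -> None:
    listing = await session.list_resources()
    for res in listing.resources:
        # URIs are not shown on this page, so discover them at runtime.
        contents = await session.read_resource(res.uri)
        print(res.name, contents)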

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/4rgon4ut/sutra'
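
The same lookup from Python, as a minimal sketch assuming the third-party requests package; the response schema is whatever the directory API returns and is not shown here.

import requests

resp = requests.get("https://glama.ai/api/mcp/v1/servers/4rgon4ut/sutra", timeout=10)
resp.raise_for_status()
print(resp.json())  # response schema is defined by the directory API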

If you have feedback or need assistance with the MCP directory API, please join our Discord server.