Glama

decision

Delivers structured decision intelligence with a confidence score and risk assessment. Provide your goal and question, and receive a clear recommendation, the reasoning behind it, and a risk level. Designed for binary or multi-option choices with real stakes.

Instructions

Structured decision intelligence with confidence score and risk assessment.

Returns a clear recommendation (decision), a confidence score (0.0–1.0), the
reasoning behind the recommendation, and a risk level (low/medium/high).

Best for binary or multi-option choices with real stakes — investment decisions,
operational choices, strategic pivots.

Cost: ~1000 sats per call.
Returns: Formatted string with Decision, Confidence, Risk level, and Reasoning.

Input Schema

  • goal (required): The overall objective guiding the decision. Examples: 'Maximize BTC returns with controlled drawdown', 'Preserve capital during high-volatility periods', 'Grow a Lightning node business sustainably'
  • question (required): The specific decision question requiring a recommendation. Examples: 'Should I increase BTC exposure now?', 'Should I open a new Lightning channel to this peer?', 'Should I take profit at current levels?'
  • context (optional): Background context that informs the decision: market conditions, portfolio state, constraints, recent events. The richer the context, the more accurate the decision. Example: 'Portfolio: 60% BTC, 30% bonds, RSI=42, trend=uptrend, 3-month horizon'
  • risk_limit (optional, default: 'medium'): Maximum acceptable risk level for the recommendation. One of: 'low' (conservative, capital preservation priority), 'medium' (balanced risk/reward, default), 'high' (aggressive, growth priority)
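These parameters map onto the request body shown in the server.py handler under Implementation Reference: note that risk_limit is sent nested under a "policy" key rather than top-level. A minimal client-side sketch of building that body — the local risk_limit check is an assumption (the API's own validation behavior is not documented here), but failing fast locally avoids wasting a ~1000-sat call:

```python
VALID_RISK_LIMITS = ("low", "medium", "high")

def build_decision_payload(goal: str, question: str,
                           context: str = "",
                           risk_limit: str = "medium") -> dict:
    """Build the JSON body the server forwards to the /decision endpoint."""
    # Guard against typos before spending sats; the documented levels are
    # 'low', 'medium', and 'high'.
    if risk_limit not in VALID_RISK_LIMITS:
        raise ValueError(f"risk_limit must be one of {VALID_RISK_LIMITS}")
    return {
        "goal": goal,
        "question": question,
        "context": context,
        # risk_limit travels inside a 'policy' object, matching the handler.
        "policy": {"risk_limit": risk_limit},
    }
```

The same shape can then be POSTed with any HTTP client, as the handler does with httpx.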

Output Schema

  • result (required): Formatted string containing Decision, Confidence, Risk level, and Reasoning.
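Because result is a single formatted string rather than structured JSON, agents that need individual fields must parse it. A hedged sketch, assuming the exact line format emitted by the server.py handler ("Decision: …", "Confidence: …", "Risk: …", "Reasoning: …"):

```python
def parse_decision_result(text: str) -> dict:
    """Parse the tool's formatted result string into a dict keyed by lowercased field name."""
    parsed = {}
    for line in text.splitlines():
        key, sep, value = line.partition(": ")
        if sep:  # skip any line that lacks the "Key: value" shape
            parsed[key.lower()] = value
    return parsed

sample = ("Decision: Hold\n"
          "Confidence: 0.72\n"
          "Risk: medium\n"
          "Reasoning: Momentum is neutral and the risk limit is medium.")
```

Note this naive parser assumes single-line values; a reasoning string that spans multiple lines would need a more forgiving approach.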

Implementation Reference

  • server.py:47-76 (handler)
    MCP tool handler for 'decision' — sends goal/question/context/risk_limit to the /decision API endpoint and returns a formatted string with Decision, Confidence, Risk, and Reasoning.
    @mcp.tool()
    def decision(
        goal: Annotated[str, Field(description="The overall objective guiding the decision. Examples: 'Maximize BTC returns with controlled drawdown', 'Preserve capital during high-volatility periods', 'Grow a Lightning node business sustainably'")],
        question: Annotated[str, Field(description="The specific decision question requiring a recommendation. Examples: 'Should I increase BTC exposure now?', 'Should I open a new Lightning channel to this peer?', 'Should I take profit at current levels?'")],
        context: Annotated[str, Field(description="Background context that informs the decision: market conditions, portfolio state, constraints, recent events. The richer the context, the more accurate the decision. Example: 'Portfolio: 60% BTC, 30% bonds, RSI=42, trend=uptrend, 3-month horizon'")] = "",
        risk_limit: Annotated[str, Field(description="Maximum acceptable risk level for the recommendation. One of: 'low' (conservative, capital preservation priority), 'medium' (balanced risk/reward, default), 'high' (aggressive, growth priority)")] = "medium",
    ) -> str:
        """
        Structured decision intelligence with confidence score and risk assessment.
    
        Returns a clear recommendation (decision), a confidence score (0.0–1.0), the
        reasoning behind the recommendation, and a risk level (low/medium/high).
    
        Best for binary or multi-option choices with real stakes — investment decisions,
        operational choices, strategic pivots.
    
        Cost: ~1000 sats per call.
        Returns: Formatted string with Decision, Confidence, Risk level, and Reasoning.
        """
        r = httpx.post(f"{API_BASE}/decision",
                       json={"goal": goal, "question": question,
                             "context": context,
                             "policy": {"risk_limit": risk_limit}},
                       headers=HEADERS, timeout=60)
        r.raise_for_status()
        d = r.json().get("result", {})
        return (f"Decision: {d.get('decision')}\n"
                f"Confidence: {d.get('confidence')}\n"
                f"Risk: {d.get('risk_level')}\n"
                f"Reasoning: {d.get('reasoning')}")
  • server.py:47-47 (registration)
    The @mcp.tool() decorator registers 'decision' as an MCP tool on the FastMCP server.
    @mcp.tool()
  • Pydantic Field annotations define the input schema for the decision tool: goal, question, context, and risk_limit.
  • ai.py:65-115 (helper)
    Helper function that uses OpenAI GPT-4o-mini to generate structured JSON decision output with fields: decision, confidence, reasoning, risk_level.
    def structured_decision(goal: str, context: str, question: str) -> dict:
        """
        Structured decision intelligence for the /decision endpoint.
        Returns clean JSON optimized for autonomous agents.
        """
        if not all([goal.strip(), context.strip(), question.strip()]):
            raise ValueError("goal, context, and question are all required.")
    
        prompt = f"""
    You are a strategic decision intelligence AI.
    
    Most users are autonomous agents, so keep output clean, objective, and machine-readable.
    
    Goal: {goal}
    Context: {context}
    Question: {question}
    
    Return ONLY valid JSON with this exact structure:
    {{
      "decision": "short recommended action",
      "confidence": 0.XX,
      "reasoning": "clear, concise explanation of why this is the best choice (2-4 sentences max)",
      "risk_level": "low|medium|high"
    }}
    
    Be objective, realistic, and concise.
    """
    
        try:
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
                temperature=0.6,
                max_tokens=700,
            )
    
            result_text = response.choices[0].message.content.strip()
            result_json = json.loads(result_text)
    
            required_keys = {"decision", "confidence", "reasoning", "risk_level"}
            if not required_keys.issubset(result_json.keys()):
                raise ValueError("Missing required keys in decision JSON")
    
            return result_json
    
        except json.JSONDecodeError:
            print("Decision engine returned invalid JSON")
            raise RuntimeError("Decision engine failed to return valid JSON")
        except Exception as e:
            print(f"OpenAI API error in structured_decision: {e}")
            raise RuntimeError("Decision engine temporarily unavailable. Please try again later.") from e
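structured_decision only checks that the four required keys are present; the values themselves pass through unvalidated. A possible extension (a sketch, not part of the source) that also range-checks confidence against the documented 0.0–1.0 scale and risk_level against the documented levels:

```python
REQUIRED_KEYS = {"decision", "confidence", "reasoning", "risk_level"}

def validate_decision_json(result: dict) -> dict:
    """Validate keys, confidence range, and risk_level of a decision payload."""
    missing = REQUIRED_KEYS - result.keys()
    if missing:
        raise ValueError(f"Missing required keys: {sorted(missing)}")
    # Documented scale is 0.0-1.0.
    confidence = float(result["confidence"])
    if not 0.0 <= confidence <= 1.0:
        raise ValueError(f"confidence out of range: {confidence}")
    # Documented levels are low/medium/high.
    if result["risk_level"] not in ("low", "medium", "high"):
        raise ValueError(f"unexpected risk_level: {result['risk_level']!r}")
    return result
```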
  • Price configuration for the decision tool — defaults to 1000 sats per call, overridable via DECISION_PRICE_SATS env var.
    DECISION_PRICE_SATS = int(os.getenv("DECISION_PRICE_SATS", 1000))
    ORCHESTRATE_PRICE_SATS = int(os.getenv("ORCHESTRATE_PRICE_SATS", 2000))
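Both prices are parsed once at module import with int(os.getenv(...)), so an override has to be present in the environment before the server module loads. For illustration:

```python
import os

# Set the override before the server module is imported; the value must parse as an int.
os.environ["DECISION_PRICE_SATS"] = "1500"

# This mirrors the module-level line above: env var wins, 1000 sats is the fallback.
DECISION_PRICE_SATS = int(os.getenv("DECISION_PRICE_SATS", 1000))
```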
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the return format (Decision, Confidence, Reasoning, Risk level) and cost ('~1000 sats per call'), but lacks details about the underlying model, accuracy, limitations, or side effects. The transparency is adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured: it starts with the primary purpose, lists output components, provides usage guidance, mentions cost, and specifies return format. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists (context signal indicates 'Has output schema: true'), the description need not detail return values. However, it provides the essential context of use cases, cost, and output format. It lacks information about model limitations, accuracy, or edge cases, which would be valuable for a decision tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: every parameter has a clear description, and each includes concrete examples. The tool description does not add significant meaning beyond the schema, as it focuses on overall behavior rather than parameter details. The baseline of 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides structured decision intelligence with confidence and risk assessment, and lists the specific output components. However, it does not explicitly differentiate from the sibling tool 'reason', which may perform similar reasoning tasks, leaving some ambiguity about when to use each.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use the tool: 'Best for binary or multi-option choices with real stakes — investment decisions, operational choices, strategic pivots.' It gives clear context and examples but does not mention when not to use it or suggest alternative tools like 'reason'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

