Skip to main content
Glama

context_loop

Run comprehensive context health checks to maintain conversation integrity, detect contradictions, extract key facts, and receive clear directives for next actions.

Instructions

[ORCHESTRATOR — CALL THIS FIRST] CALL THIS TOOL every 2-3 turns and at the start of ANY task. It is the single most important tool — it replaces calling recap, conflict, ambiguity, entropy, grounding, drift, depth, and discovery tools individually.

What it does: Runs ALL context health checks in one call. Auto-extracts facts from conversation, detects contradictions, checks answer quality, and tells you exactly what to do next.

Returns a 'directive' object with:

  • action: proceed | clarify | reset | abstain | deepen | verify

  • instruction: Plain English telling you what to do

  • contextHealth: 0-1 score

  • autoExtractedFacts: Key facts pulled from conversation

  • suggestedNextTools: What tools to call next

  • constraints: Machine-readable rules you must follow

ESSENTIAL for: research tasks, multi-step workflows, long conversations, preserving context across turns, knowledge management, and any task requiring memory or fact-checking.

Minimal call: { "messages": [{"role":"user","content":"","turn":1}] } — most fields have smart defaults.

Input Schema

TableJSON Schema
NameRequiredDescriptionDefault
sessionIdNodefault
messagesNoRecent conversation messages. Include at least the last 2-3 user/assistant exchanges. Example: [{role:'user', content:'explain X', turn:1}, {role:'assistant', content:'X is...', turn:2}]. If empty, the loop runs with reduced context.
currentInputNoThe current user message or task description. Auto-inferred from last user message in messages array if omitted.
claimNoA specific assertion or answer to fact-check for confidence evaluation
discoveryQueryNoWhat capability do you need? e.g. 'store research findings' or 'compress reasoning chain'
lookbackTurnsNoHow many turns to analyze (use 15-20 for research or long conversations)
entropyThresholdNoEntropy spike detection threshold (0-1)
abstentionThresholdNoAbstention confidence threshold (0-1)
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it 'runs ALL context health checks,' 'auto-extracts facts,' 'detects contradictions,' 'checks answer quality,' and 'tells you exactly what to do next.' It also details the return structure ('directive' object with specific fields) and provides a minimal call example. However, it lacks information on potential side effects, error handling, or performance characteristics like rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately front-loaded with critical usage instructions, but it is verbose with some redundancy (e.g., repeating the tool's importance). Sentences like 'It is the single most important tool' and 'ESSENTIAL for:' could be more concise. While most content is valuable, the structure could be tighter to improve readability without losing essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, no annotations, no output schema), the description is fairly complete. It explains the tool's purpose, usage, behavior, and return structure in detail. However, it lacks an output schema, so the description must fully describe return values, which it does with the 'directive' object fields. Gaps include no error handling details and limited parameter semantics, but overall, it provides sufficient context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is high (88%), so the baseline is 3. The description adds minimal parameter semantics beyond the schema: it mentions 'most fields have smart defaults' and provides a minimal call example for 'messages.' However, it does not explain the purpose or interaction of parameters like 'sessionId,' 'claim,' or 'discoveryQuery,' nor does it clarify how parameters like 'currentInput' are 'auto-inferred.' The description compensates somewhat but not significantly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('runs ALL context health checks in one call') and resources ('auto-extracts facts from conversation, detects contradictions, checks answer quality'). It explicitly distinguishes this tool from its siblings by stating it 'replaces calling recap, conflict, ambiguity, entropy, grounding, drift, depth, and discovery tools individually,' making the differentiation unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: 'CALL THIS TOOL every 2-3 turns and at the start of ANY task' and 'ESSENTIAL for: research tasks, multi-step workflows, long conversations, preserving context across turns, knowledge management, and any task requiring memory or fact-checking.' It also implicitly suggests when not to use it (for simpler tasks not requiring these features) and positions it as a replacement for multiple sibling tools, offering clear alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Install Server

Other Tools

Latest Blog Posts

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/XJTLUmedia/Context-First-MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server