observe

Log recurring behavioral patterns like tool sequences, user preferences, or fixes to build confidence over time and identify reliable rules for AI agents.

Instructions

Record one observation of a behavioral pattern; increments its confidence counter.

    Use this to log anything you want the agent to learn over time: a tool
    sequence that worked, a user preference, a recurring fix, or a combo
    of tools used together. Call once per occurrence — repeated calls on
    the same pattern raise its confidence (1=new, 5=mature, 10=rule).

    Do NOT use this for one-off notes; those belong in regular memory.
    This tool is for patterns that may recur and become reliable.

    Idempotent on pattern key: same pattern string merges into one entry.

    Args:
        pattern: Pattern key following the convention prefix:body.
            Examples: "seq:lint->fix->lint" (tool sequence),
            "pref:style=black" (user preference),
            "fix:missing-import" (recurring fix),
            "combo:pytest+coverage" (things used together).
        category: Pattern type. One of: "sequence", "preference",
            "fix_pattern", "combo". Defaults to "sequence".
        source: Originating tool/agent name (e.g. "claude-code",
            "cursor"). Empty string means unknown. Useful for filtering.
        project: Project fingerprint. Empty string auto-detects from cwd
            (recommended). Pass explicitly only for cross-project imports.
        explain: One-line human-readable rationale for why this pattern
            matters. Surfaces in suggestions and CLAUDE.md exports.

    Returns:
        Dict with keys: "pattern", "confidence" (int), "level"
        ("seedling" | "mature" | "rule"), "created" (bool — true on
        first observation).
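
The merge-and-increment contract above can be sketched in Python. This is a minimal illustration of the documented behavior, not the server's actual implementation; the in-memory store is an assumption, while the level thresholds (5 = mature, 10 = rule) come from the docstring:

```python
# Hypothetical sketch of the observe tool's documented contract:
# idempotent on the pattern key, one confidence point per call.
_store = {}  # pattern key -> record (stand-in for the server's storage)

def observe(pattern, category="sequence", source="", project="", explain=""):
    created = pattern not in _store
    rec = _store.setdefault(pattern, {
        "confidence": 0, "category": category,
        "source": source, "project": project, "explain": explain,
    })
    rec["confidence"] += 1  # each call records one observation
    c = rec["confidence"]
    # 1 = new ("seedling"), 5 = mature, 10 = rule
    level = "rule" if c >= 10 else "mature" if c >= 5 else "seedling"
    return {"pattern": pattern, "confidence": c,
            "level": level, "created": created}
```

Repeated calls with the same pattern string merge into one entry: the first call returns `created: True` at level "seedling", and the fifth call on the same key would report level "mature".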
    

Input Schema

Name      Required  Description  Default
pattern   Yes       -            -
category  No        -            sequence
source    No        -            -
project   No        -            -
explain   No        -            -
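
Per the schema, only `pattern` is required and `category` falls back to "sequence". A call payload might look like the following sketch, assuming the standard MCP `tools/call` request shape; the argument values are illustrative:

```python
import json

# Hypothetical MCP "tools/call" request for the observe tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "observe",
        "arguments": {
            "pattern": "pref:style=black",       # required
            "category": "preference",            # optional, default "sequence"
            "source": "claude-code",             # optional, "" means unknown
            "explain": "User prefers Black formatting",  # optional
        },
    },
}
print(json.dumps(request, indent=2))
```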

Output Schema

Name      Required  Description  Default
(no fields documented)

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does an excellent job disclosing behavioral traits: idempotence ('same pattern string merges'), confidence progression (1=new, 5=mature, 10=rule), call frequency guidance ('Call once per occurrence'), and the tool's learning purpose. It doesn't mention rate limits or authentication needs, but covers core behavior thoroughly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections: purpose, usage guidelines, behavioral notes, then detailed parameter documentation. Some sentences could be tightened (e.g., 'Call once per occurrence — repeated calls on the same pattern raise its confidence' could be more concise), but overall information density is high.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 5-parameter tool with no annotations and 0% schema coverage, the description provides comprehensive context: clear purpose, usage guidelines, behavioral transparency, full parameter semantics, and even documents the return format despite having an output schema. Nothing essential appears missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate fully, and it does. Each of the 5 parameters gets a detailed explanation with examples, conventions, defaults, and usage guidance. The 'pattern' parameter gets particularly rich documentation, with multiple example formats and prefixes.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('record', 'log', 'increments confidence counter') and distinguishes it from sibling tools by specifying it's for behavioral patterns that recur (not one-off notes). It explicitly contrasts with 'regular memory' for one-off notes, showing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use ('log anything you want the agent to learn over time') and when NOT to use ('Do NOT use this for one-off notes'). It gives concrete examples of appropriate patterns and distinguishes from alternatives like 'regular memory'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/yakuphanycl/instinct'