
check

Verify planned actions against stored corrections to identify necessary adjustments before proceeding.

Instructions

Pre-flight check: see if any corrections apply before taking an action.

Call this before doing something to see if there's a stored correction
that should change your approach.

Args:
    planned_action: Describe what you're about to do.
    namespace: Filter by namespace.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| planned_action | Yes | | |
| namespace | No | | default |
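For illustration only (the values below are invented), a call against this schema could pass arguments such as:

```python
# Illustrative arguments for the 'check' tool; only planned_action is
# required, and namespace falls back to "default" when omitted.
args = {
    "planned_action": "Force-push a rebased branch to main",
    "namespace": "default",
}
```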

Output Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| result | Yes | | |

Implementation Reference

  • The 'check' tool implementation in 'neveronce/server.py'. It uses the memory instance to find corrections for a given action.
```python
@mcp.tool()
def check(planned_action: str, namespace: str = "default") -> str:
    """Pre-flight check: see if any corrections apply before taking an action.

    Call this before doing something to see if there's a stored correction
    that should change your approach.

    Args:
        planned_action: Describe what you're about to do.
        namespace: Filter by namespace.
    """
    mem = _get_mem()
    matches = mem.check(planned_action, namespace=namespace)
    if not matches:
        return "No corrections apply. Proceed."

    lines = ["CORRECTIONS APPLY — review before proceeding:\n"]
    for m in matches:
        lines.append(f"  #{m['id']}: {m['content']}")
        if m.get("context"):
            lines.append(f"    Context: {m['context']}")
    return "\n".join(lines)
```
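The `_get_mem()` helper and the matching logic behind `mem.check` are not shown on this page. Below is a minimal sketch of how such a lookup could behave, assuming simple keyword overlap scoped by namespace; the actual neveronce implementation may match corrections differently (e.g. via embeddings).

```python
# Hypothetical stand-in for the memory instance behind the 'check' tool.
# The real neveronce matching strategy is not shown on this page; this stub
# assumes naive word-overlap matching within a namespace.
class StubMemory:
    def __init__(self):
        self._corrections = []  # each: {"id", "content", "context", "namespace"}

    def store(self, content, context="", namespace="default"):
        self._corrections.append({
            "id": len(self._corrections) + 1,
            "content": content,
            "context": context,
            "namespace": namespace,
        })

    def check(self, planned_action, namespace="default"):
        # Return corrections in the same namespace whose text shares
        # at least one word with the planned action.
        words = set(planned_action.lower().split())
        return [
            c for c in self._corrections
            if c["namespace"] == namespace
            and words & set(c["content"].lower().split())
        ]

mem = StubMemory()
mem.store("Never force-push to main", context="rewrote shared history in January")
matches = mem.check("force-push a quick fix to main")  # overlaps on "force-push", "to", "main"
```

With a stub like this, the tool above would render `matches` as a "CORRECTIONS APPLY" block, while an empty list yields "No corrections apply. Proceed."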
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that corrections are 'stored' (implying persistent state and a read operation) and that results may 'change your approach' (affecting downstream logic). However, it omits the output format, what happens when no corrections exist, and whether the check itself has side effects such as logging.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently organized with the core purpose front-loaded ('Pre-flight check'). The 'Args:' section, while slightly informal, effectively segments parameter documentation. No filler sentences—every line contributes either purpose definition or parameter explanation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists, the description appropriately skips return value details. However, for a tool interacting with a correction/memory system (evidenced by siblings: store, forget, recall, correct), the description inadequately explains what constitutes a 'correction' or how the namespace isolation works. Adequate for basic invocation but missing domain context that would help agents formulate effective planned_action strings.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates by documenting both parameters inline: 'planned_action: Describe what you're about to do' and 'namespace: Filter by namespace'. While 'namespace' documentation is somewhat circular, it provides essential semantic meaning for both parameters that the JSON schema completely lacks.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states this is a 'Pre-flight check' to 'see if any corrections apply before taking an action', providing a specific verb (check) and resource (corrections). It distinguishes itself from the sibling 'correct' tool (which likely applies corrections) by positioning itself as a read-only validation step. However, it assumes familiarity with what a 'correction' means in this system without defining the domain concept.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'Call this before doing something to see if there's a stored correction that should change your approach.' This provides clear temporal context (pre-action) and purpose (to validate approach). Lacks explicit 'when not to use' or direct reference to siblings like 'correct' or 'store', but the guidance is actionable and specific.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/WeberG619/neveronce'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.