Glama

codebrain_generate_verified

Generates text and verifies it against word limits and regex patterns, automatically retrying until constraints are met.

Instructions

Generate with verifier loop — enforces word limits and regex schemas.

Runs codebrain_generate, then checks the output against the requested constraints. On failure, retries with a tightened instruction that names the specific problem. Gives up after max_retries attempts and returns the last output with a [codebrain warning] ... prefix.

Args:
    prompt: The task description or content request.
    system: Optional system message to steer tone / format / constraints.
    min_words: Minimum output word count (None = unbounded).
    max_words: Maximum output word count (None = unbounded).
    must_match: Regex pattern the output must match (re.search semantics).
    max_retries: Max retry attempts on verification failure (default 2).
    use_brain: If true, prepend .brain/context.md to the system prompt.
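The constraint semantics can be sketched in isolation. This is a minimal illustration, not the tool's actual checker: whitespace-delimited word counting is an assumption, while the regex check uses `re.search` as documented.

```python
import re

# Minimal sketch of the constraint semantics: word limits plus an
# optional regex that must match somewhere in the output (re.search).
# Whitespace-split word counting is an assumption for illustration.
def passes(text, min_words=None, max_words=None, must_match=None):
    n = len(text.split())
    if min_words is not None and n < min_words:
        return False
    if max_words is not None and n > max_words:
        return False
    if must_match is not None and re.search(must_match, text) is None:
        return False
    return True

print(passes("alpha beta gamma", min_words=2, max_words=5))  # True
print(passes("alpha beta", min_words=3))                     # False: too short
print(passes("ID: 42", must_match=r"ID:\s*\d+"))             # True
```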

Input Schema

Name         Required  Default
prompt       Yes       —
system       No        ""
min_words    No        None
max_words    No        None
must_match   No        None
max_retries  No        2
use_brain    No        True

Output Schema

Name    Required
result  Yes

Implementation Reference

  • The main handler function for the 'codebrain_generate_verified' tool. Decorated with @mcp.tool(), it implements a verifier loop: runs the LLM via chat(), checks output constraints (word limits, regex) using verifier.run_checks(), and retries with tightened instructions on failure up to max_retries times.
    @mcp.tool()
    async def codebrain_generate_verified(
        prompt: str,
        system: str = "",
        min_words: int | None = None,
        max_words: int | None = None,
        must_match: str | None = None,
        max_retries: int = 2,
        use_brain: bool = True,
    ) -> str:
        """Generate with verifier loop — enforces word limits and regex schemas.
    
        Runs `codebrain_generate`, then checks the output against the requested
        constraints. On failure, retries with a tightened instruction that
        names the specific problem. Gives up after `max_retries` attempts and
        returns the last output with a `[codebrain warning] ...` prefix.
    
        Args:
            prompt: The task description or content request.
            system: Optional system message to steer tone / format / constraints.
            min_words: Minimum output word count (None = unbounded).
            max_words: Maximum output word count (None = unbounded).
            must_match: Regex pattern the output must match (`re.search` semantics).
            max_retries: Max retry attempts on verification failure (default 2).
            use_brain: If true, prepend `.brain/context.md` to the system prompt.
        """
        composed_system = _compose_system(system, use_brain)
        current_prompt = prompt
        output = ""
        reason = ""
        for attempt in range(max_retries + 1):
            try:
                output = await chat(current_prompt, system=composed_system)
            except BackendError as exc:
                return f"[codebrain error] {exc}"
            ok, reason = verifier.run_checks(
                output,
                min_words=min_words,
                max_words=max_words,
                must_match=must_match,
            )
            if ok:
                return output
            current_prompt = (
                prompt + "\n\n" + verifier.tightened_retry_instruction(reason)
            )
        return f"[codebrain warning] verification failed after {max_retries} retries ({reason}):\n\n{output}"
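The verifier-loop pattern above can be demonstrated in isolation. In this sketch the model, the checker, and the retry-instruction builder are all hypothetical stand-ins, not the real codebrain backend:

```python
# Standalone sketch of the verifier-loop pattern. The "model" is a
# canned iterator and the checker is a toy word-count test; both are
# stand-ins for chat() and verifier.run_checks().
def run_checks(text, min_words):
    n = len(text.split())
    ok = n >= min_words
    return ok, "" if ok else f"only {n} words, need {min_words}"

def generate_verified(prompt, min_words, max_retries=2):
    outputs = iter(["too short", "this answer has enough words now"])  # fake model
    output, reason = "", ""
    for _ in range(max_retries + 1):
        output = next(outputs)
        ok, reason = run_checks(output, min_words)
        if ok:
            return output
        # a real implementation would tighten the prompt here, naming `reason`
    return f"[warning] verification failed ({reason}):\n\n{output}"

print(generate_verified("write something", min_words=4))
# this answer has enough words now
```

The first canned output fails the word-count check, so the loop retries and returns the second, passing output, mirroring the retry-then-return flow of the real handler.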
  • Input schema for the tool defined in the function signature: prompt (str), system (str, optional), min_words (int or None), max_words (int or None), must_match (str or None), max_retries (int, default 2), use_brain (bool, default True). Returns str.
  • The tool is registered via the @mcp.tool() decorator on line 235, where 'mcp' is a FastMCP instance created on line 12.
    @mcp.tool()
  • The verifier.run_checks() helper runs all requested checks (word count, regex) and returns (ok, reason). The tightened_retry_instruction() helper (lines 86-92) builds the retry directive naming the specific failure.
    def run_checks(
        text: str,
        text_in: str | None = None,
        min_words: int | None = None,
        max_words: int | None = None,
        must_match: str | None = None,
        check_noop: bool = False,
    ) -> tuple[bool, str]:
        """Run every requested check in order and return on first failure.
    
        `check_noop` requires `text_in` to be provided.
        """
        if check_noop:
            if text_in is None:
                return False, "check_noop requires text_in"
            ok, reason = detect_noop(text_in, text)
            if not ok:
                return False, reason
        if min_words is not None or max_words is not None:
            ok, reason = check_word_count(text, min_words, max_words)
            if not ok:
                return False, reason
        if must_match is not None:
            ok, reason = check_regex_schema(text, must_match)
            if not ok:
                return False, reason
        return True, ""
    
    
    def tightened_retry_instruction(reason: str) -> str:
        """Build a one-line retry directive that names the specific failure."""
        return (
            f"Your previous output failed verification: {reason}. "
            "Regenerate addressing that specific problem. Output only the "
            "corrected result."
        )
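Exercising the helper directly shows the shape of the retry directive. The reason string below is a made-up example of a word-count failure:

```python
# tightened_retry_instruction as defined above, called with a
# hypothetical word-count failure reason.
def tightened_retry_instruction(reason):
    return (
        f"Your previous output failed verification: {reason}. "
        "Regenerate addressing that specific problem. Output only the "
        "corrected result."
    )

msg = tightened_retry_instruction("output has 12 words, minimum is 50")
print(msg)
# Your previous output failed verification: output has 12 words,
# minimum is 50. Regenerate addressing that specific problem. Output
# only the corrected result.
```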
  • The _compose_system() helper prepends the .brain/context.md project context to the system prompt when use_brain=True.
    def _compose_system(system: str, use_brain: bool) -> str:
        """Prepend project .brain context to the user-provided system prompt."""
        if not use_brain:
            return system
        brain = _load_brain_context()
        if not brain:
            return system
        header = "Project context (from .brain/context.md):\n" + brain
        return f"{header}\n\n{system}" if system else header
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description bears full responsibility. It details retry behavior, warning prefix, and parameter effects. It lacks mention of side effects, permissions, or rate limits, but for a generation tool this is acceptable.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise: a one-line summary followed by a well-organized bullet list of parameters. Every sentence adds value with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, no annotations), the description thoroughly explains behavior (verification loop, retries, warning prefix) and all parameters. The presence of an output schema does not detract from completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage, but the description provides a detailed Args list explaining each parameter's meaning and defaults, adding significant value beyond the schema types.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it generates with a verifier loop, enforcing word limits and regex schemas. It distinguishes from sibling tools like codebrain_generate by introducing verification and retry logic.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explains the tool's use case: constrained generation with automatic retry on failure. While it doesn't explicitly state when not to use or mention alternatives, the purpose is clear and the description provides context for when to apply verification.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
