
Blind-Auditor-MCP

submit_audit_result

Submit audit results, including pass/fail status, identified issues, and a numeric score, to the Blind-Auditor-MCP server for code self-correction through prompt injection.

Instructions

Submit the audit result.

Input Schema

Name     Required   Description   Default
passed   Yes
issues   Yes
score    No                       0
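
For illustration, a hypothetical arguments payload is sketched below as a Python dict. The parameter meanings are inferred from the handler shown under Implementation Reference, not from the schema itself, which documents none of them.

    # Example arguments for submit_audit_result (illustrative only).
    # "passed" and "issues" are required; "score" defaults to 0 in the handler.
    example_arguments = {
        "passed": False,                      # overall verdict of the audit
        "issues": [                           # human-readable findings
            "Function lacks input validation",
            "No tests cover the error path",
        ],
        "score": 65,                          # 0-100 scale; below 80 cannot pass
    }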

Output Schema

Name     Required   Description   Default
result   Yes

Implementation Reference

  • The complete handler function for the 'submit_audit_result' tool, registered via the @mcp.tool() decorator. The function signature defines the input schema (passed: bool, issues: list[str], score: int = 0). The handler enforces a minimum score of 80 (a claimed pass with a lower score is converted to a failure), appends the result to the session's audit history, and returns a success or failure message based on the final passed value.
    # Note: this is an excerpt; `mcp`, `session`, and `rules_loader` are
    # module-level objects (and `sys` is imported) elsewhere in the server file.
    @mcp.tool()
    def submit_audit_result(passed: bool, issues: list[str], score: int = 0) -> str:
        """Submit the audit result."""
        print(f"DEBUG: submit_audit_result called: passed={passed}, score={score}", file=sys.stderr)
        
        # Hardcoded score validation
        MIN_SCORE = 80
        if passed and score < MIN_SCORE:
            passed = False
            issues.append(f"[SYSTEM ENFORCEMENT] Score ({score}) is below minimum threshold ({MIN_SCORE}). You cannot pass code with such a low score.")
        
        session.audit_history.append({
            "passed": passed,
            "issues": issues,
            "score": score,
            "retry_count": session.retry_count
        })
        
        if passed:
            session.status = "APPROVED"
            return f"✅ AUDIT PASSED (Score: {score}/100)\n\n```\n{session.current_code}\n```"
        else:
            session.retry_count += 1
            session.status = "IDLE"
            issues_formatted = "\n".join([f"- {issue}" for issue in issues])
            return f"❌ AUDIT FAILED (Score: {score}/100)\n\n**Issues:**\n{issues_formatted}\n\nRetry count: {session.retry_count}/{rules_loader.get_max_retries()}"
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. 'Submit' implies a write operation, but the description fails to disclose critical behavioral traits such as whether this requires specific permissions, what happens after submission (e.g., irreversible changes, notifications), or any rate limits. This leaves the agent with significant uncertainty about the tool's effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—a single four-word sentence—and front-loaded with the core action. However, this brevity comes at the cost of under-specification; while there is no wasted text, the description fails to provide necessary context that would help the agent use the tool effectively.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a submission tool with three parameters (two required) and no annotations, the description is incomplete. While an output schema exists (which might cover return values), the description lacks essential context about the tool's purpose, usage, behavior, and parameter meanings. This makes it inadequate for safe and effective use by an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning none of the three parameters (passed, issues, score) are documented in the schema. The description adds no parameter semantics beyond what the schema provides—it doesn't explain what 'passed' means, what constitutes an 'issue', how 'score' is used, or the relationship between these parameters. This leaves all parameters effectively undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
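
As a purely hypothetical illustration (assuming a FastMCP-style server, which the @mcp.tool() decorator in the reference suggests, where the docstring becomes the tool description), the same signature could carry its parameter semantics directly:

    # Hypothetical, better-documented variant of the same tool. Not the
    # server's actual code; shown only to illustrate the kind of parameter
    # documentation the description currently lacks.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("blind-auditor-example")

    @mcp.tool()
    def submit_audit_result(passed: bool, issues: list[str], score: int = 0) -> str:
        """Submit the final audit verdict for the code under review.

        passed: overall verdict; a pass is only honored when score >= 80.
        issues: human-readable findings; supply an empty list if none.
        score: quality score on a 0-100 scale (defaults to 0).
        """
        ...  # implementation elided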

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Submit the audit result' is a tautology that essentially restates the tool name 'submit_audit_result'. It provides no additional specificity about what resource is being submitted, to whom, or what the audit entails. While it does contain a verb ('submit') and a resource ('audit result'), it lacks any distinguishing details that would help differentiate it from potential alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives like 'submit_draft' or 'update_rules'. There is no mention of prerequisites, appropriate contexts, or exclusions. The agent must infer usage solely from the tool name and parameters, which is insufficient for clear decision-making.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
