Glama
temurkhan13

openclaw-output-vetter-mcp

find_swallowed_exceptions

Scan Python source code to detect try/except patterns that silently swallow exceptions or substitute fake data, flagging each with line number, severity, and code excerpt.

Instructions

Scan Python source code for try/except patterns that swallow errors or substitute fabricated mock data — the silent-fake-success pattern from the r/ClaudeAI thread. It flags pass-only handlers, mock-substitution returns, silent log-and-return handlers, and bare excepts. Each finding includes a line number, severity, and code excerpt.
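The four patterns named above can be illustrated with short, invented examples. These are the shapes the scanner is described as flagging, not code from the server itself; all function names and data here are hypothetical:

```python
import logging

logger = logging.getLogger(__name__)

# Pattern 1: pass-only handler -- the exception vanishes entirely.
def read_config(path):
    try:
        return open(path).read()
    except FileNotFoundError:
        pass  # flagged: error silently discarded, caller gets None

# Pattern 2: mock-substitution return -- fabricated data masks the failure.
def fetch_user(user_id):
    try:
        raise ConnectionError("backend unreachable")
    except ConnectionError:
        return {"id": user_id, "name": "Test User"}  # flagged: fake success

# Pattern 3: silent log-and-return -- logged, but the caller never knows.
def save_record(record):
    try:
        raise OSError("disk full")
    except OSError as exc:
        logger.debug("save failed: %s", exc)  # flagged: swallowed after logging
        return None

# Pattern 4: bare except -- catches everything, even KeyboardInterrupt.
def parse_number(text):
    try:
        return int(text)
    except:  # flagged: bare except
        return 0
```

In each case the function returns normally despite the failure, which is exactly what makes these handlers dangerous in agent-generated code: downstream checks see plausible values instead of errors.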

Input Schema

Name    Required    Default    Description
code    Yes         (none)     Python source code to scan
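The server's implementation is not shown on this page, but a minimal sketch of how two of these patterns (bare excepts and pass-only handlers) could be detected with Python's standard `ast` module gives a sense of how a `code` string maps to findings with line numbers and severities. The `find_swallowed` function and its severity labels are assumptions, not the tool's actual output format:

```python
import ast

def find_swallowed(code: str):
    """Return (line, severity, kind) tuples for suspicious handlers.

    A minimal sketch; the real tool is described as also covering
    mock-substitution returns and silent log-and-return handlers.
    """
    findings = []
    for node in ast.walk(ast.parse(code)):
        if not isinstance(node, ast.ExceptHandler):
            continue
        if node.type is None:  # `except:` with no exception type
            findings.append((node.lineno, "high", "bare-except"))
        if len(node.body) == 1 and isinstance(node.body[0], ast.Pass):
            findings.append((node.lineno, "high", "pass-only"))
    return findings

# Example: both patterns in one handler yield two findings.
snippet = "try:\n    risky()\nexcept:\n    pass\n"
print(find_swallowed(snippet))
```

Because the scan is purely syntactic (no execution of the submitted code), a tool built this way stays read-only and side-effect free, consistent with the description above.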
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It clearly states that the tool scans code and outputs findings with line number, severity, and code excerpt, indicating a read-only, non-destructive operation. It could state explicitly that there are no side effects, but it is sufficient as written.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words: the first explains purpose and patterns, the second describes output. Every part adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with full schema coverage and no output schema, the description sufficiently covers what the tool does and what it returns. It could explicitly state the return format (e.g., list of objects), but it is implied.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline score is 3. The description adds context about the patterns detected, which helps explain how the 'code' parameter is used, but it does not add format details or constraints beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb 'Scan' and resource 'Python source code', and explicitly lists the patterns it detects (pass-only handlers, mock-substitution returns, etc.), clearly differentiating it from sibling tools that deal with transcripts or verification.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for detecting specific code smells but does not specify when to use this tool versus alternatives, nor does it mention any exclusions. The names of sibling tools suggest distinct use cases, but explicit guidance is missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/temurkhan13/openclaw-output-vetter-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.