Glama

watch_signal

Automatically stops simulation when an RTL signal matches a specified condition, enabling efficient debugging without manual probing.

Instructions

Set a watchpoint to stop simulation when a signal matches a condition.

The simulation will automatically stop at the exact clock edge where the condition becomes true. Much more efficient than manual probing.

Args:
  • signal: Full hierarchical signal path (e.g. "top.dut.r_state[3:0]").
  • op: Comparison operator ("==", "!=", ">", "<", ">=", "<=").
  • value: Target value in Verilog format (e.g. "8'h10", "4'b1010").
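The documented Verilog value format can be sanity-checked locally before calling the tool. A minimal sketch — `parse_verilog_literal` is a hypothetical helper, not part of the tool, which passes the value string through to the simulator unparsed:

```python
import re

def parse_verilog_literal(text: str) -> int:
    """Parse a sized Verilog literal like 8'h10 or 4'b1010 into an int.
    Illustrative only; validates the format the `value` argument expects."""
    m = re.fullmatch(r"(\d+)'([bodh])([0-9a-fA-F_]+)", text)
    if not m:
        raise ValueError(f"not a sized Verilog literal: {text}")
    _width, base, digits = m.groups()
    radix = {"b": 2, "o": 8, "d": 10, "h": 16}[base]
    # int() rejects digits invalid for the radix (e.g. "4'b1020")
    return int(digits.replace("_", ""), radix)

print(parse_verilog_literal("8'h10"))    # 16
print(parse_verilog_literal("4'b1010"))  # 10
```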

Input Schema

Name     Required   Description   Default
signal   Yes
op       No                       ==
value    No

Output Schema

Name     Required   Description   Default
result   Yes
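For reference, a plausible reconstruction of the input schema implied by the table above. The property types and the absence of per-property descriptions are assumptions consistent with the Parameters review further down; only the names, required flags, and defaults come from the table:

```python
# Hypothetical JSON Schema matching the Input Schema table above.
input_schema = {
    "type": "object",
    "properties": {
        "signal": {"type": "string"},                   # required, no default
        "op": {"type": "string", "default": "=="},      # optional
        "value": {"type": "string", "default": ""},     # optional
    },
    "required": ["signal"],
}

print(input_schema["required"])  # ['signal']
```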

Implementation Reference

  • The watch_signal function executes the tool logic by setting a watchpoint via the bridge.
    async def watch_signal(signal: str, op: str = "==", value: str = "") -> str:
        """Set a watchpoint to stop simulation when a signal matches a condition.
    
        The simulation will automatically stop at the exact clock edge where
        the condition becomes true. Much more efficient than manual probing.
    
        Args:
            signal: Full hierarchical signal path (e.g. "top.dut.r_state[3:0]").
            op: Comparison operator ("==", "!=", ">", "<", ">=", "<=").
            value: Target value in Verilog format (e.g. "8'h10", "4'b1010").
        """
        bridge = _get_bridge()
        result = await bridge.execute(f"__WATCH__ {signal} {op} {value}")
        return f"Watchpoint set: {result}"
  • The watch_signal tool is registered using the @mcp.tool() decorator.
    @mcp.tool()
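The command string the function sends over the bridge can be reproduced with a small standalone helper. This is a sketch mirroring the f-string in the source above; the operator whitelist is an added illustrative check, not part of the original implementation:

```python
def build_watch_command(signal: str, op: str = "==", value: str = "") -> str:
    """Build the bridge command string used by watch_signal.
    Mirrors the f-string in the implementation; the operator check
    is illustrative and not present in the original code."""
    allowed = {"==", "!=", ">", "<", ">=", "<="}
    if op not in allowed:
        raise ValueError(f"unsupported operator: {op}")
    return f"__WATCH__ {signal} {op} {value}"

print(build_watch_command("top.dut.r_state[3:0]", "==", "4'b1010"))
# __WATCH__ top.dut.r_state[3:0] == 4'b1010
```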
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses critical timing behavior ('stop at the exact clock edge where the condition becomes true'), but omits safety and prerequisite context: whether the watchpoint persists across simulation restarts, whether it requires debugger mode, and how it interacts with multiple concurrent watchpoints (a sibling `watch_clear` tool exists).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by behavioral detail and value proposition. The Args section is structured and contains zero redundant text. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter tool with an output schema, the description adequately covers inputs and behavior. It appropriately omits return value details (covered by output schema). Minor deduction for not mentioning prerequisites (e.g., simulator connection state) or lifecycle management (relationship to `watch_clear`).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Given 0% schema description coverage (properties lack descriptions), the description fully compensates via the Args section. It defines 'signal' with hierarchical path examples, 'op' with explicit valid operators, and 'value' with Verilog format examples—adding essential semantic meaning absent from the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Set[s] a watchpoint to stop simulation when a signal matches a condition'—a specific verb with clear resource and outcome. It implicitly distinguishes from sibling `set_breakpoint` by using 'watchpoint' and 'signal' (vs. code breakpoints), and from `get_signal_value` by emphasizing the automatic stopping behavior.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied guidance by stating it is 'Much more efficient than manual probing,' suggesting when to prefer this tool. However, it lacks explicit when-to-use/when-not-to-use rules regarding siblings like `set_breakpoint` (code vs. signal breakpoints) or `watch_clear` (management of multiple watchpoints).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/hslee-cmyk/xcelium-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.