Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (zero parameters, no output schema), a minimal description is defensible. However, without annotations or an output schema, the description lacks behavioral context: it does not say what the return value includes or whether the tool has side effects. These gaps leave the agent without a full picture of the tool's behavior.
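To make the gap concrete, here is a hedged sketch of what a more complete description for a zero-parameter tool could look like. The tool name, field names, and schema layout are illustrative assumptions (loosely following common JSON-Schema-style tool specs), not the actual tool under review:

```python
# Hypothetical sketch: a zero-parameter tool spec filled out with the
# behavioral context the review flags as missing (return-value shape,
# side effects, annotations). Field names are illustrative, not tied
# to any specific framework.
tool = {
    "name": "get_server_time",  # hypothetical zero-param tool
    "description": (
        "Returns the server's current UTC time as an ISO-8601 string. "
        "Read-only; no side effects."
    ),
    "inputSchema": {"type": "object", "properties": {}},  # zero parameters
    "outputSchema": {  # tells the agent what the return value includes
        "type": "object",
        "properties": {
            "utc_time": {"type": "string", "format": "date-time"},
        },
        "required": ["utc_time"],
    },
    "annotations": {  # behavioral hints the agent can rely on
        "readOnlyHint": True,
        "idempotentHint": True,
    },
}
```

Even for a simple tool, the `outputSchema` and `annotations` blocks above answer the two questions the bare description leaves open: what comes back, and whether calling the tool changes anything.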
Complex tools with many parameters or behaviors need more documentation; simple tools need less. This dimension scales its expectations accordingly.