Given the tool's complexity, does the description cover enough for an agent to succeed on the first attempt?
Given that the tool has no annotations, no output schema, and zero parameters, the description is minimal. It states the purpose but is incomplete: it provides no behavioral context, such as what the check entails, the return format, or how errors are reported. For a status-checking tool, these details are what let an agent interpret the results it gets back.
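To make the gap concrete, here is a minimal sketch contrasting a purpose-only description with one that adds the behavioral context noted above. The tool name, description wording, and return format are all hypothetical illustrations, not taken from the tool under review or any specific framework.

```python
# Hypothetical illustration: a purpose-only description vs. one that
# documents behavior, return format, and error handling.

minimal_tool = {
    "name": "check_status",
    "description": "Check the service status.",  # states purpose only
}

improved_tool = {
    "name": "check_status",
    "description": (
        "Check the health of the primary service endpoint. "
        "Performs a single HTTP GET against the health route and "
        "returns a JSON object: {'status': 'ok' | 'degraded' | 'down', "
        "'latency_ms': <int>}. If the endpoint is unreachable, returns "
        "{'status': 'down', 'error': <string>} instead of raising."
    ),
}

# The improved description tells the agent what the check entails,
# what shape the result takes, and what happens on failure.
```

Even with zero parameters, the second form gives an agent enough to interpret any result on the first call.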
Complex tools with many parameters or behaviors need more documentation; simple tools need less. This dimension scales its expectations accordingly.