Given the tool's complexity, does the description cover enough for an agent to succeed on the first attempt?
Given the tool's simplicity (zero parameters, no output schema, no annotations), the description adequately covers the basic purpose. However, it lacks detail on the output (e.g., what data is returned and in what format) and on behavioral context (e.g., how results are filtered or sorted), both of which an agent would need to use the tool effectively, especially in the absence of annotations.
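As an illustration, a zero-parameter tool can still document its output shape and behavior directly in the description. The sketch below uses an entirely hypothetical tool name and fields (nothing here comes from the tool under review); it shows the kind of description that would close the gaps noted above:

```python
# Hypothetical tool definition; the name, fields, and description text
# are invented for this sketch, not taken from the reviewed tool.
tool = {
    "name": "list_recent_orders",  # hypothetical zero-parameter tool
    "inputSchema": {"type": "object", "properties": {}},  # no parameters
    "description": (
        "List the 20 most recent orders, sorted newest-first. "
        "Returns a JSON array of objects with 'id', 'created_at' "
        "(ISO 8601), and 'status' fields. Read-only; results are "
        "not filtered by user."
    ),
}

# A reviewer can check that the description covers output and behavior,
# not just purpose, by scanning for the relevant statements:
for keyword in ("Returns", "sorted", "Read-only"):
    assert keyword in tool["description"]
```

Even without an output schema or annotations, a description written this way tells the agent what comes back, how it is ordered, and that the call has no side effects.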
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.