Given the tool's complexity, does the description cover enough for an agent to succeed on the first attempt?
For a mathematical calculation tool with two parameters, 0% schema coverage, no annotations, and no output schema, the description is insufficient. It does not explain what the tool returns, how edge cases are handled, what mathematical constraints apply to the inputs, or what error conditions can occur. The context signals indicate this is a non-trivial mathematical operation that needs more explanation than it currently has.
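To make the gap concrete, here is a sketch of what a better-documented two-parameter math tool could look like, expressed as a JSON-Schema-style dictionary. The tool name (`power`), its parameters, and all field values are hypothetical illustrations, not details of the actual tool under review; the structure assumes a JSON-Schema-based tool registry in the style of MCP tool definitions.

```python
# Hypothetical "power" tool: every parameter is described (100% schema
# coverage), the return value is specified, and error conditions are named.
power_tool = {
    "name": "power",
    "description": (
        "Raise `base` to `exponent` and return the result as a number. "
        "Returns an error for domain violations, e.g. a negative base "
        "with a fractional exponent, or a result exceeding float range."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "base": {
                "type": "number",
                "description": "The base value; any finite number.",
            },
            "exponent": {
                "type": "number",
                "description": "The exponent; fractional values are valid "
                               "only when base is non-negative.",
            },
        },
        "required": ["base", "exponent"],
    },
    "outputSchema": {
        "type": "object",
        "properties": {
            "result": {
                "type": "number",
                "description": "The computed value of base ** exponent.",
            },
        },
        "required": ["result"],
    },
}

# Schema coverage as used above: the fraction of parameters that carry
# a non-empty description.
props = power_tool["inputSchema"]["properties"]
coverage = sum(1 for p in props.values() if p.get("description")) / len(props)
print(coverage)  # 1.0
```

With this definition, an agent can predict the return shape and anticipate failures before its first call, which is exactly what the 0%-coverage description fails to support.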
Complex tools with many parameters or behaviors need more documentation; simple tools need less. This dimension scales its expectations accordingly.