Given the tool's complexity, does the description cover enough for an agent to succeed on the first attempt?
Given a mathematical function with no annotations, no output schema, and 0% schema description coverage, the description is incomplete. It omits important contextual details: the return type (likely an integer, possibly subject to overflow), error conditions, computational limits for large n, and the mathematical definition (including 0! = 1). Even for a simple tool, this context would help the agent use it correctly on the first attempt.
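As a sketch of what closing these gaps might look like, a hypothetical factorial tool could document its definition, return behavior, and error conditions directly in its docstring. The name `compute_factorial` and the specific limits mentioned are illustrative assumptions, not details from the tool under review:

```python
from math import factorial

def compute_factorial(n: int) -> int:
    """Compute n! (the factorial of n).

    Definition: n! = n * (n-1) * ... * 1, with 0! = 1 by convention.

    Args:
        n: A non-negative integer. Large values (e.g. n > 10_000) may be
           slow and produce very large results; Python integers do not
           overflow, but other runtimes or transports may truncate them.

    Returns:
        The exact integer value of n!.

    Raises:
        ValueError: If n is negative or not an integer.
    """
    # Validate input before delegating to the standard library,
    # so the error condition matches the documented contract.
    if not isinstance(n, int) or n < 0:
        raise ValueError("n must be a non-negative integer")
    return factorial(n)
```

A description at this level of detail lets an agent predict the result type, the edge-case behavior at n = 0, and exactly which inputs will fail, without trial calls.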
Complex tools with many parameters or behaviors need more documentation; simple tools need less. This dimension scales expectations accordingly.