Given the tool's complexity, does the description cover enough for an agent to succeed on the first attempt?
Given no annotations, no output schema, and only a simple parameter set, the description is incomplete. It does not explain what a 'valid contract' means, how validation is performed, what the return value indicates (e.g., a boolean or a detailed error report), or what error conditions exist. For a validation tool in a blockchain context, these are significant gaps.
Complex tools with many parameters or behaviors need more documentation; simple tools need less. This dimension scales expectations accordingly.