Given the tool's complexity, does the description cover enough for an agent to succeed on the first attempt?
Given the complexity of backtesting tools and the lack of annotations and an output schema, the description is incomplete. It does not explain what the returned 'results' include (e.g., summary statistics, charts), how the tool relates to its sibling tools, or behavioral aspects such as idempotency. For a tool that likely returns critical financial data, this leaves significant gaps for an AI agent.
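To make the gap concrete, here is a minimal sketch of what a more complete definition could look like, assuming a JSON-Schema-style tool interface. The tool name run_backtest, its parameters, the sibling tools (list_strategies, compare_backtests), and the result fields are all hypothetical illustrations, not the actual API.

```python
# Hypothetical sketch: a backtest tool definition that documents its output
# shape, idempotency, and relationship to sibling tools. All names and
# fields here are illustrative assumptions.
run_backtest_tool = {
    "name": "run_backtest",
    "description": (
        "Run a historical backtest for a single strategy. Returns a "
        "'results' object containing summary statistics (total return, "
        "Sharpe ratio, max drawdown) and a URL to an equity-curve chart. "
        "Idempotent: re-running with identical parameters returns the "
        "cached result. Use 'list_strategies' to discover valid strategy "
        "IDs and 'compare_backtests' to contrast completed runs."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "strategy_id": {
                "type": "string",
                "description": "A strategy ID obtained from list_strategies.",
            },
            "start_date": {"type": "string", "format": "date"},
            "end_date": {"type": "string", "format": "date"},
        },
        "required": ["strategy_id", "start_date", "end_date"],
    },
    # Declaring the output shape closes the largest gap noted above: the
    # agent can plan against known fields instead of guessing.
    "output_schema": {
        "type": "object",
        "properties": {
            "results": {
                "type": "object",
                "properties": {
                    "total_return": {"type": "number"},
                    "sharpe_ratio": {"type": "number"},
                    "max_drawdown": {"type": "number"},
                    "chart_url": {"type": "string", "format": "uri"},
                },
            },
        },
    },
}
```

Each addition maps to a gap identified above: the output schema documents what 'results' contain, the description cross-references sibling tools, and the idempotency note covers the behavioral contract.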
Complex tools with many parameters or behaviors need more documentation; simple tools need less. This dimension scales expectations accordingly.