Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and a single simple parameter (100% schema coverage), the description is incomplete. It lacks behavioral detail (e.g., what 'list' returns, and default behavior versus behavior with 'all'), usage context, and differentiation from sibling tools, making it inadequate for a tool that may have nuanced behavior in a Docker environment.
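A minimal sketch of what a more complete definition might look like, assuming a hypothetical Docker container-listing tool. The tool name, parameter name, and wording are illustrative, not taken from any real schema; the point is that the description now states default behavior, return shape, and how the tool differs from a sibling:

```python
# Hypothetical enriched tool definition. Names (docker_container_list,
# docker_container_inspect) are illustrative assumptions, not real tools.
tool = {
    "name": "docker_container_list",
    "description": (
        "List Docker containers. By default returns only running "
        "containers; set 'all' to true to also include stopped ones. "
        "Returns one entry per container with ID, image, and status. "
        "For full detail on a single container, use the sibling tool "
        "docker_container_inspect instead."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "all": {
                "type": "boolean",
                "default": False,
                "description": "Include stopped containers as well as running ones.",
            }
        },
    },
}

# The description now covers each gap the review flags:
desc = tool["description"].lower()
assert "by default" in desc          # default behavior stated
assert "returns" in desc             # what 'list' returns
assert "sibling" in desc             # differentiation from sibling tools
```

An agent reading this description can predict the output of a bare call, knows when to pass 'all', and knows when to reach for a different tool instead — the three failure points the original description left open.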
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.