Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no annotations, no output schema, and only a simple input schema, the description must carry the entire documentation burden, and it is incomplete. It says nothing about the VNC context, behavioral traits (e.g., how typing is simulated), error handling, or how the tool differs from its siblings. For a tool that drives a remote system, that context is essential for effective first-attempt use.
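To make the gap concrete, here is a minimal sketch of what a fuller definition might look like. Everything in it (the tool name `vnc_type_text`, the sibling `vnc_send_keys`, the `CONNECTION_LOST` error) is hypothetical, invented purely to illustrate the four missing elements, not taken from the tool under review.

```python
# Hypothetical sketch of a more complete tool definition for a VNC typing
# tool. All names, fields, and error codes are illustrative assumptions.
improved_tool = {
    "name": "vnc_type_text",  # hypothetical name
    "description": (
        "Type text into the remote desktop over the active VNC session. "
        "Keystrokes are simulated one at a time with a small inter-key "
        "delay, so long strings take proportionally longer. Requires an "
        "established VNC connection; fails with CONNECTION_LOST if the "
        "session drops. Use vnc_send_keys instead for modifier "
        "combinations (e.g., Ctrl+C)."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "text": {
                "type": "string",
                "description": "Literal text to type into the session.",
            }
        },
        "required": ["text"],
    },
}

# The description now covers the four missing elements: execution context
# (an active VNC session), behavior (per-keystroke simulation with delay),
# error handling (CONNECTION_LOST), and differentiation from a sibling
# tool (vnc_send_keys for key combinations).
print("VNC" in improved_tool["description"])
```

With a description like this, an agent can predict latency, anticipate the failure mode, and choose the right sibling tool without trial and error.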
Complex tools with many parameters or behaviors need more documentation; simple tools need less. This dimension scales its expectations accordingly.