midscene_aiHover

Automate web testing by hovering over elements using natural language descriptions. This tool enables precise interaction with web elements for automated testing workflows.

Instructions

Moves the mouse cursor to hover over an element identified by a natural language selector.

Input Schema

| Name   | Required | Description                                                | Default |
|--------|----------|------------------------------------------------------------|---------|
| locate | Yes      | Use natural language to describe the element to hover over | —       |
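
For reference, invoking this tool from an MCP client comes down to a single JSON-RPC `tools/call` request. The sketch below shows the assumed payload shape per the MCP specification; the `id` and the `locate` text are illustrative placeholders, not values taken from this server.

```python
import json

# Sketch of a JSON-RPC "tools/call" request for midscene_aiHover,
# following the standard MCP tool-call shape.
request = {
    "jsonrpc": "2.0",
    "id": 1,  # illustrative request id
    "method": "tools/call",
    "params": {
        "name": "midscene_aiHover",
        "arguments": {
            # The single required parameter: a natural-language
            # description of the element to hover over.
            "locate": "the 'Pricing' link in the top navigation bar",
        },
    },
}

print(json.dumps(request, indent=2))
```

How the transport delivers this request (stdio, HTTP, etc.) depends on how the server is installed; the payload shape is the same either way.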
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full documentation burden but states only the basic action. It does not disclose behavioral traits such as whether the target element must be visible, whether the tool waits for the element to appear, what happens on failure, or how it interacts with browser state. For a UI automation tool with zero annotation coverage, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the core functionality without any wasted words. It's appropriately sized and front-loaded with the essential action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with 100% schema coverage but no annotations or output schema, the description provides the minimum viable explanation of purpose. However, as a UI interaction tool with potential behavioral complexity (e.g., timing, visibility requirements), it should ideally include more context about how the hover operation works.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single 'locate' parameter. The description adds no additional meaning beyond what's in the schema (natural language selector for element identification), meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Moves the mouse cursor to hover over') and target ('an element identified by a natural language selector'), distinguishing it from siblings like midscene_aiTap (click) or midscene_aiInput (text entry). It uses precise verb+resource language without tautology.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for hovering over elements via natural language selectors, but provides no explicit guidance on when to use this tool versus alternatives like midscene_aiTap for clicking or midscene_aiWaitFor for waiting. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/MauroCor/mcp-midscene'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.