mobile_click_on_screen_at_coordinates
Simulates screen taps at specific coordinates for mobile automation, testing, and interaction. Pair it with element detection tools to find the precise position to tap.
Instructions
Click on the screen at given x,y coordinates. If clicking on an element, use the list_elements_on_screen tool to find the coordinates.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| device | Yes | The device identifier to use. Use mobile_list_available_devices to find which devices are available to you. | |
| x | Yes | The x coordinate to click on the screen, in pixels | |
| y | Yes | The y coordinate to click on the screen, in pixels | |
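
For illustration only, the snippet below shows roughly the JSON-RPC `tools/call` request an MCP client would send to invoke this tool. The device serial and coordinates are placeholders, not values taken from this repository.

```typescript
// Illustrative tools/call request for this tool (MCP JSON-RPC framing).
// The device serial and coordinates below are placeholders.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "mobile_click_on_screen_at_coordinates",
    arguments: { device: "emulator-5554", x: 540, y: 1200 },
  },
};
```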
Implementation Reference
- src/server.ts:263-275 (registration): Registers the MCP tool `mobile_click_on_screen_at_coordinates` with a Zod input schema for the x and y coordinates (numbers, in pixels). The handler requires a selected robot/device, delegates the tap to the platform-specific Robot implementation, and returns a confirmation message.

  ```typescript
  tool(
    "mobile_click_on_screen_at_coordinates",
    "Click on the screen at given x,y coordinates. If clicking on an element, use the list_elements_on_screen tool to find the coordinates.",
    {
      x: z.number().describe("The x coordinate to click on the screen, in pixels"),
      y: z.number().describe("The y coordinate to click on the screen, in pixels"),
    },
    async ({ x, y }) => {
      requireRobot();
      await robot!.tap(x, y);
      return `Clicked on screen at coordinates: ${x}, ${y}`;
    }
  );
  ```
- src/robot.ts:104-105 (interface): Declares the tap method on the Robot interface, which all platform implementations (Android, iOS, Simulator) must provide. This is the core abstraction used by the tool handler; a minimal stub illustrating the contract appears after this list.

  ```typescript
  tap(x: number, y: number): Promise<void>;
  ```
- src/android.ts:304-306 (handler): Android implementation; runs the ADB shell command `input tap` with the provided x,y coordinates (an approximate command-line equivalent is sketched after this list).

  ```typescript
  public async tap(x: number, y: number): Promise<void> {
    this.adb("shell", "input", "tap", `${x}`, `${y}`);
  }
  ```
- src/ios.ts:165-168 (handler): iOS physical-device implementation; delegates the tap to WebDriverAgent after ensuring the tunnel and WDA are running.

  ```typescript
  public async tap(x: number, y: number): Promise<void> {
    const wda = await this.wda();
    await wda.tap(x, y);
  }
  ```
- src/webdriver-agent.ts:148-173 (handler): Core tap implementation for iOS and the Simulator via WebDriverAgent; posts a pointer action sequence (move to x,y, down, 100 ms pause, up) to the WDA /actions endpoint within a session.

  ```typescript
  public async tap(x: number, y: number) {
    await this.withinSession(async sessionUrl => {
      const url = `${sessionUrl}/actions`;
      await fetch(url, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          actions: [
            {
              type: "pointer",
              id: "finger1",
              parameters: { pointerType: "touch" },
              actions: [
                { type: "pointerMove", duration: 0, x, y },
                { type: "pointerDown", button: 0 },
                { type: "pause", duration: 100 },
                { type: "pointerUp", button: 0 }
              ]
            }
          ]
        }),
      });
    });
  }
  ```
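
As a minimal sketch (not code from the repository), a new backend only needs to provide an async tap that resolves once the gesture has been dispatched; the real Robot interface in src/robot.ts declares many more methods than shown here.

```typescript
// Minimal sketch of a backend satisfying the tap contract; the real Robot
// interface declares many more methods than the one shown here.
interface TapCapable {
  tap(x: number, y: number): Promise<void>;
}

class NoopRobot implements TapCapable {
  async tap(x: number, y: number): Promise<void> {
    // A real backend would forward the tap to a device; this stub only logs it.
    console.log(`tap at (${x}, ${y})`);
  }
}
```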
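For the Android path, the tap likely resolves to a command along the lines of the sketch below. This assumes the backend's adb() helper shells out to the adb binary with a -s serial flag; the helper name, serial, and coordinates are illustrative, not taken from the repository.

```typescript
import { execFileSync } from "node:child_process";

// Approximate command the Android backend issues, assuming its adb() helper
// shells out to the adb binary. Serial and coordinates are placeholders.
function tapViaAdb(serial: string, x: number, y: number): void {
  execFileSync("adb", ["-s", serial, "shell", "input", "tap", `${x}`, `${y}`]);
}

tapViaAdb("emulator-5554", 540, 1200);
```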
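The pointer sequence in the WebDriverAgent handler (pointerMove, pointerDown, a 100 ms pause, pointerUp) follows the W3C WebDriver actions format; the pause presumably keeps the press long enough to register as a deliberate tap rather than an instantaneous down/up.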