wait_until_ready

Ensures WhatsApp Web is fully authenticated and operational before proceeding with chat automation tasks.

Instructions

Wait until WhatsApp Web is fully authenticated and ready for use.

Input Schema

Name             Required   Description   Default
timeout_seconds  No         —             —

Implementation Reference

  • The core logic for checking if WhatsApp Web is ready, including timeout handling and state detection.
    async def wait_until_ready(self, timeout_seconds: int | None = None) -> dict[str, Any]:
        await self.ensure_started()
        timeout = timeout_seconds or self.settings.startup_timeout_seconds
        deadline = asyncio.get_running_loop().time() + timeout
    
        while asyncio.get_running_loop().time() < deadline:
            state = await self._detect_state()
            if state == "ready":
                return {"state": state, "ready": True}
            await asyncio.sleep(2)
    
        state = await self._detect_state()
        return {
            "state": state,
            "ready": state == "ready",
            "message": "WhatsApp Web did not become ready before timeout.",
        }
  • Registration of the 'wait_until_ready' tool within the MCP server.
    "wait_until_ready": ToolDefinition(
        name="wait_until_ready",
        description="Wait until WhatsApp Web is fully authenticated and ready for use.",
        input_schema={
            "type": "object",
            "properties": {
                "timeout_seconds": {"type": "integer", "minimum": 5, "maximum": 600},
            },
            "additionalProperties": False,
        },
        handler=lambda args: self.client.wait_until_ready(args.get("timeout_seconds")),
    ),
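The implementation above follows a standard deadline-based polling pattern: compute a deadline from the event loop clock, re-check state on an interval, and report the final state if the deadline passes. A minimal, self-contained sketch of that pattern (with a stubbed, hypothetical state detector standing in for `_detect_state`) looks like this:

```python
import asyncio
from typing import Any, Awaitable, Callable

async def wait_until_ready(
    detect_state: Callable[[], Awaitable[str]],
    timeout_seconds: float = 10,
    poll_interval: float = 0.01,
) -> dict[str, Any]:
    """Poll detect_state until it reports 'ready' or the deadline passes."""
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout_seconds
    while loop.time() < deadline:
        state = await detect_state()
        if state == "ready":
            return {"state": state, "ready": True}
        await asyncio.sleep(poll_interval)
    # Deadline passed: report whatever state we last observe.
    state = await detect_state()
    return {
        "state": state,
        "ready": state == "ready",
        "message": "did not become ready before timeout",
    }

# Demo: a fake detector that becomes ready on its third poll.
calls = {"n": 0}

async def fake_detect() -> str:
    calls["n"] += 1
    return "ready" if calls["n"] >= 3 else "qr_pending"

result = asyncio.run(wait_until_ready(fake_detect, timeout_seconds=1))
print(result["ready"])  # True
```

Note the design choice shared with the real handler: on timeout the function still returns a structured result rather than raising, so a calling agent can inspect `state` (e.g. a pending QR scan) and decide how to recover.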
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions waiting for authentication and readiness, which implies this is a blocking operation, but it doesn't detail what 'fully authenticated and ready' entails (e.g., UI loaded, contacts synced), potential timeouts, error handling, or side effects. This leaves significant gaps in understanding the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with zero wasted words—it directly states the tool's purpose without redundancy. It's appropriately sized and front-loaded, making it easy to grasp immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a blocking wait operation with no annotations, no output schema, and minimal parameter documentation), the description is incomplete. It doesn't explain what 'ready for use' means, what happens on timeout or failure, or what the agent should expect after invocation, leaving key contextual gaps for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has one parameter ('timeout_seconds') with no description text, so the schema provides no semantic context beyond its type and bounds. The tool description doesn't mention parameters at all, but since there is only one parameter and its purpose (a timeout) is inferable from the name and schema constraints, the omission is less critical. Still, the description adds no meaning beyond what is minimally deducible.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Wait until') and the target state ('WhatsApp Web is fully authenticated and ready for use'), which is specific and unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'get_auth_status' (which checks status rather than waits), leaving room for slight ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when WhatsApp Web needs to be authenticated and ready, but it doesn't provide explicit guidance on when to use this versus alternatives like 'get_auth_status' (for checking status without waiting) or prerequisites (e.g., after launching the browser). It offers some context but lacks clear exclusions or comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
