
MCP-Undetected-Chromedriver

by dragons96

browser_evalute

Execute JavaScript expressions directly in the browser console for web scraping, testing, or automation, bypassing anti-bot detection systems to handle complex protections.

Instructions

Evaluate a JavaScript expression in the browser console

Args:
    script: The JavaScript expression to evaluate - required

Input Schema

Name    Required  Description  Default
script  Yes
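For illustration, a hypothetical invocation payload matching this schema (field names follow the general MCP tools/call convention and are an assumption, not taken from this page; the tool name keeps the upstream spelling `browser_evalute`):

```python
# Hypothetical MCP tools/call payload for this tool; only "script" appears
# in the input schema, and it is required.
call_payload = {
    "name": "browser_evalute",  # tool name as registered upstream
    "arguments": {
        "script": "return document.title",  # the one required field
    },
}

print(call_payload["arguments"]["script"])  # -> return document.title
```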

Implementation Reference

  • Full handler implementation for the 'browser_evalute' tool. Includes registration via @mcp.tool(), input schema in function signature and docstring, validation, and execution of JavaScript using driver.execute_script(script). Uses shared helpers for browser management and response formatting.
    @mcp.tool()
    async def browser_evalute(
            script: str,
    ):
        """Evaluate a JavaScript expression in the browser console
    
        Args:
            script: The JavaScript expression to evaluate - required
        """
        assert script, "Script is required"
    
        async def evaluate_handler(driver: uc.Chrome):
            return await create_success_response(
                [
                    "Executed script:",
                    f"{script}",
                    "Result:",
                    f"{driver.execute_script(script)}",
                ]
            )
    
        return await tool.safe_execute(
            ToolContext(webdriver=await ensure_browser()), evaluate_handler
        )
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the action ('evaluate a JavaScript expression') but doesn't describe what happens during evaluation (e.g., execution context, error handling, return values, or side effects like page modifications). This leaves significant gaps in understanding the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences that directly state the purpose and parameter. The structure is clear and front-loaded, though the formatting with 'Args:' could be slightly more polished for optimal readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of evaluating JavaScript in a browser (a potentially powerful and risky operation), the description is incomplete. With no annotations, no output schema, and minimal parameter details, it fails to address critical aspects like security implications, execution context, or what the evaluation returns, making it inadequate for safe and effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds basic semantics for the single parameter ('script: The JavaScript expression to evaluate - required'), which is helpful since schema description coverage is 0%. However, it doesn't provide details on script format, constraints, or examples, offering only minimal value beyond the schema's structural definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
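The kind of detail the critique asks for can be illustrated. Assuming standard Selenium `execute_script` semantics (which this tool's description does not document), a value comes back only when the script contains an explicit `return`:

```python
# Example "script" values with their assumed execute_script behavior.
examples = {
    "return document.title": "returns the page title string",
    "return document.links.length": "returns an integer count",
    "window.scrollTo(0, 0)": "returns None (no explicit return)",
}
for script, note in examples.items():
    print(f"{script!r}: {note}")
```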

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('evaluate') and resource ('JavaScript expression in the browser console'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like browser_click or browser_navigate, which perform different browser interactions rather than JavaScript evaluation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, context for JavaScript evaluation, or how it differs from other browser tools like browser_get_visible_html or browser_select, leaving the agent without usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

