eval_renderer

Execute JavaScript code within Electron app pages to interact with web content, extract data, or manipulate DOM elements using provided arguments.

Instructions

Evaluate JavaScript in the renderer (page) context. Pass a FUNCTION BODY — use `return` to yield a value. Example: `return document.title`. Supports async/await. Same contract as eval_main. Pass an arbitrary JSON-serializable value to be exposed as the `arg` variable inside the body.
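As a rough sketch of this contract, the body-plus-`arg` convention can be simulated locally with the `Function` constructor. The wrapper below is hypothetical — the real tool injects the body into the Electron page, and an async body would need an async wrapper — but it illustrates how a function body string sees `arg` as a local and yields a value via `return`:

```javascript
// Hypothetical local simulation of the eval_renderer contract:
// the tool receives a function BODY string plus an optional JSON arg,
// wraps the body in a function, and returns whatever `return` yields.
function simulateEval(js, arg) {
  const fn = new Function("arg", js); // body sees `arg` as a local
  return fn(arg);
}

// A body that uses both `arg` and `return`, as the description requires:
const result = simulateEval(
  "return arg.items.filter(x => x > 2);",
  { items: [1, 2, 3, 4] }
);
console.log(result); // → [3, 4]
```

Note that `new Function` alone does not support top-level `await`; the real tool's async/await support implies the injected body is wrapped in an async function before execution.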

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| `js` | Yes | Function body. Use `return` to yield a value. `arg` is available as a local. | |
| `arg` | No | Arbitrary JSON-serializable value exposed as `arg` inside the body. Objects, arrays, primitives, and null all work. | |
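A hypothetical arguments object for a call to this tool, following the two schema fields above (the selector and body shown are illustrative, not from the source):

```javascript
// Example eval_renderer arguments: `js` is a function BODY string
// (not a full function declaration), `arg` is any JSON-serializable value.
const args = {
  js: "const el = document.querySelector(arg.selector); " +
      "return el ? el.textContent : null;",
  arg: { selector: "h1" },
};

// Both fields must survive JSON serialization to reach the tool.
const roundTripped = JSON.parse(JSON.stringify(args));
console.log(roundTripped.arg.selector); // → "h1"
```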
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: supports async/await, requires a function body with return, and exposes an arg variable. However, it misses details like error handling, execution time limits, or security implications, which are important for a JavaScript evaluation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured: it starts with the core purpose, provides usage instructions, includes an example, and notes key features (async/await support, contract similarity). Every sentence adds value without redundancy, making it easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (evaluating JavaScript in a renderer context) and no annotations or output schema, the description is moderately complete. It covers the basic operation and parameters but lacks details on return values, error cases, or performance considerations, which could be crucial for safe and effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters (js and arg) thoroughly. The description adds some semantic context by explaining that js is a 'FUNCTION BODY' and arg is 'JSON-serializable' and exposed as a local variable, but this mostly reinforces the schema. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Evaluate JavaScript in the renderer (page) context.' It specifies the verb ('evaluate'), resource ('JavaScript'), and context ('renderer (page) context'), distinguishing it from sibling tools like eval_main by explicitly mentioning the execution context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context by stating 'Pass a FUNCTION BODY — use `return` to yield a value' and 'Same contract as eval_main,' which implies when to use it (for renderer context evaluation) and references an alternative (eval_main). However, it lacks explicit exclusions or detailed comparisons with other siblings like check or wait_for.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
