Intercept Electron App

intercept_electron

Launch Electron applications while intercepting their HTTP(S) traffic for debugging and inspection purposes.

Instructions

Launch an Electron application with all its HTTP(S) traffic intercepted. Use get_interceptor_metadata with id "electron" to list available Electron apps.
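The suggested two-step flow (discover available apps, then launch one) might look like the following from an agent's point of view. This is a sketch: `callTool` is a hypothetical stand-in for whatever invocation method your MCP client framework provides, and the port and application path are placeholders.

```typescript
// Hypothetical MCP client invocation helper; a real client would send a
// JSON-RPC tools/call request to the server instead of echoing the payload.
type ToolArgs = Record<string, unknown>;

async function callTool(name: string, args: ToolArgs): Promise<ToolArgs> {
  // Stubbed for illustration so the call sequence is visible.
  return { tool: name, args };
}

async function main() {
  // 1. Discover which Electron apps the interceptor knows about.
  await callTool('get_interceptor_metadata', { id: 'electron' });

  // 2. Launch one of them with its traffic routed through the proxy.
  await callTool('intercept_electron', {
    proxyPort: 8000, // placeholder: whichever port the proxy listens on
    pathToApplication: '/Applications/Slack.app', // placeholder path
  });
}

main();
```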

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| proxyPort | Yes | Proxy port to route traffic through | |
| pathToApplication | Yes | Path to the Electron application to launch | |
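For illustration, the arguments have the following shape. The interface below simply mirrors the schema above; the example values are placeholders, not defaults provided by the tool.

```typescript
// Shape of intercept_electron's arguments, mirroring the input schema.
interface InterceptElectronInput {
  proxyPort: number;          // port the intercepting proxy listens on
  pathToApplication: string;  // path to the Electron app to launch
}

// Placeholder values for a typical invocation.
const example: InterceptElectronInput = {
  proxyPort: 8000,
  pathToApplication: '/Applications/Slack.app',
};
```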

Implementation Reference

  • src/index.ts:315-327 (registration)
    Registration of the 'intercept_electron' tool in the MCP server.
    server.registerTool(
      'intercept_electron',
      {
        title: 'Intercept Electron App',
        description: 'Launch an Electron application with all its HTTP(S) traffic intercepted. Use get_interceptor_metadata with id "electron" to list available Electron apps.',
        inputSchema: z.object({
          proxyPort: z.number().describe('Proxy port to route traffic through'),
          pathToApplication: z.string().describe('Path to the Electron application to launch'),
        }),
      },
      async ({ proxyPort, pathToApplication }) =>
        jsonResult(await client.activateInterceptor('electron', proxyPort, { pathToApplication }))
    );
  • The handler implementation that makes the HTTP request to the HTTP Toolkit server to activate an interceptor.
    async activateInterceptor(
      id: string,
      proxyPort: number,
      options?: unknown
    ): Promise<{ result: { success: boolean; metadata?: unknown } }> {
      return this.request(
        'POST',
        `/interceptors/${encodeURIComponent(id)}/activate/${proxyPort}`,
        options || {}
      );
    }
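The route shape used by the handler can be isolated as below. The `activationPath` helper is illustrative, not part of the actual server code; it only reproduces the template string from the method above, including the URI-encoding of the interceptor id.

```typescript
// Builds the activation endpoint path used by activateInterceptor.
// (Illustrative helper; the real code inlines this template string.)
function activationPath(id: string, proxyPort: number): string {
  return `/interceptors/${encodeURIComponent(id)}/activate/${proxyPort}`;
}

console.log(activationPath('electron', 8000));
// /interceptors/electron/activate/8000
```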

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions launching and intercepting traffic, but doesn't cover critical aspects like whether this requires specific permissions, if it's destructive (e.g., modifies app behavior), rate limits, or what happens on failure. This leaves significant gaps for a tool that likely involves system-level operations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences: the first states the core purpose, and the second provides a usage tip. It's front-loaded and wastes no words, though the second sentence could be more integrated into the main purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (launching an application and intercepting its traffic), the absence of annotations and an output schema, and only two parameters, the description is incomplete. It doesn't explain what the tool returns, potential side effects, or error handling, making it inadequate for safe and effective use by an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('proxyPort' and 'pathToApplication'). The description adds no additional meaning beyond what's in the schema, such as example values or constraints. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Launch an Electron application with all its HTTP(S) traffic intercepted.' It specifies the verb ('Launch'), resource ('Electron application'), and key behavior ('traffic intercepted'). However, it doesn't explicitly differentiate from siblings like 'intercept_chrome' or 'intercept_firefox' beyond mentioning Electron specifically, which is why it's not a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage guidance by referencing 'get_interceptor_metadata with id "electron" to list available Electron apps,' suggesting this tool should be used after identifying apps. However, it lacks explicit when-to-use vs. alternatives (e.g., compared to 'intercept_existing_terminal' or 'intercept_fresh_terminal'), and no exclusions are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
