
mcp-netcoredbg

by AerialByte

evaluate

Compute expression values in the current debug context to inspect variables and execution state. Specify a stack frame ID to target a specific scope.

Instructions

Evaluate an expression in the current debug context

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| expression | Yes | Expression to evaluate | |
| frameId | No | Stack frame ID for context (from stack_trace) | |
| sessionId | No | Session ID (defaults to current session). Use list_sessions to see available sessions. | |
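For orientation, this is roughly what an MCP client's `tools/call` request for this tool looks like (JSON-RPC 2.0 framing per the MCP specification; the `id`, expression, and frame values below are illustrative, not taken from this server):

```typescript
// Illustrative MCP tools/call payload invoking the evaluate tool.
// The expression and frameId values are made up for this example.
const callRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "evaluate",
    arguments: {
      expression: "user.Name", // required
      frameId: 1000,           // optional, from a prior stack_trace call
    },
  },
};

console.log(JSON.stringify(callRequest.params.arguments));
```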

Implementation Reference

  • The main handler function for the 'evaluate' tool. Receives the expression, frameId, and sessionId, calls session.evaluate(), and formats the result as a text response.
    async ({ expression, frameId, sessionId }) => {
      const session = sessionManager.getSession(sessionId);
      const result = await session.evaluate(expression, frameId);
    
      const type = result.type ? ` (${result.type})` : "";
      return textResponse(`${sessionPrefix(session.id)}${expression}${type} = ${result.result}`);
    }
  • The MCP tool registration for 'evaluate'. Registers the tool with name, description, Zod schema for parameters (expression, frameId, sessionId), and the handler function.
    server.tool(
      "evaluate",
      "Evaluate an expression in the current debug context",
      {
        expression: z.string().describe("Expression to evaluate"),
        frameId: z
          .number()
          .optional()
          .describe("Stack frame ID for context (from stack_trace)"),
        sessionId: sessionIdParam,
      },
      async ({ expression, frameId, sessionId }) => {
        const session = sessionManager.getSession(sessionId);
        const result = await session.evaluate(expression, frameId);
    
        const type = result.type ? ` (${result.type})` : "";
        return textResponse(`${sessionPrefix(session.id)}${expression}${type} = ${result.result}`);
      }
    );
  • Session-level helper that wraps the DAP client evaluate call. Validates client is connected before forwarding the request.
    async evaluate(expression: string, frameId?: number): Promise<{ result: string; type?: string; variablesReference: number }> {
      const client = this.requireClient();
      return client.evaluate(expression, frameId);
    }
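`requireClient` itself is referenced but not shown. One plausible sketch of the guard, assuming the session holds an optional client reference and throws when no adapter is attached (class shape, names, and error message are assumptions):

```typescript
// Minimal interface for the DAP client, reduced to what evaluate needs.
interface DapClient {
  evaluate(
    expression: string,
    frameId?: number
  ): Promise<{ result: string; type?: string; variablesReference: number }>;
}

// Hypothetical session sketch; the real class surely carries more state.
class DebugSession {
  constructor(public id: string, private client?: DapClient) {}

  // Throw early with a clear message instead of letting a missing client
  // surface as an opaque TypeError deeper in the call chain.
  private requireClient(): DapClient {
    if (!this.client) {
      throw new Error(`Session ${this.id} has no connected debug adapter`);
    }
    return this.client;
  }

  evaluate(expression: string, frameId?: number) {
    return this.requireClient().evaluate(expression, frameId);
  }
}
```

Keeping the guard synchronous means a disconnected session fails fast, before any DAP traffic is attempted.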
  • DAP client implementation that sends the actual 'evaluate' request to the debug adapter. Handles the protocol-level communication and returns the response body.
    async evaluate(
      expression: string,
      frameId?: number,
      context: "watch" | "repl" | "hover" = "repl"
    ): Promise<{ result: string; type?: string; variablesReference: number }> {
      const response = await this.sendRequest("evaluate", {
        expression,
        frameId,
        context,
      });
    
      const body = response.body as {
        result: string;
        type?: string;
        variablesReference: number;
      };
      return body;
    }
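At the protocol level, `sendRequest` serializes a Debug Adapter Protocol request (each DAP message is framed with a `Content-Length` header on the wire). The round trip looks roughly like this, per the DAP specification; the `seq` numbers and values are illustrative:

```typescript
// Illustrative DAP evaluate request/response pair (values made up).
const dapRequest = {
  seq: 5,
  type: "request",
  command: "evaluate",
  arguments: { expression: "myList.Count", frameId: 1000, context: "repl" },
};

const dapResponse = {
  seq: 12,
  type: "response",
  request_seq: 5,
  success: true,
  command: "evaluate",
  body: {
    result: "3",
    type: "int",
    // Per the DAP spec, a non-zero value means the result is structured
    // and can be expanded with a follow-up "variables" request;
    // 0 indicates a scalar result.
    variablesReference: 0,
  },
};
```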
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry full behavioral disclosure. It fails to mention critical traits: whether evaluation can cause side effects or mutate state, what happens on expression errors (exceptions/syntax errors), or the format of return values. 'Debug context' implies session dependency but doesn't specify behavior when no session exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at one sentence. While efficiently worded without redundancy, it is arguably too minimal given the tool has 3 parameters, no annotations, and no output schema—leaving significant behavioral gaps that conciseness cannot compensate for.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a debugging tool that executes arbitrary expressions (potentially with side effects), the description lacks essential context: output format, error handling behavior, side effect warnings, and interaction with the debug target's state. With no output schema and no annotations, the description should disclose what evaluation returns and its safety profile.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with helpful cross-references (frameId references 'stack_trace', sessionId references 'list_sessions'). The main description adds no specific parameter semantics beyond the schema, but with complete schema coverage, the baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a clear verb ('Evaluate') and resource ('expression') with context ('current debug context'). However, it doesn't differentiate from sibling tools like 'variables' (which retrieves state) or 'invoke' (which calls functions), leaving ambiguity about what distinguishes an 'evaluation' from other debug operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives. For example, it doesn't clarify when to use 'evaluate' versus 'variables' for inspecting state, or whether 'evaluate' is preferred over 'invoke' for function calls. No prerequisites or exclusion criteria are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
