
LODA API MCP Server

Official
by loda-lang

eval_program

Execute LODA assembly programs to compute mathematical integer sequences from OEIS, generating specified terms with optional offset for sequence analysis.

Instructions

Evaluate a LODA program and return sequence terms.

Input Schema

Name | Required | Description       | Default
code | Yes      | LODA program code |
o    | No       | Offset (optional) |
t    | No       | Number of terms   |

Implementation Reference

  • The primary handler function for the 'eval_program' tool. It validates the input arguments, calls the LODA API client to evaluate the program, formats the result or error, and returns it in the MCP response format.
    private async handleEvalProgram(args: { code: string; t?: number; o?: number }) {
      const { code, t, o } = args;
      if (!code || typeof code !== 'string') {
        throw new McpError(ErrorCode.InvalidParams, "code is required");
      }
      const result = await this.apiClient.evalProgram(code, t, o);
      return {
        content: [
          {
            type: "text",
            text:
              result.status === "success"
                ? `Result: ${result.terms.join(', ')}`
                : `Error: ${result.message}${result.terms && result.terms.length ? `\nPartial result: ${result.terms.join(', ')}` : ''}`
          }
        ],
        ...result
      };
    }
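The success/error formatting in the handler above can be isolated into a small pure function. The sketch below mirrors the handler's ternary; the `EvalResult` shape is an assumption inferred from how the handler reads the result, not the server's actual type:

```typescript
// Hypothetical result shape inferred from the handler's usage (illustrative only).
type EvalResult = { status: string; terms?: number[]; message?: string };

// Mirrors the handler's ternary: success yields the term list,
// failure yields the error message plus any partial terms.
function formatEvalText(result: EvalResult): string {
  return result.status === "success"
    ? `Result: ${result.terms?.join(", ") ?? ""}`
    : `Error: ${result.message}${
        result.terms && result.terms.length
          ? `\nPartial result: ${result.terms.join(", ")}`
          : ""
      }`;
}

console.log(formatEvalText({ status: "success", terms: [0, 1, 1, 2, 3] })); // "Result: 0, 1, 1, 2, 3"
```

Factoring the formatting out this way makes the partial-result branch (error plus whatever terms were computed before the failure) easy to unit-test without a live API.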
  • Input schema definition for the 'eval_program' tool, specifying the required 'code' parameter and optional 't' (terms) and 'o' (offset) parameters with validation rules.
    inputSchema: {
      type: "object",
      properties: {
        code: { type: "string", description: "LODA program code in plain text format." },
t: { type: "number", description: "Number of terms to compute", minimum: 1, maximum: 10000 },
        o: { type: "number", description: "The starting index (offset) for evaluating the sequence program. Overrides #offset directive or defaults to 0." }
      },
      required: ["code"],
      additionalProperties: false
    }
  • src/index.ts:415-416 (registration)
    Switch statement case that registers and routes incoming calls to the 'eval_program' tool handler within the CallToolRequestSchema handler.
    case "eval_program":
      return this.handleEvalProgram(safeArgs as { code: string; t?: number; o?: number });
  • Supporting API client method in LODAApiClient that performs the actual HTTP POST request to the LODA API endpoint /programs/eval to evaluate the provided program code.
    async evalProgram(code: string, t?: number, o?: number): Promise<Result> {
      const params = new URLSearchParams();
      if (t !== undefined) params.append('t', String(t));
      if (o !== undefined) params.append('o', String(o));
      return this.makeRequest(`/programs/eval${params.size ? '?' + params.toString() : ''}`, {
        method: 'POST',
        headers: { 'Content-Type': 'text/plain' },
        body: code,
      });
    }
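The query-string assembly in `evalProgram` can be exercised on its own. Below is a minimal sketch; the `buildEvalPath` helper is hypothetical, extracted purely for illustration. Note that checking `params.toString()` is a more portable emptiness test than `URLSearchParams.size`, which requires Node 19+:

```typescript
// Hypothetical helper isolating evalProgram's query construction.
function buildEvalPath(t?: number, o?: number): string {
  const params = new URLSearchParams();
  if (t !== undefined) params.append("t", String(t));
  if (o !== undefined) params.append("o", String(o));
  const query = params.toString(); // portable alternative to params.size
  return `/programs/eval${query ? "?" + query : ""}`;
}

console.log(buildEvalPath(8, 1)); // "/programs/eval?t=8&o=1"
```

With both parameters omitted the helper returns the bare `/programs/eval` path, matching the client method's behavior of only appending `?` when at least one parameter is set.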
  • src/index.ts:282-295 (registration)
    Tool registration object in the ListTools response, defining the name, description, and input schema for 'eval_program'.
    {
      name: "eval_program",
      description: "Evaluate a LODA program to generate the corresponding integer sequence. The request body should contain the program code in plain text format. Optionally specify the number of terms and offset.",
      inputSchema: {
        type: "object",
        properties: {
          code: { type: "string", description: "LODA program code in plain text format." },
t: { type: "number", description: "Number of terms to compute", minimum: 1, maximum: 10000 },
          o: { type: "number", description: "The starting index (offset) for evaluating the sequence program. Overrides #offset directive or defaults to 0." }
        },
        required: ["code"],
        additionalProperties: false
      }
    },
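Given the schema above, a caller could pre-validate arguments before invoking the tool. The `validateEvalArgs` helper and the one-instruction LODA snippet below are illustrative assumptions, not part of the server:

```typescript
interface EvalArgs { code: string; t?: number; o?: number }

// Mirrors the schema's constraints: code is a required non-empty string;
// t, when given, must lie in [1, 10000]; o is an unconstrained number.
function validateEvalArgs(args: EvalArgs): string[] {
  const errors: string[] = [];
  if (!args.code || typeof args.code !== "string") {
    errors.push("code is required");
  }
  if (args.t !== undefined && (args.t < 1 || args.t > 10000)) {
    errors.push("t must be between 1 and 10000");
  }
  return errors;
}

// Hypothetical call with a trivial LODA program (assumed syntax).
console.log(validateEvalArgs({ code: "add $0,1", t: 5, o: 0 })); // []
```

Running this check client-side avoids a round trip to the server for requests that would be rejected by the schema anyway.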
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions evaluation and returning terms but lacks details on performance (e.g., computational limits, timeouts), error handling, or output format. This is a significant gap for a tool that likely involves computation, making it inadequate for safe and effective use.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no wasted words. It front-loads the core purpose efficiently, making it easy for an agent to parse quickly without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, and the tool's likely computational nature (evaluating code), the description is insufficient. It doesn't address key aspects like what 'sequence terms' entail, potential limitations (e.g., the t parameter's max of 10000), or error cases, leaving the agent with incomplete context for reliable use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the parameters (code, o, t). The description adds no additional semantic context beyond implying that 'code' is evaluated to produce terms, which the schema already covers. This meets the baseline for high schema coverage without extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Evaluate') and resource ('a LODA program') with the outcome ('return sequence terms'), which is specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_sequence' or 'search_sequences', which might also retrieve sequence terms, so it falls short of a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'get_sequence' and 'search_sequences' that might retrieve sequence data, there's no indication of context, prerequisites, or exclusions, leaving the agent to guess based on the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
