LoadRunner Cloud MCP Server

by pbandreddy

test_runs_getTestRunResults

Retrieve detailed performance test results from LoadRunner Cloud using a specific test run ID to analyze execution data and metrics.

Instructions

Get test run results from LoadRunner Cloud.

Input Schema

Name    Required   Description                Default
runId   Yes        The ID of the test run.    —
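
A minimal sketch of invoking this tool over MCP; the request shape follows the MCP tools/call method, and the runId value is hypothetical:

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "tools/call",
      "params": {
        "name": "test_runs_getTestRunResults",
        "arguments": { "runId": "12345" }
      }
    }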

Implementation Reference

  • The core execution function that handles the tool logic: authenticates, constructs the API URL for test run results, makes a GET request to LoadRunner Cloud, handles responses and errors.
    const executeFunction = async ({ runId }) => {
      const baseUrl = process.env.LRC_BASE_URL;
      const tenantId = process.env.LRC_TENANT_ID;
      const token = await getAuthToken();
      try {
        // Construct the URL with query parameters
        const url = new URL(`${baseUrl}/test-runs/${runId}/results`);
        url.searchParams.append('TENANTID', tenantId);
    
        // Set up headers for the request
        const headers = {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${token}`
        };
    
        // Perform the fetch request
        const response = await fetch(url.toString(), {
          method: 'GET',
          headers
        });
    
        // Check if the response was successful
        if (!response.ok) {
          const text = await response.text();
          let message = text;
          try {
            // Prefer the structured error body when the API returns JSON
            message = JSON.stringify(JSON.parse(text));
          } catch (jsonErr) {
            // Not JSON; log and fall back to the raw text
            console.error('Non-JSON error response:', text);
          }
          throw new Error(message);
        }
    
        // Parse and return the response data
        const text = await response.text();
        try {
          const data = JSON.parse(text);
          return data;
        } catch (jsonErr) {
          // Not JSON, log the raw text
          console.error('Non-JSON success response:', text);
          return { error: 'Received non-JSON response from API', raw: text };
        }
      } catch (error) {
        console.error('Error retrieving test run results:', error);
        return { error: 'An error occurred while retrieving test run results.' };
      }
    };
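    A hypothetical invocation of this handler (the runId value is made up; it assumes LRC_BASE_URL and LRC_TENANT_ID are set and getAuthToken() returns a valid token):
    const results = await executeFunction({ runId: '12345' });
    if (results.error) {
      console.error('Lookup failed:', results.error);
    } else {
      console.log('Run results:', results);
    }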
  • The apiTool object defining the tool's schema (parameters and required fields), name, description, and referencing the handler function. This is the primary tool definition.
    const apiTool = {
      function: executeFunction,
      definition: {
        type: 'function',
        function: {
          name: 'test_runs_getTestRunResults',
          description: 'Get test run results from LoadRunner Cloud.',
          parameters: {
            type: 'object',
            properties: {
              runId: {
                type: 'string',
                description: 'The ID of the test run.'
              }
            },
            required: ['runId']
          }
        }
      }
    };
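    One way a host could validate arguments against this schema before dispatch (Ajv and the sample input are assumptions; this server may validate differently):
    import Ajv from 'ajv';

    const validate = new Ajv().compile(apiTool.definition.function.parameters);
    const args = { runId: '12345' }; // hypothetical input
    if (validate(args)) {
      const output = await apiTool.function(args);
      console.log(output);
    } else {
      console.error('Invalid arguments:', validate.errors);
    }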
  • lib/tools.js:7-16 (registration)
    Dynamic registration mechanism: discoverTools imports the apiTool export from every file listed in toolPaths and resolves them into the tools array consumed by the MCP server.
    export async function discoverTools() {
      const toolPromises = toolPaths.map(async (file) => {
        const module = await import(`../tools/${file}`);
        return {
          ...module.apiTool,
          path: file,
        };
      });
      return Promise.all(toolPromises);
    }
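    A minimal consumption sketch at server startup (logging only; the actual MCP registration call is server-specific and not shown):
    const tools = await discoverTools();
    for (const tool of tools) {
      console.log(`Loaded ${tool.definition.function.name} from ${tool.path}`);
    }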
  • tools/paths.js:5-5 (registration)
    The toolPaths array entry that includes this specific tool's file path, enabling its dynamic loading during tool discovery.
    'loadrunner-cloud/load-runner-cloud-api/test-runs-get-test-run-summary.js',
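    For context, tools/paths.js presumably exports an array along these lines (entries other than the one shown are assumptions):
    export const toolPaths = [
      'loadrunner-cloud/load-runner-cloud-api/test-runs-get-test-run-summary.js',
      // ...other tool modules discovered the same way
    ];
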
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states only the action ('Get test run results') without indicating whether the operation is read-only, whether authentication is required, what rate limits apply, how errors are surfaced, or what format the returned data takes. This is a significant gap for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
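
A sketch of the behavioral disclosure this calls for, expressed as MCP-style tool annotations (field names follow the MCP specification; how they would wire into this server's apiTool shape is an assumption):

    const annotations = {
      readOnlyHint: true,     // only fetches results; mutates nothing
      destructiveHint: false,
      idempotentHint: true,   // repeat calls with the same runId are safe
      openWorldHint: true     // talks to the external LoadRunner Cloud API
    };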

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It is appropriately sized and front-loaded, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of retrieving test run results, the lack of annotations, and no output schema, the description is incomplete. It doesn't explain what 'results' entail (e.g., performance metrics, logs), how data is structured, or any prerequisites, leaving the agent under-informed for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
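
One possible richer description, using only facts visible in the handler above (wording is illustrative, not from the source):

    const description =
      'Get test run results from LoadRunner Cloud. Read-only GET against ' +
      '/test-runs/{runId}/results; requires LRC_BASE_URL, LRC_TENANT_ID, and a ' +
      'bearer token. Returns the parsed JSON body, or an { error } object on failure.';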

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with 'runId' clearly documented as 'The ID of the test run.' The description adds no parameter semantics beyond this, so it earns the baseline score of 3: the schema does the heavy lifting and the description contributes no extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'test run results from LoadRunner Cloud', which specifies what the tool does. However, it doesn't distinguish this tool from sibling tools like 'test_runs_getRecentTestRuns' or 'test_runs_getTestRunTransactions', which might also retrieve test run-related data, leaving some ambiguity about its unique scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'test_runs_getRecentTestRuns' and 'test_runs_getTestRunTransactions', there's no indication of whether this tool is for specific results, all results, or how it differs, leaving the agent to guess based on context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
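
As an illustration of the "use X instead of Y" pattern, the description could disambiguate against the siblings named above (their exact semantics are inferred from their names, not confirmed):

    const usage =
      'Use test_runs_getTestRunResults when you already have a run ID and need ' +
      'its result data. Use test_runs_getRecentTestRuns to discover run IDs, and ' +
      'test_runs_getTestRunTransactions for per-transaction metrics.';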

