AI Agent Template MCP Server

by bswa006

generate_tests_for_coverage

Generate intelligent tests to achieve 80%+ code coverage for target files using Jest, Vitest, or Mocha frameworks. Includes options for edge cases and accessibility testing.

Instructions

Generate intelligent tests to achieve 80%+ coverage

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| targetFile | Yes | File to generate tests for | — |
| testFramework | No | Test framework to use | — |
| coverageTarget | No | Target coverage percentage | 80 |
| includeEdgeCases | No | Include edge case tests | — |
| includeAccessibility | No | Include accessibility tests for components | — |
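
A request matching this schema might look like the following sketch (the file path and option values are illustrative, not taken from the server):

```typescript
// Hypothetical arguments object for generate_tests_for_coverage.
// Only targetFile is required; the other fields fall back to tool defaults.
const args = {
  targetFile: "src/components/Button.tsx", // illustrative path
  testFramework: "vitest",                 // one of: jest | vitest | mocha
  coverageTarget: 90,                      // defaults to 80 when omitted
  includeEdgeCases: true,
  includeAccessibility: true,
};

console.log(JSON.stringify(args));
```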

Implementation Reference

  • Main handler function that implements the tool logic: reads and analyzes the target file using Babel AST, generates test code for specified framework, estimates coverage, and provides suggestions to reach target coverage.
    export async function generateTestsForCoverage(
      config: TestGenerationConfig
    ): Promise<TestGenerationResult> {
      const result: TestGenerationResult = {
        success: false,
        testCode: '',
        coverage: {
          estimated: 0,
          functions: 0,
          branches: 0,
          lines: 0,
        },
        testCategories: {
          unit: 0,
          integration: 0,
          edgeCases: 0,
          errorHandling: 0,
          accessibility: 0,
        },
        suggestions: [],
      };
    
      try {
        // Read target file
        if (!existsSync(config.targetFile)) {
          throw new Error(`Target file not found: ${config.targetFile}`);
        }
    
        const fileContent = readFileSync(config.targetFile, 'utf-8');
        const fileType = detectFileType(fileContent);
        
        // Parse the file to understand structure
        const ast = parseCode(fileContent);
        const analysis = analyzeCode(ast, fileType);
        
        // Generate tests based on analysis
        const testFramework = config.testFramework || detectTestFramework();
        const tests = generateTests(analysis, testFramework, config);
        
        // Calculate coverage estimation
        const coverage = estimateCoverage(analysis, tests);
        
        // Generate suggestions for reaching target coverage
        const suggestions = generateSuggestions(
          coverage,
          config.coverageTarget || 80,
          analysis
        );
        
        result.success = true;
        result.testCode = tests.code;
        result.coverage = coverage;
        result.testCategories = tests.categories;
        result.suggestions = suggestions;
    
      } catch (error) {
        result.success = false;
        result.suggestions = [`Error generating tests: ${error}`];
      }
    
      return result;
    }
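
Note that on failure the handler packs the error message into `suggestions` rather than throwing. A minimal sketch of how a caller might consume the result (the `TestGenerationResult` shape below is reconstructed from the handler above; `summarize` is a hypothetical helper):

```typescript
// Reconstructed result shape, mirroring the fields set by generateTestsForCoverage.
interface TestGenerationResult {
  success: boolean;
  testCode: string;
  coverage: { estimated: number; functions: number; branches: number; lines: number };
  suggestions: string[];
}

// Hypothetical consumer: distinguishes the error path from a successful run.
function summarize(result: TestGenerationResult): string {
  if (!result.success) {
    // On failure the handler stores the stringified error in suggestions.
    return `failed: ${result.suggestions.join("; ")}`;
  }
  return `estimated coverage ${result.coverage.estimated}%`;
}

const failed: TestGenerationResult = {
  success: false,
  testCode: "",
  coverage: { estimated: 0, functions: 0, branches: 0, lines: 0 },
  suggestions: ["Error generating tests: Target file not found: missing.ts"],
};

console.log(summarize(failed));
```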
  • Input schema definition for the tool, specifying parameters like targetFile, testFramework, coverageTarget, etc.
      name: 'generate_tests_for_coverage',
      description: 'Generate intelligent tests to achieve 80%+ coverage',
      inputSchema: {
        type: 'object',
        properties: {
          targetFile: {
            type: 'string',
            description: 'File to generate tests for',
          },
          testFramework: {
            type: 'string',
            enum: ['jest', 'vitest', 'mocha'],
            description: 'Test framework to use',
          },
          coverageTarget: {
            type: 'number',
            description: 'Target coverage percentage (default: 80)',
          },
          includeEdgeCases: {
            type: 'boolean',
            description: 'Include edge case tests',
          },
          includeAccessibility: {
            type: 'boolean',
            description: 'Include accessibility tests for components',
          },
        },
        required: ['targetFile'],
      },
    },
  • Tool registration and dispatch in the main switch statement: parses arguments with Zod schema matching the tool schema and calls the handler function.
    case 'generate_tests_for_coverage': {
      const params = z.object({
        targetFile: z.string(),
        testFramework: z.enum(['jest', 'vitest', 'mocha']).optional(),
        coverageTarget: z.number().optional(),
        includeEdgeCases: z.boolean().optional(),
        includeAccessibility: z.boolean().optional(),
      }).parse(args);
      
      const result = await generateTestsForCoverage(params);
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify(result, null, 2),
          },
        ],
      };
    }
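
Because `z.object(...).parse(args)` throws on invalid input, a call missing `targetFile` never reaches the handler. A dependency-free sketch of the equivalent checks (illustrative only; the server itself relies on Zod):

```typescript
// Dependency-free sketch of the validation the Zod schema performs.
// Hypothetical helper for illustration; not part of the server code.
const FRAMEWORKS: readonly string[] = ["jest", "vitest", "mocha"];

function validateArgs(args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  if (typeof args.targetFile !== "string") {
    errors.push("targetFile is required and must be a string");
  }
  if (args.testFramework !== undefined && !FRAMEWORKS.includes(args.testFramework as string)) {
    errors.push("testFramework must be one of jest, vitest, mocha");
  }
  if (args.coverageTarget !== undefined && typeof args.coverageTarget !== "number") {
    errors.push("coverageTarget must be a number");
  }
  return errors;
}

console.log(validateArgs({ targetFile: "src/utils.ts" }).length); // 0 errors
```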
  • Imports the tool definitions array used for listing tools.
    import { toolDefinitions } from './tool-definitions.js';
  • Imports the handler function for use in tool dispatch.
    import { generateTestsForCoverage } from './testing/test-generator.js';
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It hints at 'intelligent' test generation but fails to specify key traits: whether this is a read-only analysis or a write operation that creates files, what permissions are needed, how it handles errors, or if there are rate limits. For a tool with 5 parameters and no annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
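
One way to close this gap is MCP tool annotations. A hedged sketch of what this tool could declare (the hint values below are assumptions inferred from the handler shown above, which reads the target file and returns test code as text rather than writing files — verify against the implementation):

```typescript
// Sketch: MCP behavioral annotations this tool could declare.
// Hint values are assumptions, not taken from the server.
const annotations = {
  readOnlyHint: true,    // handler only reads the target file; tests are returned, not written
  destructiveHint: false, // no files are modified or deleted
  idempotentHint: true,   // same input file should yield the same generated tests
  openWorldHint: false,   // operates on the local filesystem only, no external services
};

console.log(JSON.stringify(annotations));
```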

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('generate intelligent tests') and goal ('achieve 80%+ coverage'). There is no wasted wording or redundancy, making it easy to parse quickly while conveying essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (5 parameters, no annotations, no output schema), the description is incomplete. It doesn't address behavioral aspects like mutation effects, error handling, or output format, nor does it provide usage context relative to siblings. For a tool that likely generates or modifies test files, this leaves critical gaps for an agent to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description adds no additional meaning beyond what's in the schema—it doesn't explain parameter interactions, default behaviors beyond the schema's 'coverageTarget' default, or how 'intelligent' generation relates to the parameters. This meets the baseline for high schema coverage but doesn't enhance understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('generate intelligent tests') and the goal ('to achieve 80%+ coverage'), providing a specific verb and resource. However, it doesn't explicitly differentiate this tool from its many siblings (e.g., 'validate_generated_code' or 'detect_existing_patterns'), which could involve testing-related functions, leaving room for ambiguity about its unique role.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing codebase), exclusions (e.g., not for non-code files), or compare it to sibling tools like 'validate_generated_code', which might overlap in testing contexts. This lack of context makes it unclear when this is the appropriate choice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
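
A hedged sketch of a description that would address these critiques (illustrative rewrite only; the read-only claim is inferred from the handler shown above, and the sibling-tool comparison is an assumption):

```typescript
// Illustrative rewrite only -- not the tool's actual description.
const improvedDescription =
  "Generate unit, edge-case, and accessibility tests targeting 80%+ coverage " +
  "for a single source file. Read-only: returns generated test code as text " +
  "and does not write files. Use this to create new tests; use " +
  "validate_generated_code to check code that already exists.";

console.log(improvedDescription);
```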

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/bswa006/mcp-context-manager'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.