Refactory

refactory_metrics

Calculate before/after metrics including Refactory Score to assess monolith decomposition quality, module health, and test preservation.

Instructions

Calculate before/after metrics and the Refactory Score (0-1). Measures health improvement, module quality, test preservation.

Input Schema

Name        Required  Description                               Default
original    Yes       Path to the original monolith             -
moduleDir   Yes       Directory containing extracted modules    -
testResults No        Path to test results JSON (before/after)  -
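
A minimal call supplies the two required paths; all paths below are illustrative, not taken from the source:

```json
{
  "original": "src/app.js",
  "moduleDir": "src/lib/app",
  "testResults": "reports/test-results.json"
}
```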

Implementation Reference

  • The core handler for the refactory_metrics tool. Calculates before/after metrics: compares original vs decomposed files (lines, functions), computes module clean rate, size reduction, and the Refactory Score (0-1). Score = cleanRate * min(sizeReduction, 1.0).
    async function metrics(args) {
      const originalPath = path.resolve(args.original);
      const moduleDir = path.resolve(args.moduleDir);
    
      const originalSource = fs.readFileSync(originalPath, "utf8");
      const originalLines = originalSource.split("\n").length;
      const originalFunctions = (originalSource.match(/^(?:async\s+)?function\s+\w+/gm) || []).length;
    
      const moduleFiles = fs.readdirSync(moduleDir).filter((f) => f.endsWith(".js"));
      let totalModuleLines = 0;
      let maxModuleLines = 0;
      let modulesClean = 0;
      const moduleStats = [];
    
      for (const file of moduleFiles) {
        const content = fs.readFileSync(path.join(moduleDir, file), "utf8");
        const lines = content.split("\n").length;
        totalModuleLines += lines;
        maxModuleLines = Math.max(maxModuleLines, lines);
    
        let clean = false;
        try { require(path.join(moduleDir, file)); clean = true; modulesClean++; } catch {}
    
        moduleStats.push({ file, lines, clean });
      }
    
      const cleanRate = moduleFiles.length > 0 ? modulesClean / moduleFiles.length : 0;
      const sizeReduction = maxModuleLines < originalLines ? 1 : originalLines / maxModuleLines;
      const score = cleanRate * Math.min(sizeReduction, 1.0);
    
      return {
        original: {
          file: originalPath,
          lines: originalLines,
          functions: originalFunctions,
        },
        decomposed: {
          moduleCount: moduleFiles.length,
          totalLines: totalModuleLines,
          maxModuleLines,
          avgModuleLines: Math.round(totalModuleLines / Math.max(moduleFiles.length, 1)),
          modulesClean,
          cleanRate: Math.round(cleanRate * 100),
          modules: moduleStats,
        },
        refactoryScore: Math.round(score * 100) / 100,
        timestamp: new Date().toISOString(),
      };
    }
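  As a worked example of the score formula (hypothetical figures, not from the source): a 1,200-line monolith split into 4 modules, 3 of which load cleanly, with the largest module at 400 lines:

```javascript
// Hypothetical figures for illustration only.
const originalLines = 1200;
const moduleCount = 4;
const modulesClean = 3;
const maxModuleLines = 400;

const cleanRate = modulesClean / moduleCount; // 0.75
// Full credit when the largest module shrank below the original.
const sizeReduction = maxModuleLines < originalLines ? 1 : originalLines / maxModuleLines;
const score = cleanRate * Math.min(sizeReduction, 1.0); // 0.75

console.log(Math.round(score * 100) / 100); // 0.75
```

  Note that size reduction is binary unless a module outgrows the original: any shrink of the largest module earns full credit, so the clean rate dominates the score.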
  • Input schema for refactory_metrics. Accepts 'original' (path to monolith), 'moduleDir' (extracted modules dir), and optional 'testResults'.
    {
      name: "refactory_metrics",
      description: "Calculate before/after metrics and the Refactory Score (0-1). Measures health improvement, module quality, test preservation.",
      inputSchema: {
        type: "object",
        properties: {
          original: { type: "string", description: "Path to the original monolith" },
          moduleDir: { type: "string", description: "Directory containing extracted modules" },
          testResults: { type: "string", description: "Path to test results JSON (before/after)" },
        },
        required: ["original", "moduleDir"],
      },
    },
  • src/server.js:36-180 (registration)
    The TOOLS array registers refactory_metrics along with all other tools. Line 92 defines the name 'refactory_metrics'.
    const TOOLS = [
      {
        name: "refactory_analyze",
        description: "Analyze a source file for decomposition. Returns health score, function count, dependency graph, and recommended split points.",
        inputSchema: {
          type: "object",
          properties: {
            file: { type: "string", description: "Path to the monolith file to analyze" },
            language: { type: "string", description: "Language (js, ts, py). Auto-detected if omitted." },
          },
          required: ["file"],
        },
      },
      {
        name: "refactory_plan",
        description: "Generate a decomposition plan — module boundaries, function assignments, dependency order. Uses AST analysis + LLM reasoning.",
        inputSchema: {
          type: "object",
          properties: {
            file: { type: "string", description: "Path to the monolith file" },
            modules: { type: "number", description: "Target number of modules (auto if omitted)" },
            maxLines: { type: "number", description: "Max lines per module (default: 500)" },
            style: { type: "string", description: "Grouping style: 'functional' | 'domain' | 'layer'" },
          },
          required: ["file"],
        },
      },
      {
        name: "refactory_extract",
        description: "Extract one module from the monolith according to the plan. Routes to the cheapest capable free LLM API.",
        inputSchema: {
          type: "object",
          properties: {
            file: { type: "string", description: "Path to the monolith file" },
            module: { type: "string", description: "Module name to extract (from the plan)" },
            functions: { type: "array", items: { type: "string" }, description: "Function names to include" },
            outputDir: { type: "string", description: "Output directory for extracted module" },
            plan: { type: "string", description: "Path to the decomposition plan JSON" },
          },
          required: ["file", "module"],
        },
      },
      {
        name: "refactory_verify",
        description: "Verify a decomposed module: loads without errors, exports match plan, no circular deps, tests pass.",
        inputSchema: {
          type: "object",
          properties: {
            moduleDir: { type: "string", description: "Directory containing extracted modules" },
            original: { type: "string", description: "Path to the original monolith (for export comparison)" },
            testCmd: { type: "string", description: "Test command to run (e.g., 'npm test')" },
          },
          required: ["moduleDir"],
        },
      },
      {
        name: "refactory_metrics",
        description: "Calculate before/after metrics and the Refactory Score (0-1). Measures health improvement, module quality, test preservation.",
        inputSchema: {
          type: "object",
          properties: {
            original: { type: "string", description: "Path to the original monolith" },
            moduleDir: { type: "string", description: "Directory containing extracted modules" },
            testResults: { type: "string", description: "Path to test results JSON (before/after)" },
          },
          required: ["original", "moduleDir"],
        },
      },
      {
        name: "refactory_report",
        description: "Generate a decomposition report with metrics, dependency graphs, and Refactory Score. Outputs Markdown or HTML.",
        inputSchema: {
          type: "object",
          properties: {
            metricsFile: { type: "string", description: "Path to metrics JSON from refactory_metrics" },
            format: { type: "string", description: "'markdown' (default) or 'html'" },
            outputPath: { type: "string", description: "Where to write the report" },
          },
          required: ["metricsFile"],
        },
      },
      {
        name: "refactory_depmap",
        description: "Map dependencies for a file — who requires it (consumers), what it requires (dependencies), detect circular deps.",
        inputSchema: {
          type: "object",
          properties: {
            file: { type: "string", description: "Path to the file to map" },
            projectDir: { type: "string", description: "Project root directory" },
          },
          required: ["file"],
        },
      },
      {
        name: "refactory_characterize",
        description: "Generate characterization tests and golden export snapshot BEFORE decomposition. Captures behavioral contract.",
        inputSchema: {
          type: "object",
          properties: {
            file: { type: "string", description: "Path to the module to characterize" },
            outputDir: { type: "string", description: "Where to write test + golden files" },
          },
          required: ["file"],
        },
      },
      {
        name: "refactory_verify_exports",
        description: "Compare post-decomposition module against golden export snapshot. Reports missing, added, or type-changed exports.",
        inputSchema: {
          type: "object",
          properties: {
            goldenFile: { type: "string", description: "Path to .golden-exports.json from characterize" },
            newFile: { type: "string", description: "Path to the new re-export module" },
          },
          required: ["goldenFile", "newFile"],
        },
      },
      {
        name: "refactory_fix_imports",
        description: "Mechanically fix broken require() paths after module extraction. No LLM needed — pure path resolution.",
        inputSchema: {
          type: "object",
          properties: {
            moduleDir: { type: "string", description: "Directory containing extracted modules" },
            projectDir: { type: "string", description: "Project root to scan for consumers" },
            dryRun: { type: "boolean", description: "Report changes without writing (default: false)" },
          },
          required: ["moduleDir"],
        },
      },
      {
        name: "refactory_decompose",
        description: "Full decomposition pipeline in one call: analyze, depmap, characterize, plan, extract ALL modules, fix-imports, verify, metrics, re-export, report. The 'just do it' tool.",
        inputSchema: {
          type: "object",
          properties: {
            file: { type: "string", description: "Path to the monolith file to decompose" },
            outputDir: { type: "string", description: "Output directory (default: <dir>/lib/<basename>/ next to source)" },
            maxLines: { type: "number", description: "Max lines per module (default: 500)" },
            projectDir: { type: "string", description: "Project root for dependency mapping (optional)" },
          },
          required: ["file"],
        },
      },
    ];
  • src/server.js:201 (registration)
    Switch-case dispatch in the CallToolRequestSchema handler routes 'refactory_metrics' to the metrics() function.
    case "refactory_metrics": result = await metrics(args); break;
  • The decompose pipeline invokes metrics() at line 191 during its full decomposition flow (Step 8), passing original and moduleDir.
    // Step 8: Metrics
    let metricsResult;
    try {
      metricsResult = await metrics({ original: filePath, moduleDir: outputDir });
      result.steps.metrics = metricsResult;
    } catch (err) {
      throw new Error(`Step metrics failed: ${err.message}`);
    }
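    Based on the return object in the handler above, the metrics result has this shape (values illustrative, consistent with the formula: 3 of 4 modules clean, largest module smaller than the original):

```json
{
  "original": { "file": "/abs/path/app.js", "lines": 1200, "functions": 24 },
  "decomposed": {
    "moduleCount": 4,
    "totalLines": 1260,
    "maxModuleLines": 400,
    "avgModuleLines": 315,
    "modulesClean": 3,
    "cleanRate": 75,
    "modules": [{ "file": "core.js", "lines": 400, "clean": true }]
  },
  "refactoryScore": 0.75,
  "timestamp": "2025-01-01T00:00:00.000Z"
}
```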
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavior on its own. It does not say whether the tool is read-only, modifies data, or requires specific permissions, and it gives no detail on side effects or operation scope.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two front-loaded sentences: the first states the primary action, the second lists the measured aspects. No redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Does not describe the output format or structure, even though there is no output schema to fall back on. It also leaves 'before/after metrics' undefined and does not say how the Refactory Score is presented.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the parameters are already documented there. The description adds minimal context beyond the schema: it states that metrics are computed but does not elaborate on each parameter's role.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it calculates before/after metrics and Refactory Score. Describes measured aspects (health, module quality, test preservation), distinguishing it from sibling tools like refactory_analyze or refactory_characterize.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Offers no guidance on when to use this tool versus alternatives, and does not specify prerequisites or situations where it should not be used.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
