codedrop-codes

Refactory

refactory_analyze

Analyzes source files for decomposition, providing a health score, function count, dependency graph, and recommended split points to refactor monoliths into modules.

Instructions

Analyze a source file for decomposition. Returns health score, function count, dependency graph, and recommended split points.

Input Schema

Name      Required  Description                                       Default
file      Yes       Path to the monolith file to analyze              —
language  No        Language (js, ts, py). Auto-detected if omitted.  —
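
Given that schema, an MCP tools/call invocation might look like the following sketch (the file path and request id are illustrative, not from the source):

```javascript
// Hypothetical JSON-RPC payload an MCP client could send to invoke the tool.
// Only `file` is required; `language` may be omitted and will be auto-detected.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "refactory_analyze",
    arguments: {
      file: "src/server.js", // required: path to the monolith to analyze
      language: "js",        // optional: js, ts, or py
    },
  },
};
```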

Implementation Reference

  • The main handler function `analyze()` that executes the refactory_analyze tool logic. It reads the source file, extracts functions and dependencies (via AST or regex), enriches them with metrics, computes a health score, and returns the full analysis result.
    async function analyze(args) {
      const filePath = path.resolve(args.file);
      if (!fs.existsSync(filePath)) throw new Error(`File not found: ${filePath}`);
      const source = fs.readFileSync(filePath, "utf8");
      const lines = source.split("\n");
      const totalLines = lines.length;
      const deep = !!args.deep;
      const projectDir = args.projectDir ? path.resolve(args.projectDir) : path.dirname(filePath);
    
      let rawFns, rawDeps;
      if (sgParse && sgLang) {
        const root = sgParse(sgLang.JavaScript, source).root();
        rawFns = astExtractFunctions(root);
        rawDeps = astExtractRequires(root);
        // Merge ES imports
        const imps = regexExtractDeps(lines).filter(d => !rawDeps.some(r => r.module === d.module && r.line === d.line));
        rawDeps.push(...imps);
      } else {
        rawFns = regexExtractFunctions(lines);
        rawDeps = regexExtractDeps(lines);
      }
    
      const functions = enrichFunctions(rawFns, lines);
      const dependencies = deep ? buildDependencyMap(rawDeps, functions, lines) : rawDeps.map(d => ({
        module: d.module, line: d.line, isLocal: d.module.startsWith("."), isNpm: !d.module.startsWith(".")
      }));
      const health = calcHealth(totalLines, functions, rawDeps);
      const businessLogicFlags = deep ? detectBusinessLogic(lines) : [];
      const consumers = deep ? findConsumers(filePath, projectDir) : [];
    
      let recommendation = "ok";
      if (totalLines > 1000 || health.overall < 0.5) recommendation = "decompose";
      else if (totalLines > 500 || health.overall < 0.7) recommendation = "consider_decompose";
    
      return {
        file: filePath, lines: totalLines, analysisMode: sgParse ? "ast" : "regex", deep,
        functions: functions.length, functionList: functions,
        requires: rawDeps.length, requireList: rawDeps,
        internalRequires: rawDeps.filter(r => r.module.startsWith(".")).map(r => r.module),
        externalRequires: rawDeps.filter(r => !r.module.startsWith(".")).map(r => r.module),
        dependencies, health, recommendation,
        ...(deep ? {
          businessLogicFlags, consumers,
          riskSummary: {
            high: functions.filter(f => f.risk === "high").map(f => f.name),
            medium: functions.filter(f => f.risk === "medium").map(f => f.name),
            low: functions.filter(f => f.risk === "low").map(f => f.name),
          },
        } : {}),
      };
    }
  • Input schema definition for the refactory_analyze tool: takes 'file' (required) and 'language' (optional) parameters.
    inputSchema: {
      type: "object",
      properties: {
        file: { type: "string", description: "Path to the monolith file to analyze" },
        language: { type: "string", description: "Language (js, ts, py). Auto-detected if omitted." },
      },
      required: ["file"],
    },
  • src/server.js:36-48 (registration)
    Registration of the refactory_analyze tool in the TOOLS array, listing its name, description, and inputSchema.
    const TOOLS = [
      {
        name: "refactory_analyze",
        description: "Analyze a source file for decomposition. Returns health score, function count, dependency graph, and recommended split points.",
        inputSchema: {
          type: "object",
          properties: {
            file: { type: "string", description: "Path to the monolith file to analyze" },
            language: { type: "string", description: "Language (js, ts, py). Auto-detected if omitted." },
          },
          required: ["file"],
        },
      },
  • src/server.js:197-197 (registration)
    The case dispatch that routes 'refactory_analyze' tool calls to the `analyze()` handler function.
    case "refactory_analyze": result = await analyze(args); break;
  • The calcHealth function computes the health score (0-1) used inside analyze(), based on lines, function count, function size, coupling, and complexity.
    function calcHealth(totalLines, fns, deps) {
      const maxFnLen = fns.length ? Math.max(...fns.map(f => f.estLines || (f.endLine - f.startLine + 1))) : 0;
      const maxCx = fns.length ? Math.max(...fns.map(f => f.complexity || 1)) : 1;
      const ls = tier(totalLines, [[300,1],[500,.9],[1000,.7],[2000,.4]]) || .2;
      const fc = tier(fns.length, [[10,1],[20,.8],[30,.6],[50,.4]]) || .2;
      const fs_ = tier(maxFnLen, [[50,1],[100,.8],[200,.6],[500,.4]]) || .2;
      const cs = tier(deps.length, [[5,1],[10,.8],[20,.6]]) || .4;
      const cx = tier(maxCx, [[5,1],[10,.8],[20,.6]]) || .4;
      const o100 = fns.filter(f => (f.estLines || (f.endLine - f.startLine + 1)) > 100).length;
      const o300 = fns.filter(f => (f.estLines || (f.endLine - f.startLine + 1)) > 300).length;
      const penalty = o300 > 0 ? 0.3 : o100 > 3 ? 0.2 : 0;
      const overall = Math.max(0, Math.min(1, (ls*.25 + fc*.2 + fs_*.2 + cs*.15 + cx*.2) - penalty));
      return {
        overall: Math.round(overall * 100) / 100,
        linesScore: ls, fnCountScore: fc, fnSizeScore: fs_, couplingScore: cs, complexityScore: cx,
        maxFunctionLines: maxFnLen, maxComplexity: maxCx, functionsOver100Lines: o100, functionsOver300Lines: o300,
      };
    }
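
`calcHealth()` relies on a `tier()` helper that is not shown in the excerpt. A minimal sketch consistent with its call sites (the value is checked against ascending `[limit, score]` pairs, and a falsy result past the last limit lets the caller's `|| fallback` supply the floor score) might be:

```javascript
// Hypothetical sketch of the tier() helper assumed by calcHealth(); not from the source.
// Returns the score paired with the first limit the value fits under,
// or undefined when the value exceeds every limit, so the caller's
// `|| fallback` applies.
function tier(value, thresholds) {
  for (const [limit, score] of thresholds) {
    if (value <= limit) return score;
  }
  return undefined;
}

// e.g. tier(450, [[300, 1], [500, .9], [1000, .7], [2000, .4]]) → 0.9
```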
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It fails to disclose behavioral traits such as whether the tool modifies files (it appears to be read-only) or requires specific permissions; it lists only return values, not side effects or constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no wasted words. It front-loads the action ('Analyze a source file for decomposition') and lists outputs. However, it could be better structured (e.g., bullet points) for clarity when multiple outputs are mentioned.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (returns multiple structured items) and lack of output schema, the description is somewhat incomplete. It lists outputs but does not explain what 'health score' or 'recommended split points' mean. It meets minimal viability but leaves gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers both declared parameters with descriptions (100% coverage), including the note that `language` is auto-detected if omitted, which helps the agent understand optional parameter behavior. However, the handler also reads `deep` and `projectDir` arguments that the schema does not declare, so an agent cannot discover them from the schema or description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool analyzes a source file for decomposition and lists the specific outputs (health score, function count, etc.). It distinguishes from siblings like refactory_decompose (which likely performs the decomposition) and refactory_depmap (which focuses on dependency maps). However, it could be more explicit about how it differs from refactory_characterize.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, context, or exclusions. For an analysis tool among many siblings, usage guidelines are necessary but missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
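
As one illustration of the guidance the reviewer finds missing, a revised description could fold in a read-only disclosure, output semantics, and sibling-tool routing (suggested wording only; the sibling tool names `refactory_decompose` and `refactory_depmap` come from the Purpose section above):

```javascript
// Possible rewrite of the tool description incorporating the review's points.
// This is an editorial suggestion, not text from the source.
const description =
  "Analyze a source file for decomposition (read-only; no files are modified). " +
  "Returns a 0-1 health score, function count, dependency graph, and recommended split points. " +
  "Use this before refactory_decompose to decide whether a split is worthwhile; " +
  "for dependency mapping alone, consider refactory_depmap.";
```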
