analyze_local_project

Analyze local project directories to detect stateful code patterns and receive guidance for migrating to stateless architectures in .NET and Java applications.

Instructions

Analyze a local project directory for stateful code patterns

Input Schema

| Name        | Required | Description                        | Default |
|-------------|----------|------------------------------------|---------|
| projectPath | Yes      | Absolute path to project directory | (none)  |

Implementation Reference

  • The execute function implements the core logic: it validates the project path, zips the directory with projectZipper, calls the Statelessor API via apiClient, formats the result for the client, handles errors, and cleans up the temporary ZIP file in a finally block.
    async execute(args) {
      let zipFilePath = null;
    
      try {
        const { projectPath } = args;
    
        // Validate project path
        const stats = await fs.stat(projectPath);
        if (!stats.isDirectory()) {
          throw new Error('Project path must be a directory');
        }
    
        // ZIP the project
        zipFilePath = await projectZipper.zipProject(projectPath);
    
        // Call Statelessor API
        const result = await apiClient.analyzeLocalProject(zipFilePath);
    
        // Format result for Amazon Q
        return {
          content: [
            {
              type: 'text',
              text: resultFormatter.formatAnalysisResult(result),
            },
          ],
        };
      } catch (error) {
        return {
          content: [
            {
              type: 'text',
              text: `Error analyzing local project: ${error.message}`,
            },
          ],
          isError: true,
        };
      } finally {
        // Cleanup ZIP file
        if (zipFilePath) {
          try {
            await fs.unlink(zipFilePath);
          } catch (err) {
            console.error('Failed to cleanup ZIP file:', err);
          }
        }
      }
    },
  • The tool definition object, including name, description, and inputSchema specifying the required 'projectPath' parameter.
    definition: {
      name: 'analyze_local_project',
      description: 'Analyze a local project directory for stateful code patterns',
      inputSchema: {
        type: 'object',
        properties: {
          projectPath: {
            type: 'string',
            description: 'Absolute path to project directory',
          },
        },
        required: ['projectPath'],
      },
    },
  • mcp-server.js:57-58 (registration)
    Registration in the MCP server switch statement: dispatches tool calls to analyzeLocalTool.execute.
    case 'analyze_local_project':
      return await analyzeLocalTool.execute(args);
  • mcp-server.js:42-42 (registration)
    Tool definition registered in the ListToolsRequestHandler response.
    analyzeLocalTool.definition,
  • API client method that sends the ZIP file to the Statelessor /analyze endpoint for processing.
    async analyzeLocalProject(zipFilePath) {
      const requestId = this.generateRequestId();
      
      try {
        const formData = new FormData();
        formData.append('type', 'zip');
        formData.append('zipFile', fs.createReadStream(zipFilePath)); // multipart field name expected by the /analyze endpoint
    
        const response = await this.client.post('/analyze', formData, {
          headers: {
            'X-Request-ID': requestId,
            ...formData.getHeaders(),
          },
        });
    
        return response.data;
      } catch (error) {
        throw this.handleError(error, 'analyzeLocalProject');
      }
    }
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the analysis purpose but doesn't describe what the tool actually does behaviorally—e.g., whether it scans files, runs static analysis, returns a report, has side effects, requires specific permissions, or handles errors. For a tool with zero annotation coverage, this leaves significant gaps in understanding its operation and safety.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
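
One way to close that gap is to add MCP tool annotations next to the definition. A hedged sketch (the hint values below are assumptions about this tool's behavior inferred from the implementation excerpt, not confirmed by the project):

```javascript
// Hypothetical annotations block for analyze_local_project.
// Assumed values: the tool writes a temporary ZIP (a side effect on temp
// storage) and uploads it to an external API, but never modifies sources.
const annotations = {
  title: 'Analyze Local Project',
  readOnlyHint: false,    // writes a temporary ZIP file during analysis
  destructiveHint: false, // never modifies or deletes project sources
  idempotentHint: true,   // re-running on the same tree yields the same report
  openWorldHint: true,    // uploads the ZIP to the external Statelessor API
};
```

Even with annotations in place, the description should still mention the ZIP-and-upload behavior in prose, since many clients only surface the description.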

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is front-loaded with the core action and target, making it easy to parse. Every part of the sentence contributes meaning, earning its place with zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (analysis of code patterns), lack of annotations, and no output schema, the description is incomplete. It doesn't explain what 'analyze' entails behaviorally, what 'stateful code patterns' means, or what the output looks like (e.g., a report, list of findings). For a tool with no structured data to clarify these aspects, the description should provide more context to be fully helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'projectPath' documented as 'Absolute path to project directory'. The description adds no additional parameter semantics beyond implying analysis of a 'local project directory', which aligns with the schema. With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but doesn't need to.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
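
The schema itself could also carry more of this weight. A sketch of a tightened projectPath property (the added constraints and example path are illustrative, not taken from the project):

```javascript
// Hypothetical enriched schema: same parameter, more machine-checkable intent.
const inputSchema = {
  type: 'object',
  properties: {
    projectPath: {
      type: 'string',
      description:
        'Absolute path to the project directory to analyze ' +
        '(e.g. /home/user/my-app); relative paths are rejected.',
      minLength: 1,
      pattern: '^(/|[A-Za-z]:\\\\)', // POSIX or Windows absolute path
    },
  },
  required: ['projectPath'],
  additionalProperties: false,
};
```

Constraints like pattern and minLength let a client reject malformed input before the tool ever zips and uploads anything.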

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('analyze') and target ('local project directory'), with the specific purpose of detecting 'stateful code patterns'. It distinguishes from siblings like 'analyze_git_repository' by specifying 'local' vs. 'git', but doesn't fully differentiate from 'get_project_findings' which might retrieve results. The purpose is specific but sibling differentiation could be more explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'analyze_git_repository' for remote repositories or 'get_project_findings' for retrieving results. It implies usage for local projects but lacks explicit when/when-not instructions or prerequisites, leaving the agent to infer context without clear direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
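
A description that encodes this routing guidance might read as follows (the sibling tool names are taken from the review above; the wording is a suggestion, not the project's actual text):

```javascript
// Hypothetical rewritten description with explicit when/when-not guidance.
const description =
  'Analyze a LOCAL project directory on disk for stateful code patterns ' +
  '(.NET and Java). Zips the directory and uploads it to the Statelessor ' +
  'API, then returns a findings report. Use analyze_git_repository for ' +
  'remote repositories, and get_project_findings to re-read results from ' +
  'a previous analysis instead of re-running it.';
```

Naming the alternatives directly in the description gives an agent the "use X instead of Y when Z" rule without any extra documentation lookup.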
