Lambda Performance MCP Server

by jghidalgo

analyze_lambda_performance

Analyze AWS Lambda function performance metrics including cold starts, duration, and errors to identify optimization opportunities and reduce costs.

Instructions

Analyze Lambda function performance metrics including cold starts, duration, and errors

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| functionName | Yes | Name of the Lambda function to analyze | (none) |
| timeRange | No | Time range for analysis | 24h |
| includeDetails | No | Include detailed metrics and logs | true |
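
For example, a call that analyzes the last seven days without detailed logs might pass arguments like these (the function name is purely illustrative):

    {
      "functionName": "orders-api-handler",
      "timeRange": "7d",
      "includeDetails": false
    }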

Implementation Reference

  • The main handler function that processes the analyze_lambda_performance tool request, delegates to LambdaAnalyzer for the analysis, and formats the results into an MCP text response. (A sketch of the formatDetailedMetrics helper it references appears after this list.)
    async analyzeLambdaPerformance(args) {
      const { functionName, timeRange = '24h', includeDetails = true } = args;
      
      const analysis = await this.lambdaAnalyzer.analyzeFunction(
        functionName, 
        timeRange, 
        includeDetails
      );
    
      return {
        content: [
          {
            type: 'text',
            text: `# Lambda Performance Analysis: ${functionName}\n\n` +
                  `## Summary\n` +
                  `- **Total Invocations**: ${analysis.totalInvocations.toLocaleString()}\n` +
                  `- **Average Duration**: ${analysis.avgDuration}ms\n` +
                  `- **Cold Start Rate**: ${analysis.coldStartRate}%\n` +
                  `- **Error Rate**: ${analysis.errorRate}%\n` +
                  `- **Memory Utilization**: ${analysis.memoryUtilization}%\n\n` +
                  `## Performance Metrics\n` +
                  `- **P50 Duration**: ${analysis.p50Duration}ms\n` +
                  `- **P95 Duration**: ${analysis.p95Duration}ms\n` +
                  `- **P99 Duration**: ${analysis.p99Duration}ms\n` +
                  `- **Max Duration**: ${analysis.maxDuration}ms\n\n` +
                  `## Cold Start Analysis\n` +
                  `- **Total Cold Starts**: ${analysis.coldStarts.total}\n` +
                  `- **Average Cold Start Duration**: ${analysis.coldStarts.avgDuration}ms\n` +
                  `- **Cold Start Pattern**: ${analysis.coldStarts.pattern}\n\n` +
                  `${includeDetails ? this.formatDetailedMetrics(analysis.details) : ''}`
          }
        ]
      };
    }
  • Core helper method that performs the AWS API calls against Lambda, CloudWatch metrics, and CloudWatch Logs needed to compute a comprehensive performance analysis. (Sketches of the parseTimeRange and rate helpers it relies on appear after this list.)
    async analyzeFunction(functionName, timeRange, includeDetails) {
      const timeRangeMs = this.parseTimeRange(timeRange);
      const endTime = new Date();
      const startTime = new Date(endTime.getTime() - timeRangeMs);
    
      // Get function configuration
      const functionConfig = await this.getFunctionConfig(functionName);
      
      // Get CloudWatch metrics
      const metrics = await this.getMetrics(functionName, startTime, endTime);
      
      // Analyze cold starts from logs
      const coldStartAnalysis = await this.analyzeColdStartsFromLogs(functionName, startTime, endTime);
      
      // Calculate performance statistics
      const analysis = {
        functionName,
        timeRange,
        totalInvocations: metrics.invocations || 0,
        avgDuration: metrics.avgDuration || 0,
        p50Duration: metrics.p50Duration || 0,
        p95Duration: metrics.p95Duration || 0,
        p99Duration: metrics.p99Duration || 0,
        maxDuration: metrics.maxDuration || 0,
        errorRate: this.calculateErrorRate(metrics.errors, metrics.invocations),
        coldStartRate: this.calculateColdStartRate(coldStartAnalysis.total, metrics.invocations),
        memoryUtilization: await this.calculateMemoryUtilization(functionName, startTime, endTime),
        coldStarts: {
          total: coldStartAnalysis.total,
          avgDuration: coldStartAnalysis.avgDuration,
          pattern: coldStartAnalysis.pattern
        },
        config: functionConfig
      };
    
      if (includeDetails) {
        analysis.details = {
          errors: await this.analyzeErrors(functionName, startTime, endTime),
          trends: await this.analyzeTrends(functionName, startTime, endTime),
          memoryUsage: await this.getMemoryUsageDetails(functionName, startTime, endTime)
        };
      }
    
      return analysis;
    }
  • Input schema defining the parameters for the analyze_lambda_performance tool: functionName (required), timeRange, includeDetails.
    inputSchema: {
      type: 'object',
      properties: {
        functionName: {
          type: 'string',
          description: 'Name of the Lambda function to analyze'
        },
        timeRange: {
          type: 'string',
          enum: ['1h', '6h', '24h', '7d', '30d'],
          description: 'Time range for analysis (default: 24h)'
        },
        includeDetails: {
          type: 'boolean',
          description: 'Include detailed metrics and logs (default: true)'
        }
      },
      required: ['functionName']
    }
  • index.js:41-63 (registration)
    Tool registration in the ListTools response, including name, description, and input schema.
    {
      name: 'analyze_lambda_performance',
      description: 'Analyze Lambda function performance metrics including cold starts, duration, and errors',
      inputSchema: {
        type: 'object',
        properties: {
          functionName: {
            type: 'string',
            description: 'Name of the Lambda function to analyze'
          },
          timeRange: {
            type: 'string',
            enum: ['1h', '6h', '24h', '7d', '30d'],
            description: 'Time range for analysis (default: 24h)'
          },
          includeDetails: {
            type: 'boolean',
            description: 'Include detailed metrics and logs (default: true)'
          }
        },
        required: ['functionName']
      }
    },
  • index.js:211-212 (dispatch)
    Dispatch case in the CallToolRequestSchema handler that routes requests for this tool to the handler method; a sketch of how this wiring typically fits together appears after this list.
    case 'analyze_lambda_performance':
      return await this.analyzeLambdaPerformance(args);
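
The snippets above reference a few helpers that are not shown on this page. The sketches below are illustrative reconstructions, not the repository's actual code.

The handler's formatDetailedMetrics helper presumably renders the optional details object (errors, trends, memoryUsage) produced by analyzeFunction into the same markdown report. A minimal sketch, with assumed field names:

    formatDetailedMetrics(details) {
      // Assumed shape: details.errors is an array of { type, count };
      // details.trends and details.memoryUsage are plain objects.
      const errorLines = (details.errors || [])
        .map(e => `- **${e.type || 'Error'}**: ${e.count || 0} occurrences`)
        .join('\n');

      return `## Detailed Metrics\n` +
             `### Errors\n${errorLines || '- None recorded'}\n\n` +
             `### Trends\n` +
             `- **Invocation trend**: ${details.trends?.invocations ?? 'n/a'}\n` +
             `- **Duration trend**: ${details.trends?.duration ?? 'n/a'}\n\n` +
             `### Memory Usage\n` +
             `- **Average used**: ${details.memoryUsage?.avgUsedMb ?? 'n/a'} MB\n` +
             `- **Allocated**: ${details.memoryUsage?.allocatedMb ?? 'n/a'} MB`;
    }

Similarly, parseTimeRange and the two rate helpers called by analyzeFunction are easy to sketch, assuming the timeRange values from the input schema and a 0% rate whenever there were no invocations in the window:

    parseTimeRange(timeRange) {
      // Map the schema's enum values to milliseconds; fall back to 24h.
      const hour = 60 * 60 * 1000;
      const ranges = {
        '1h': hour,
        '6h': 6 * hour,
        '24h': 24 * hour,
        '7d': 7 * 24 * hour,
        '30d': 30 * 24 * hour
      };
      return ranges[timeRange] || ranges['24h'];
    }

    calculateErrorRate(errors, invocations) {
      if (!invocations) return 0;                  // avoid division by zero
      return Number(((errors || 0) / invocations * 100).toFixed(2));
    }

    calculateColdStartRate(coldStarts, invocations) {
      if (!invocations) return 0;
      return Number(((coldStarts || 0) / invocations * 100).toFixed(2));
    }

Finally, the registration and dispatch fragments normally sit inside setRequestHandler calls on an MCP Server instance. A stripped-down sketch of that wiring with the @modelcontextprotocol/sdk package (the server name, version, and lambdaServer instance below are placeholders, not the repository's actual values):

    import { Server } from '@modelcontextprotocol/sdk/server/index.js';
    import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
    import {
      CallToolRequestSchema,
      ListToolsRequestSchema
    } from '@modelcontextprotocol/sdk/types.js';

    const server = new Server(
      { name: 'lambda-performance-mcp', version: '1.0.0' },
      { capabilities: { tools: {} } }
    );

    // ListTools: advertise analyze_lambda_performance and its sibling tools.
    server.setRequestHandler(ListToolsRequestSchema, async () => ({
      tools: [ /* tool definitions as shown above */ ]
    }));

    // CallTool: route each request to the matching handler method.
    server.setRequestHandler(CallToolRequestSchema, async (request) => {
      const { name, arguments: args } = request.params;
      switch (name) {
        case 'analyze_lambda_performance':
          return await lambdaServer.analyzeLambdaPerformance(args);
        default:
          throw new Error(`Unknown tool: ${name}`);
      }
    });

    await server.connect(new StdioServerTransport());
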
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions which metrics are analyzed but doesn't disclose behavioral traits such as whether the operation is read-only, what permissions it requires, whether rate limits apply, or what the output looks like. For a performance analysis tool with no annotations, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
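
For reference, the MCP tool definition format supports an optional annotations object for exactly these behavioral hints. A sketch of how the registration could declare them; the values are assumptions about this tool's behavior rather than anything the repository states:

    {
      name: 'analyze_lambda_performance',
      description: 'Analyze Lambda function performance metrics including cold starts, duration, and errors',
      annotations: {
        title: 'Analyze Lambda Performance',
        readOnlyHint: true,       // assumed: only reads CloudWatch metrics/logs
        destructiveHint: false,
        idempotentHint: true,
        openWorldHint: true       // assumed: calls external AWS APIs
      },
      inputSchema: { /* as shown above */ }
    }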

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose. Every word earns its place by specifying the resource and key metrics without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete for a performance analysis tool. It doesn't explain what the analysis returns (e.g., aggregated metrics, time-series data, recommendations) or behavioral aspects like permissions or rate limits. With 3 parameters and multiple similar siblings, more context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description doesn't add any parameter-specific information beyond the schema; for example, it doesn't explain the significance of the timeRange options or what enabling includeDetails actually adds to the output. A baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Analyze') and the resource ('Lambda function performance metrics'), with specific metrics listed (cold starts, duration, errors). That distinguishes it from siblings like 'list_lambda_functions' or 'get_cost_analysis', but it doesn't explicitly differentiate the tool from closer neighbors such as 'compare_lambda_performance' or 'monitor_real_time_performance'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. With siblings like 'compare_lambda_performance', 'monitor_real_time_performance', and 'track_cold_starts', the description doesn't indicate whether this is for historical analysis, real-time monitoring, comparative analysis, or specialized cold start tracking.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
