
AWS CodePipeline MCP Server

by cuongdev

get_pipeline_metrics

Retrieve performance metrics for AWS CodePipeline to monitor execution times, success rates, and pipeline health over specified time periods.

Instructions

Get performance metrics for a pipeline

Input Schema

Name           Required  Description                                              Default
pipelineName   Yes       Name of the pipeline                                     (none)
period         No        Time period in seconds for the metrics (1 day = 86400)   86400
startTime      No        Start time for metrics in ISO format                     1 week ago
endTime        No        End time for metrics in ISO format                       now
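As a sketch, a tool call might pass arguments shaped like the following. The pipeline name and time values here are illustrative, not taken from the server:

```typescript
// Hypothetical arguments for a get_pipeline_metrics tool call.
// Only pipelineName is required; the other fields fall back to
// their documented defaults when omitted.
const args = {
  pipelineName: "my-app-pipeline",   // required
  period: 3600,                      // 1-hour buckets instead of the 86400 default
  startTime: "2024-01-01T00:00:00Z", // ISO 8601
  endTime: "2024-01-08T00:00:00Z",
};

console.log(JSON.stringify(args));
```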

Implementation Reference

  • The core handler function that implements the 'get_pipeline_metrics' tool logic. It retrieves performance metrics including success/failure rates, execution times from CloudWatch, and detailed stage durations from recent pipeline executions using the CodePipeline API.
    import AWS from "aws-sdk"; // AWS SDK v2, as implied by the .promise() calls below

    export async function getPipelineMetrics(
      codePipelineManager: CodePipelineManager, 
      input: {
        pipelineName: string;
        period?: number;
        startTime?: string;
        endTime?: string;
      }
    ) {
      const { pipelineName, period = 86400 } = input;
      
      // Set default time range if not provided
      const endTime = input.endTime ? new Date(input.endTime) : new Date();
      const startTime = input.startTime ? new Date(input.startTime) : new Date(endTime.getTime() - 7 * 24 * 60 * 60 * 1000); // 1 week ago
      
      // Create CloudWatch client
      const cloudwatch = new AWS.CloudWatch({
        region: codePipelineManager.getCodePipeline().config.region
      });
      
      // Get execution success/failure metrics
      const successMetric = await cloudwatch.getMetricStatistics({
        Namespace: 'AWS/CodePipeline',
        MetricName: 'SucceededPipeline',
        Dimensions: [{ Name: 'PipelineName', Value: pipelineName }],
        StartTime: startTime,
        EndTime: endTime,
        Period: period,
        Statistics: ['Sum', 'Average', 'Maximum']
      }).promise();
      
      const failedMetric = await cloudwatch.getMetricStatistics({
        Namespace: 'AWS/CodePipeline',
        MetricName: 'FailedPipeline',
        Dimensions: [{ Name: 'PipelineName', Value: pipelineName }],
        StartTime: startTime,
        EndTime: endTime,
        Period: period,
        Statistics: ['Sum', 'Average', 'Maximum']
      }).promise();
      
      // Get execution time metrics
      const executionTimeMetric = await cloudwatch.getMetricStatistics({
        Namespace: 'AWS/CodePipeline',
        MetricName: 'PipelineExecutionTime',
        Dimensions: [{ Name: 'PipelineName', Value: pipelineName }],
        StartTime: startTime,
        EndTime: endTime,
        Period: period,
        Statistics: ['Average', 'Minimum', 'Maximum']
      }).promise();
      
      // Calculate success rate
      const totalSuccessful = successMetric.Datapoints?.reduce((sum, point) => sum + (point.Sum || 0), 0) || 0;
      const totalFailed = failedMetric.Datapoints?.reduce((sum, point) => sum + (point.Sum || 0), 0) || 0;
      const totalExecutions = totalSuccessful + totalFailed;
      const successRate = totalExecutions > 0 ? (totalSuccessful / totalExecutions) * 100 : 0;
      
      // Format execution time data
      const executionTimeData = executionTimeMetric.Datapoints?.map(point => ({
        timestamp: point.Timestamp?.toISOString(),
        average: point.Average,
        minimum: point.Minimum,
        maximum: point.Maximum
      })) || [];
      
      // Get pipeline executions for the period
      const codepipeline = codePipelineManager.getCodePipeline();
      const pipelineExecutions = await codepipeline.listPipelineExecutions({
        pipelineName,
        maxResults: 20 // Limit to recent executions
      }).promise();
      
      // Calculate average stage duration
      const stageMetrics: Record<string, { count: number, totalDuration: number }> = {};
      
      // We would need to fetch each execution detail to get accurate stage timing
      // This is a simplified version using the available data
      for (const execution of pipelineExecutions.pipelineExecutionSummaries || []) {
        if (execution.startTime && execution.status === 'Succeeded') {
          const executionDetail = await codepipeline.getPipelineExecution({
            pipelineName,
            pipelineExecutionId: execution.pipelineExecutionId || ''
          }).promise();
          
          // Get pipeline state to analyze stage timing
          const pipelineState = await codepipeline.getPipelineState({
            name: pipelineName
          }).promise();
          
          // Process stage information
          for (const stage of pipelineState.stageStates || []) {
            if (stage.latestExecution?.status === 'Succeeded' && 
                stage.stageName && 
                stage.actionStates && 
                stage.actionStates.length > 0) {
              
              // Find earliest and latest action timestamps
              let earliestTime: Date | undefined;
              let latestTime: Date | undefined;
              
              for (const action of stage.actionStates) {
                if (action.latestExecution?.lastStatusChange) {
                  const timestamp = new Date(action.latestExecution.lastStatusChange);
                  
                  if (!earliestTime || timestamp < earliestTime) {
                    earliestTime = timestamp;
                  }
                  
                  if (!latestTime || timestamp > latestTime) {
                    latestTime = timestamp;
                  }
                }
              }
              
              // Calculate duration if we have both timestamps
              if (earliestTime && latestTime) {
                const stageName = stage.stageName;
                const duration = (latestTime.getTime() - earliestTime.getTime()) / 1000; // in seconds
                
                if (!stageMetrics[stageName]) {
                  stageMetrics[stageName] = { count: 0, totalDuration: 0 };
                }
                
                stageMetrics[stageName].count += 1;
                stageMetrics[stageName].totalDuration += duration;
              }
            }
          }
        }
      }
      
      // Calculate average duration for each stage
      const stageDurations = Object.entries(stageMetrics).map(([stageName, metrics]) => ({
        stageName,
        averageDuration: metrics.count > 0 ? metrics.totalDuration / metrics.count : 0,
        executionCount: metrics.count
      }));
      
      // Prepare the metrics result
      const metrics = {
        pipelineName,
        timeRange: {
          startTime: startTime.toISOString(),
          endTime: endTime.toISOString(),
          periodSeconds: period
        },
        executionStats: {
          totalExecutions,
          successfulExecutions: totalSuccessful,
          failedExecutions: totalFailed,
          successRate: successRate.toFixed(2) + '%'
        },
        executionTime: {
          average: executionTimeMetric.Datapoints?.length ? 
            executionTimeMetric.Datapoints.reduce((sum, point) => sum + (point.Average || 0), 0) / executionTimeMetric.Datapoints.length : 
            0,
          minimum: Math.min(...(executionTimeMetric.Datapoints?.map(point => point.Minimum || 0) || [0])),
          maximum: Math.max(...(executionTimeMetric.Datapoints?.map(point => point.Maximum || 0) || [0])),
          dataPoints: executionTimeData
        },
        stagePerformance: stageDurations
      };
    
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(metrics, null, 2),
          },
        ],
      };
    }
  • Defines the input/output schema for the 'get_pipeline_metrics' tool, including required pipelineName and optional time parameters.
    export const getPipelineMetricsSchema = {
      name: "get_pipeline_metrics",
      description: "Get performance metrics for a pipeline",
      inputSchema: {
        type: "object",
        properties: {
          pipelineName: { 
            type: "string",
            description: "Name of the pipeline"
          },
          period: {
            type: "number",
            description: "Time period in seconds for the metrics (default: 86400 - 1 day)",
            default: 86400
          },
          startTime: {
            type: "string",
            description: "Start time for metrics in ISO format (default: 1 week ago)",
            format: "date-time"
          },
          endTime: {
            type: "string",
            description: "End time for metrics in ISO format (default: now)",
            format: "date-time"
          }
        },
        required: ["pipelineName"],
      },
    } as const;
  • src/index.ts:219-226 (registration)
    Registers and dispatches the 'get_pipeline_metrics' tool handler within the main tool call switch statement in the MCP server.
    case "get_pipeline_metrics": {
      return await getPipelineMetrics(codePipelineManager, input as {
        pipelineName: string;
        period?: number;
        startTime?: string;
        endTime?: string;
      });
    }
  • src/index.ts:110-128 (registration)
    Registers the tool schema in the list of available tools returned by the MCP server's ListTools handler.
    server.setRequestHandler(ListToolsRequestSchema, async () => {
      return {
        tools: [
          listPipelinesSchema,
          getPipelineStateSchema,
          listPipelineExecutionsSchema,
          approveActionSchema,
          retryStageSchema,
          triggerPipelineSchema,
          getPipelineExecutionLogsSchema,
          stopPipelineExecutionSchema,
          // Add new tool schemas
          getPipelineDetailsSchema,
          tagPipelineResourceSchema,
          createPipelineWebhookSchema,
          getPipelineMetricsSchema,
        ],
      };
    });
  • src/index.ts:66-68 (registration)
    Imports the handler function and schema from the dedicated tool module.
      getPipelineMetrics,
      getPipelineMetricsSchema
    } from "./tools/get_pipeline_metrics.js";
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states 'Get performance metrics', implying a read-only operation, but does not specify whether the tool requires authentication, is subject to rate limits, returns real-time or historical data, or what format the metrics are returned in. This leaves significant gaps in understanding the tool's behavior beyond its basic purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without any unnecessary words. It is front-loaded with the core action and resource, making it easy to parse quickly, which is ideal for conciseness in tool descriptions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a metrics tool with 4 parameters and no output schema or annotations, the description is incomplete. It does not explain what 'performance metrics' entail (e.g., throughput, latency), how results are structured, or any behavioral traits like data freshness or access controls, leaving the agent with insufficient context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with clear parameter details like defaults for 'period', 'startTime', and 'endTime'. The description adds no additional parameter semantics beyond what the schema provides, such as explaining what 'performance metrics' include or how parameters interact. This meets the baseline for high schema coverage but does not enhance understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'performance metrics for a pipeline', making the purpose specific and understandable. However, it does not distinguish this tool from potential siblings like 'get_pipeline_details' or 'get_pipeline_state', which might also retrieve pipeline-related data, leaving some ambiguity in differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'get_pipeline_details' and 'get_pipeline_state', it does not specify if this is for performance data only, nor does it mention prerequisites or exclusions, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
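One way to act on these findings is to expand the `description` field in `getPipelineMetricsSchema`. The wording below is a sketch of such a revision, not the server's actual text; the permission and sibling-tool details it mentions are inferred from the implementation above:

```typescript
// Hypothetical expanded description addressing the review's gaps:
// read-only disclosure, output shape, and differentiation from sibling tools.
const revisedDescription =
  "Get historical performance metrics for a pipeline from CloudWatch: " +
  "success/failure counts, success rate, execution times, and per-stage " +
  "average durations over a time window (default: last 7 days). Read-only; " +
  "requires CloudWatch and CodePipeline read permissions. Use " +
  "get_pipeline_state for current status and get_pipeline_details for " +
  "pipeline structure.";

console.log(revisedDescription);
```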

