Glama / Tiberriver256

Azure DevOps MCP Server

pipeline_timeline

Retrieve and filter Azure DevOps pipeline run stages and jobs by state and result to analyze execution progress and outcomes.

Instructions

Retrieve the timeline of stages and jobs for a pipeline run. To reduce the amount of data returned, you can filter by state and result.

Input Schema

Name        Required  Description
projectId   No        The ID or name of the project (default: MyProject)
runId       Yes       Run identifier
timelineId  No        Optional timeline identifier to select a specific timeline record
pipelineId  No        Optional pipeline numeric ID for reference only
state       No        Optional state filter (single value or array) applied to returned timeline records
result      No        Optional result filter (single value or array) applied to returned timeline records
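
For illustration, a call that keeps only completed records which failed or were canceled might pass arguments like the following (the project name and run ID are placeholders; note that state and result each accept a single value or an array):

```json
{
  "projectId": "MyProject",
  "runId": 1234,
  "state": "completed",
  "result": ["failed", "canceled"]
}
```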

Implementation Reference

  • The core handler function `getPipelineTimeline` that fetches the pipeline timeline from Azure DevOps API, applies optional filters by state and result, and handles errors.
    export async function getPipelineTimeline(
      connection: WebApi,
      options: GetPipelineTimelineOptions,
    ): Promise<PipelineTimeline> {
      try {
        const buildApi = await connection.getBuildApi();
        const projectId = options.projectId ?? defaultProject;
        const { runId, timelineId, state, result } = options;
    
        const route = `${encodeURIComponent(projectId)}/_apis/build/builds/${runId}/timeline`;
        const baseUrl = connection.serverUrl.replace(/\/+$/, '');
        const url = new URL(`${route}`, `${baseUrl}/`);
        url.searchParams.set('api-version', API_VERSION);
        if (timelineId) {
          url.searchParams.set('timelineId', timelineId);
        }
    
        const requestOptions = buildApi.createRequestOptions(
          'application/json',
          API_VERSION,
        );
    
        const response = await buildApi.rest.get<PipelineTimeline | null>(
          url.toString(),
          requestOptions,
        );
    
        if (response.statusCode === 404 || !response.result) {
          throw new AzureDevOpsResourceNotFoundError(
            `Timeline not found for run ${runId} in project ${projectId}`,
          );
        }
    
        const timeline = response.result as PipelineTimeline & {
          records?: TimelineRecord[];
        };
        const stateFilters = normalizeFilter(state);
        const resultFilters = normalizeFilter(result);
    
        if (Array.isArray(timeline.records) && (stateFilters || resultFilters)) {
          const filteredRecords = timeline.records.filter((record) => {
            const recordState = stateToString(record.state);
            const recordResult = resultToString(record.result);
    
            const stateMatch =
              !stateFilters || (recordState && stateFilters.has(recordState));
            const resultMatch =
              !resultFilters || (recordResult && resultFilters.has(recordResult));
    
            return stateMatch && resultMatch;
          });
    
          return {
            ...timeline,
            records: filteredRecords,
          } as PipelineTimeline;
        }
    
        return timeline;
      } catch (error) {
        if (error instanceof AzureDevOpsError) {
          throw error;
        }
    
        if (error instanceof Error) {
          const message = error.message.toLowerCase();
          if (
            message.includes('authentication') ||
            message.includes('unauthorized') ||
            message.includes('401')
          ) {
            throw new AzureDevOpsAuthenticationError(
              `Failed to authenticate: ${error.message}`,
            );
          }
    
          if (
            message.includes('not found') ||
            message.includes('does not exist') ||
            message.includes('404')
          ) {
            throw new AzureDevOpsResourceNotFoundError(
              `Pipeline timeline or project not found: ${error.message}`,
            );
          }
        }
    
        throw new AzureDevOpsError(
          `Failed to retrieve pipeline timeline: ${
            error instanceof Error ? error.message : String(error)
          }`,
        );
      }
    }
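The handler above relies on `stateToString` and `resultToString` helpers that are not shown. A minimal sketch, assuming the numeric `TimelineRecordState` and `TaskResult` enums from the Azure DevOps build interfaces, could map enum values to their lower-case names so they compare cleanly against the normalized filter sets:

```typescript
// Local copies of the relevant azure-devops-node-api enum shapes (assumed values).
enum TimelineRecordState {
  Pending = 0,
  InProgress = 1,
  Completed = 2,
}

enum TaskResult {
  Succeeded = 0,
  SucceededWithIssues = 1,
  Failed = 2,
  Canceled = 3,
  Skipped = 4,
  Abandoned = 5,
}

// Convert an enum value to its lower-case name, or undefined when unset,
// so a record can be matched against the lower-cased filter sets.
function stateToString(state?: TimelineRecordState): string | undefined {
  return state !== undefined
    ? TimelineRecordState[state]?.toLowerCase()
    : undefined;
}

function resultToString(result?: TaskResult): string | undefined {
  return result !== undefined ? TaskResult[result]?.toLowerCase() : undefined;
}
```

Because both helpers lower-case their output, the filters produced by `normalizeFilter` (which also lower-cases) match case-insensitively.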
  • Tool registration definition in the pipelinesTools array, specifying name, description, input schema, and MCP enabled flag.
    {
      name: 'pipeline_timeline',
      description:
        'Retrieve the timeline of stages and jobs for a pipeline run, to reduce the amount of data returned, you can filter by state and result',
      inputSchema: zodToJsonSchema(GetPipelineTimelineSchema),
      mcp_enabled: true,
    },
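`zodToJsonSchema(GetPipelineTimelineSchema)` emits a JSON Schema matching the parameter table above; a representative rendering (hand-written here for illustration, not generated output) might look like:

```json
{
  "type": "object",
  "properties": {
    "projectId": { "type": "string", "description": "The ID or name of the project" },
    "runId": { "type": "number", "description": "Run identifier" },
    "timelineId": { "type": "string" },
    "pipelineId": { "type": "number" },
    "state": {
      "anyOf": [
        { "type": "string" },
        { "type": "array", "items": { "type": "string" } }
      ]
    },
    "result": {
      "anyOf": [
        { "type": "string" },
        { "type": "array", "items": { "type": "string" } }
      ]
    }
  },
  "required": ["runId"]
}
```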
  • Dispatcher case in handlePipelinesRequest that parses arguments, calls the handler, and formats the response for the pipeline_timeline tool.
    case 'pipeline_timeline': {
      const args = GetPipelineTimelineSchema.parse(request.params.arguments);
      const result = await getPipelineTimeline(connection, {
        ...args,
        projectId: args.projectId ?? defaultProject,
      });
      return {
        content: [{ type: 'text', text: JSON.stringify(result, null, 2) }],
      };
    }
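The dispatcher therefore returns the (possibly filtered) timeline serialized as pretty-printed JSON inside a single text content block. A trimmed, hypothetical record from such a response (field values are illustrative):

```json
{
  "records": [
    {
      "id": "00000000-0000-0000-0000-000000000001",
      "type": "Stage",
      "name": "Build",
      "state": "completed",
      "result": "failed"
    }
  ]
}
```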
  • Helper function to normalize filter values (state or result) into a Set for efficient matching.
    function normalizeFilter(value?: string | string[]): Set<string> | undefined {
      if (!value) {
        return undefined;
      }
    
      const values = Array.isArray(value) ? value : [value];
      const normalized = values
        .map((item) => (typeof item === 'string' ? item.trim().toLowerCase() : ''))
        .filter((item) => item.length > 0);
    
      return normalized.length > 0 ? new Set(normalized) : undefined;
    }
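To illustrate the helper's behavior, the snippet below repeats `normalizeFilter` verbatim and exercises it with mixed-case input and blanks:

```typescript
function normalizeFilter(value?: string | string[]): Set<string> | undefined {
  if (!value) {
    return undefined;
  }

  const values = Array.isArray(value) ? value : [value];
  const normalized = values
    .map((item) => (typeof item === 'string' ? item.trim().toLowerCase() : ''))
    .filter((item) => item.length > 0);

  return normalized.length > 0 ? new Set(normalized) : undefined;
}

// Mixed casing and stray whitespace collapse into one lower-case set:
// the set contains 'completed' and 'inprogress'.
const filters = normalizeFilter([' Completed', 'inProgress', '']);

// Empty or all-blank input yields undefined, i.e. "apply no filtering".
const none = normalizeFilter(['   ']);
```

Returning `undefined` rather than an empty set keeps the caller's `!stateFilters || …` short-circuit simple: an absent filter always matches.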
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions filtering to reduce data, which hints at performance considerations, but doesn't disclose key behavioral traits such as whether this is a read-only operation, potential rate limits, authentication needs, error conditions, or what the return format looks like (e.g., structured timeline data). This is inadequate for a tool with multiple parameters and no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
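
One way to close this gap, sketched below against the MCP tool-annotations fields, would be to attach behavioral hints at registration time (the object shown is a hypothetical addition to this server, not existing code):

```typescript
// Hypothetical annotations for the pipeline_timeline registration entry:
// MCP tool annotations declaring a read-only, idempotent call that
// reaches out to an external Azure DevOps instance.
const pipelineTimelineAnnotations = {
  title: 'Get pipeline run timeline',
  readOnlyHint: true, // fetches data; never mutates the pipeline
  destructiveHint: false, // no destructive side effects
  idempotentHint: true, // repeated calls return the same timeline
  openWorldHint: true, // interacts with an external service
};
```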

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and includes a practical tip about filtering. There's no wasted verbiage, though it could be slightly more structured by separating purpose from usage guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (6 parameters, no output schema, no annotations), the description is incomplete. It doesn't explain the return values, error handling, or behavioral nuances like pagination or data format. For a tool that retrieves timeline data with filtering options, more context is needed to guide effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description adds marginal value by mentioning the purpose of filtering ('to reduce the amount of data returned') and naming 'state' and 'result' as filterable fields, but doesn't provide additional syntax or format details beyond what the schema provides. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'retrieve' and the resource 'timeline of stages and jobs for a pipeline run', which is specific and actionable. However, it doesn't explicitly differentiate from sibling tools like 'get_pipeline_run' or 'get_pipeline_log', which might also retrieve pipeline-related data, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning filtering options 'to reduce the amount of data returned', which suggests when to use filters, but it doesn't provide explicit guidance on when to choose this tool over alternatives like 'get_pipeline_run' or 'list_pipeline_runs'. No exclusions or prerequisites are stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
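
Pulling these review threads together, a revised description might read as follows (the sibling tool names come from the review above and are assumptions about this server's catalog):

```typescript
// Illustrative only: a description addressing the gaps noted in the review
// (read-only behavior, return format, relationship to sibling tools).
const revisedDescription =
  'Read-only: retrieve the timeline of stages, jobs, and tasks for a ' +
  'pipeline run as structured JSON. Pass state and/or result filters ' +
  '(single value or array) to shrink the payload. Use list_pipeline_runs ' +
  'to discover a runId first; prefer get_pipeline_log when you need raw ' +
  'log output rather than execution structure.';
```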
