mcp-github-project-manager

get_sprint_metrics

Retrieve progress metrics for a specific GitHub sprint, including issue details when requested, to track development workflow performance.

Instructions

Get progress metrics for a specific sprint

Input Schema

| Name          | Required | Description | Default |
|---------------|----------|-------------|---------|
| sprintId      | Yes      |             |         |
| includeIssues | Yes      |             |         |

Implementation Reference

  • Core implementation of getSprintMetrics: fetches sprint by ID, loads associated issues, calculates completion metrics (total/completed/remaining issues, percentage), determines days remaining and active status, optionally includes issue details.
    async getSprintMetrics(id: string, includeIssues: boolean = false): Promise<SprintMetrics> {
      try {
        const sprint = await this.sprintRepo.findById(id);
        if (!sprint) {
          throw new ResourceNotFoundError(ResourceType.SPRINT, id);
        }
    
        const issuePromises = sprint.issues.map((issueId: string) => this.issueRepo.findById(issueId));
        const issuesResult = await Promise.all(issuePromises);
        const issues = issuesResult.filter((issue: Issue | null) => issue !== null) as Issue[];
    
        const totalIssues = issues.length;
        const completedIssues = issues.filter(
          issue => issue.status === ResourceStatus.CLOSED || issue.status === ResourceStatus.COMPLETED
        ).length;
        const remainingIssues = totalIssues - completedIssues;
        const completionPercentage = totalIssues > 0 ? Math.round((completedIssues / totalIssues) * 100) : 0;
    
        const now = new Date();
        const endDate = new Date(sprint.endDate);
        const daysRemaining = Math.ceil((endDate.getTime() - now.getTime()) / (1000 * 60 * 60 * 24));
        const isActive = now >= new Date(sprint.startDate) && now <= endDate;
    
        return {
          id: sprint.id,
          title: sprint.title,
          startDate: sprint.startDate,
          endDate: sprint.endDate,
          totalIssues,
          completedIssues,
          remainingIssues,
          completionPercentage,
          status: sprint.status,
          issues: includeIssues ? issues : undefined,
          daysRemaining,
          isActive
        };
      } catch (error) {
        throw this.mapErrorToMCPError(error);
      }
    }
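The completion and schedule math in the method above can be exercised in isolation. A minimal self-contained sketch (the helper names and the lower-case status strings are illustrative, not taken from the repository):

```typescript
// Standalone reimplementation of the metric math from getSprintMetrics,
// for illustration only; status values are assumed lower-case strings here.
interface IssueLike { status: string }

function completionMetrics(issues: IssueLike[]) {
  const totalIssues = issues.length;
  const completedIssues = issues.filter(
    i => i.status === "closed" || i.status === "completed"
  ).length;
  const remainingIssues = totalIssues - completedIssues;
  // Guard against division by zero for empty sprints, as the original does.
  const completionPercentage =
    totalIssues > 0 ? Math.round((completedIssues / totalIssues) * 100) : 0;
  return { totalIssues, completedIssues, remainingIssues, completionPercentage };
}

function scheduleMetrics(startDate: string, endDate: string, now = new Date()) {
  const end = new Date(endDate);
  // Ceil so a partial day still counts as one remaining day.
  const daysRemaining = Math.ceil((end.getTime() - now.getTime()) / (1000 * 60 * 60 * 24));
  const isActive = now >= new Date(startDate) && now <= end;
  return { daysRemaining, isActive };
}
```

Note that `daysRemaining` goes negative for sprints past their end date, which callers may want to clamp.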
  • Defines the ToolDefinition for get_sprint_metrics including name, description, Zod input schema (sprintId: string, includeIssues: boolean), and usage example.
    export const getSprintMetricsTool: ToolDefinition<GetSprintMetricsArgs> = {
      name: "get_sprint_metrics",
      description: "Get progress metrics for a specific sprint",
      schema: getSprintMetricsSchema as unknown as ToolSchema<GetSprintMetricsArgs>,
      examples: [
        {
          name: "Get sprint progress",
          description: "Get progress metrics for sprint 'sprint_1'",
          args: {
            sprintId: "sprint_1",
            includeIssues: true,
          },
        },
      ],
    };
  • Registers the getSprintMetricsTool in the central ToolRegistry singleton during initialization.
    this.registerTool(getSprintMetricsTool);
  • MCP server request handler dispatches get_sprint_metrics calls to ProjectManagementService.getSprintMetrics after validation.
    case "get_sprint_metrics":
      return await this.service.getSprintMetrics(args.sprintId, args.includeIssues);
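The dispatch above assumes `args` has already passed validation. A hand-rolled runtime guard equivalent to what the Zod schema enforces might look like the following; this is an illustrative sketch, not the project's actual validation code:

```typescript
// Illustrative runtime guard for the dispatched arguments; the real project
// validates against its Zod schema before reaching the switch case.
interface GetSprintMetricsArgs {
  sprintId: string;
  includeIssues?: boolean;
}

function parseGetSprintMetricsArgs(raw: unknown): GetSprintMetricsArgs {
  const obj = raw as Record<string, unknown>;
  if (typeof obj?.sprintId !== "string" || obj.sprintId.length === 0) {
    throw new Error("sprintId must be a non-empty string");
  }
  if (obj.includeIssues !== undefined && typeof obj.includeIssues !== "boolean") {
    throw new Error("includeIssues must be a boolean when provided");
  }
  return {
    sprintId: obj.sprintId,
    includeIssues: obj.includeIssues as boolean | undefined,
  };
}
```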
  • TypeScript interface defining the structure of SprintMetrics returned by the handler.
    export interface SprintMetrics {
      id: string;
      title: string;
      startDate: string;
      endDate: string;
      totalIssues: number;
      completedIssues: number;
      remainingIssues: number;
      completionPercentage: number;
      status: ResourceStatus;
      issues?: Issue[];
      daysRemaining?: number;
      isActive: boolean;
    }
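A hypothetical value conforming to this interface, with `includeIssues: false` so the optional `issues` array is omitted (all values invented for illustration):

```typescript
// Example SprintMetrics value; every field value here is invented.
const example = {
  id: "sprint_1",
  title: "Sprint 1",
  startDate: "2024-01-01T00:00:00Z",
  endDate: "2024-01-14T00:00:00Z",
  totalIssues: 10,
  completedIssues: 6,
  remainingIssues: 4,
  completionPercentage: 60,
  status: "active",
  daysRemaining: 5,
  isActive: true,
};
```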
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. 'Get progress metrics' implies a read-only operation, but it doesn't disclose behavioral traits such as required permissions, rate limits, what 'progress metrics' includes (e.g., burndown charts, velocity), or whether it's a safe, non-destructive query. This leaves significant gaps in an agent's understanding of how the tool behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a metrics tool with 2 parameters, 0% schema coverage, no annotations, and no output schema, the description is incomplete. It doesn't explain what 'progress metrics' entails, how results are structured, or any prerequisites, leaving the agent with insufficient context to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning the input schema provides no descriptions for 'sprintId' or 'includeIssues'. The description adds no parameter semantics beyond implying a 'specific sprint' for 'sprintId'. It doesn't explain what 'includeIssues' does (e.g., whether it adds issue details to metrics) or provide format details, failing to compensate for the low coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
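One way to close the coverage gap the review identifies is to attach descriptions (and the code's default) directly to the input schema. A hypothetical improvement, not taken from the repository:

```typescript
// Suggested (hypothetical) input schema with full description coverage;
// the wording and the 'false' default mirror the handler's behavior above.
const improvedInputSchema = {
  type: "object",
  properties: {
    sprintId: {
      type: "string",
      description: "Identifier of the sprint to report on, e.g. 'sprint_1'",
    },
    includeIssues: {
      type: "boolean",
      default: false,
      description:
        "If true, the response's optional 'issues' array is populated with full issue objects; otherwise only aggregate counts are returned",
    },
  },
  required: ["sprintId"],
} as const;
```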

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Get progress metrics for a specific sprint' clearly states the verb ('Get') and resource ('progress metrics for a specific sprint'), making the purpose understandable. However, it doesn't differentiate from sibling tools like 'get_current_sprint' or 'get_milestone_metrics', which might provide related but different information, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'get_current_sprint' and 'get_milestone_metrics', there's no indication of whether this tool is for historical sprints, real-time metrics, or specific types of progress data, leaving usage ambiguous.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
