
mcp-github-project-manager

get_next_task

Recommends the next task to work on using AI analysis of priorities, dependencies, team capacity, and project state for GitHub project management.

Instructions

Get AI-powered recommendations for the next task to work on based on priorities, dependencies, team capacity, and current project state

Input Schema

Name             Required  Description                                      Default
projectId        No        Filter tasks by specific project ID              —
featureId        No        Filter tasks by specific feature ID              —
assignee         No        Filter tasks for specific team member            —
teamSkills       No        Team skills to match against task requirements   —
sprintCapacity   No        Available hours in current sprint                40
currentPhase     No        Focus on tasks in a specific phase               —
excludeBlocked   Yes       Whether to exclude blocked tasks                 true
maxComplexity    No        Maximum task complexity to consider (1-10)       —
includeAnalysis  Yes       Whether to include detailed AI analysis          true
limit            Yes       Maximum number of tasks to recommend (1-20)      5
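
A hypothetical invocation might look like the following. All values are illustrative; only the defaults for excludeBlocked, includeAnalysis, and limit come from the schema:

```json
{
  "projectId": "proj-123",
  "assignee": "alice",
  "sprintCapacity": 32,
  "maxComplexity": 6,
  "excludeBlocked": true,
  "includeAnalysis": true,
  "limit": 3
}
```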

Implementation Reference

  • Main handler function that executes the get_next_task tool logic, including mock task generation, filtering, prioritization, analysis, and response formatting.
    async function executeGetNextTask(args: GetNextTaskArgs): Promise<MCPResponse> {
      const taskService = new TaskGenerationService();
      
      try {
        // For now, create mock tasks for demonstration
        // In a full implementation, this would integrate with ResourceManager
        const mockTasks = [
          {
            id: 'task-1',
            title: 'Set up project infrastructure',
            description: 'Initialize project structure, CI/CD, and development environment',
            priority: 'high',
            complexity: 4,
            estimatedHours: 8,
            status: 'pending',
            dependencies: [],
            tags: ['setup', 'infrastructure']
          },
          {
            id: 'task-2', 
            title: 'Implement user authentication',
            description: 'Create login, registration, and password reset functionality',
            priority: 'critical',
            complexity: 6,
            estimatedHours: 16,
            status: 'pending',
            dependencies: ['task-1'],
            tags: ['auth', 'security']
          },
          {
            id: 'task-3',
            title: 'Design database schema',
            description: 'Create database tables and relationships for core entities',
            priority: 'high',
            complexity: 5,
            estimatedHours: 12,
            status: 'pending',
            dependencies: ['task-1'],
            tags: ['database', 'design']
          }
        ];
    
        // Apply filters
        let filteredTasks = mockTasks;
        
        if (args.maxComplexity) {
          filteredTasks = filteredTasks.filter(task => task.complexity <= args.maxComplexity!);
        }
        
        if (args.assignee) {
          // Would filter by assignee in real implementation
        }
    
        // Get recommendations (simplified)
        const recommendations = filteredTasks
          .sort((a, b) => {
            // Sort by priority first, then complexity
            const priorityOrder = { critical: 4, high: 3, medium: 2, low: 1 };
            const priorityDiff = (priorityOrder[b.priority as keyof typeof priorityOrder] || 0) - 
                               (priorityOrder[a.priority as keyof typeof priorityOrder] || 0);
            if (priorityDiff !== 0) return priorityDiff;
            return a.complexity - b.complexity; // Prefer lower complexity
          })
          .slice(0, args.limit);
    
        // Calculate sprint fit
        const totalHours = recommendations.reduce((sum, task) => sum + task.estimatedHours, 0);
        const sprintCapacity = args.sprintCapacity || 40;
        const sprintFit = totalHours <= sprintCapacity;
    
        // Generate AI analysis
        const analysis = args.includeAnalysis ? generateTaskAnalysis(recommendations, args) : null;
    
        // Format response
        const summary = formatNextTaskRecommendations(recommendations, analysis, {
          totalHours,
          sprintCapacity,
          sprintFit,
          filtersApplied: getAppliedFilters(args)
        });
        
        return ToolResultFormatter.formatSuccess('get_next_task', {
          summary,
          recommendations,
          analysis,
          metrics: {
            totalTasks: recommendations.length,
            totalHours,
            sprintCapacity,
            sprintFit
          }
        });
    
      } catch (error) {
        process.stderr.write(`Error in get_next_task tool: ${error}\n`);
        return ToolResultFormatter.formatSuccess('get_next_task', {
          error: `Failed to get task recommendations: ${error instanceof Error ? error.message : 'Unknown error'}`,
          success: false
        });
      }
    }
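The sort-and-slice logic in the handler can be exercised in isolation. Below is a minimal sketch of that prioritization step; the `Task` interface and `rankTasks` helper are assumptions introduced here for illustration, with task data mirroring the handler's mock tasks:

```typescript
interface Task {
  id: string;
  priority: "critical" | "high" | "medium" | "low";
  complexity: number;
}

// Rank by priority (critical first), breaking ties with lower complexity,
// then keep at most `limit` tasks — the same ordering the handler applies.
function rankTasks(tasks: Task[], limit: number): Task[] {
  const priorityOrder: Record<Task["priority"], number> = {
    critical: 4,
    high: 3,
    medium: 2,
    low: 1,
  };
  return [...tasks]
    .sort((a, b) => {
      const diff = priorityOrder[b.priority] - priorityOrder[a.priority];
      return diff !== 0 ? diff : a.complexity - b.complexity;
    })
    .slice(0, limit);
}

const sample: Task[] = [
  { id: "task-1", priority: "high", complexity: 4 },
  { id: "task-2", priority: "critical", complexity: 6 },
  { id: "task-3", priority: "high", complexity: 5 },
];

console.log(rankTasks(sample, 3).map((t) => t.id));
// task-2 first (critical), then task-1 ahead of task-3 (lower complexity)
```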
  • Zod schema defining input parameters for the get_next_task tool.
    const getNextTaskSchema = z.object({
      projectId: z.string().optional().describe('Filter tasks by specific project ID'),
      featureId: z.string().optional().describe('Filter tasks by specific feature ID'),
      assignee: z.string().optional().describe('Filter tasks for specific team member'),
      teamSkills: z.array(z.string()).optional().describe('Team skills to match against task requirements'),
      sprintCapacity: z.number().optional().describe('Available hours in current sprint (default: 40)'),
      currentPhase: z.enum(['planning', 'development', 'testing', 'review', 'deployment']).optional()
        .describe('Focus on tasks in specific phase'),
      excludeBlocked: z.boolean().default(true).describe('Whether to exclude blocked tasks'),
      maxComplexity: z.number().min(1).max(10).optional().describe('Maximum task complexity to consider'),
      includeAnalysis: z.boolean().default(true).describe('Whether to include detailed AI analysis'),
      limit: z.number().min(1).max(20).default(5).describe('Maximum number of tasks to recommend')
    });
  • Registration of the get_next_task tool in the central ToolRegistry.
    this.registerTool(getNextTaskTool);
  • Dispatch handler in the main server (src/index.ts:450-451) that calls executeGetNextTask for get_next_task tool invocations.
    case "get_next_task":
      return await executeGetNextTask(args);
  • Helper function to format the next task recommendations into a comprehensive markdown report.
    function formatNextTaskRecommendations(
      tasks: any[], 
      analysis: string | null, 
      metrics: any
    ): string {
      const sections = [
        '# Next Task Recommendations',
        '',
        '## Overview',
        `**Recommended Tasks:** ${tasks.length}`,
        `**Total Effort:** ${metrics.totalHours} hours`,
        `**Sprint Capacity:** ${metrics.sprintCapacity} hours`,
        `**Sprint Fit:** ${metrics.sprintFit ? '✅ Fits in sprint' : '⚠️ Exceeds capacity'}`,
        ''
      ];
    
      // Applied filters
      if (metrics.filtersApplied.length > 0) {
        sections.push(
          '**Applied Filters:**',
          ...metrics.filtersApplied.map((filter: string) => `- ${filter}`),
          ''
        );
      }
    
      // AI Analysis
      if (analysis) {
        sections.push(
          '## AI Analysis',
          analysis,
          ''
        );
      }
    
      // Task recommendations
      if (tasks.length === 0) {
        sections.push(
          '## No Tasks Available',
          'No tasks match your criteria or all tasks are completed/blocked.',
          '',
          '**Suggestions:**',
          '- Remove some filters to see more tasks',
          '- Check if there are blocked tasks that need attention',
          '- Consider adding new features with `add_feature`'
        );
      } else {
        sections.push('## Recommended Tasks');
    
        tasks.forEach((task, index) => {
          sections.push(
            `### ${index + 1}. ${task.title}`,
            `**Priority:** ${task.priority} | **Complexity:** ${task.complexity}/10 | **Effort:** ${task.estimatedHours}h`,
            `**Status:** ${task.status}`,
            ''
          );
    
          if (task.description) {
            sections.push(
              `**Description:** ${task.description}`,
              ''
            );
          }
    
          if (task.dependencies.length > 0) {
            sections.push(
              `**Dependencies:** ${task.dependencies.length} items`,
              ''
            );
          }
    
          if (task.tags.length > 0) {
            sections.push(
              `**Tags:** ${task.tags.join(', ')}`,
              ''
            );
          }
    
          sections.push('---', '');
        });
    
        // Priority breakdown
        const priorityBreakdown = tasks.reduce((acc, task) => {
          acc[task.priority] = (acc[task.priority] || 0) + 1;
          return acc;
        }, {});
    
        sections.push(
          '## Summary',
          '**Priority Breakdown:**',
          ...Object.entries(priorityBreakdown).map(([priority, count]) => 
            `- ${priority}: ${count} task${(count as number) > 1 ? 's' : ''}`
          ),
          ''
        );
      }
    
      // Next steps
      sections.push(
        '## Next Steps',
        '1. Review the recommended tasks and select one to start',
        '2. Use `update_task_lifecycle` to begin work and track progress',
        '3. Use `expand_task` if any task seems too complex',
        '4. Check dependencies before starting work',
        ''
      );
    
      // Related commands
      sections.push(
        '## Related Commands',
        '- `update_task_lifecycle` - Start work and track progress',
        '- `expand_task` - Break down complex tasks',
        '- `analyze_task_complexity` - Get detailed complexity analysis',
        '- `add_feature` - Add new features if no suitable tasks available'
      );
    
      return sections.join('\n');
    }
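The priority breakdown in the summary section is a plain reduce over the recommended tasks. A standalone sketch with hypothetical data; the `priorityBreakdown` helper name is introduced here for illustration:

```typescript
// Count tasks per priority level, e.g. { critical: 1, high: 2 }.
function priorityBreakdown(
  tasks: { priority: string }[]
): Record<string, number> {
  return tasks.reduce<Record<string, number>>((acc, task) => {
    acc[task.priority] = (acc[task.priority] ?? 0) + 1;
    return acc;
  }, {});
}

const counts = priorityBreakdown([
  { priority: "critical" },
  { priority: "high" },
  { priority: "high" },
]);

console.log(counts); // { critical: 1, high: 2 }
```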
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool provides 'AI-powered recommendations' but doesn't clarify key behavioral aspects: whether it's a read-only operation, how it handles missing data, if it requires specific permissions, what the output format looks like, or any rate limits. For a tool with 10 parameters and no output schema, this lack of detail is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the core purpose without unnecessary words. It's front-loaded with the main action ('Get AI-powered recommendations') and includes key contextual elements. However, it could be slightly more concise by avoiding the repetition of 'based on' phrasing, but overall it's appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (10 parameters, no annotations, no output schema), the description is incomplete. It lacks details on behavioral traits, parameter meanings, output format, and usage guidelines. While it states the purpose clearly, it doesn't provide enough context for an agent to effectively select and invoke this tool, especially compared to siblings with similar functions like 'plan_sprint'.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, meaning none of the 10 parameters are documented in the schema. The description mentions general factors like 'priorities, dependencies, team capacity, and current project state', which loosely map to some parameters (e.g., 'teamSkills', 'sprintCapacity', 'currentPhase'), but it doesn't explain what individual parameters do, their expected formats, or how they influence the recommendations. This fails to compensate for the poor schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get AI-powered recommendations for the next task to work on based on priorities, dependencies, team capacity, and current project state.' It specifies the verb ('Get AI-powered recommendations') and resource ('next task'), and mentions key factors like priorities and dependencies. However, it doesn't explicitly distinguish this from sibling tools like 'plan_sprint' or 'analyze_task_complexity', which might have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions factors like 'priorities, dependencies, team capacity, and current project state' but doesn't specify scenarios where this tool is preferred over siblings such as 'plan_sprint' or 'get_current_iteration'. There's no mention of prerequisites, exclusions, or typical use cases, leaving the agent with minimal contextual direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
