
Structured Workflow MCP

by kingdomseed

compare_analyze_guidance

Evaluate and compare different approaches during development to select optimal solutions based on structured analysis criteria.

Instructions

Get guidance for the COMPARE/ANALYZE phase - evaluating approaches

Input Schema


No arguments
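Because the schema declares no properties, a call to this tool needs only the tool name; an empty arguments object is valid. A minimal sketch of such a request, assuming the standard MCP tools/call convention over JSON-RPC (the surrounding client is not shown):

```typescript
// Sketch of a tools/call request for this tool. The envelope fields are
// standard JSON-RPC / MCP conventions, not specific to this server.
const request = {
  jsonrpc: '2.0' as const,
  id: 1,
  method: 'tools/call',
  params: {
    name: 'compare_analyze_guidance',
    // The schema defines no properties, so an empty object is sufficient.
    arguments: {},
  },
};
```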

Implementation Reference

  • The MCP server's request handler dispatches compare_analyze_guidance tool calls to the handlePhaseGuidance function, which returns the phase-specific guidance as JSON
    case 'setup_guidance':
    case 'audit_inventory_guidance':
    case 'compare_analyze_guidance':
    case 'question_determine_guidance':
    case 'refactor_guidance':
    case 'lint_guidance':
    case 'iterate_guidance':
    case 'present_guidance':
      return {
        content: [{
          type: 'text',
          text: JSON.stringify(await handlePhaseGuidance(name, sessionManager), null, 2)
        }]
      };
  • Core handler function for all phase guidance tools including compare_analyze_guidance; routes to directive or suggestive mode based on session configuration
    export async function handlePhaseGuidance(
      phaseName: string,
      sessionManager: SessionManager
    ): Promise<PhaseGuidance> {
      const session = sessionManager.getSession();
      const isDirectiveMode = session?.workflowConfig !== undefined;
      
      // Route to appropriate guidance based on mode
      if (isDirectiveMode) {
        return getDirectiveGuidance(phaseName, sessionManager);
      } else {
        return getSuggestiveGuidance(phaseName, sessionManager);
      }
    }
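The routing decision above hinges on whether the session carries a workflowConfig. A condensed, synchronous sketch of that logic, with mock stand-ins for SessionManager and PhaseGuidance (these simplified types are assumptions, not the server's real API):

```typescript
// Mock types standing in for the server's real SessionManager and
// PhaseGuidance; reduced to the fields the routing check needs.
interface PhaseGuidance {
  phase: string;
  mode: 'directive' | 'suggestive';
}
interface Session {
  workflowConfig?: object;
}
class SessionManager {
  constructor(private session?: Session) {}
  getSession(): Session | undefined {
    return this.session;
  }
}

// Simplified (synchronous) version of the routing shown above:
// a defined workflowConfig selects directive mode, otherwise suggestive.
function routePhaseGuidance(phaseName: string, sm: SessionManager): PhaseGuidance {
  const isDirectiveMode = sm.getSession()?.workflowConfig !== undefined;
  return { phase: phaseName, mode: isDirectiveMode ? 'directive' : 'suggestive' };
}

// With a configured workflow, directive mode is selected:
const directive = routePhaseGuidance('compare_analyze_guidance', new SessionManager({ workflowConfig: {} }));
// Without a session (or without a workflowConfig), suggestive mode is used:
const suggestive = routePhaseGuidance('compare_analyze_guidance', new SessionManager());
```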
  • Input schema definition for the compare_analyze_guidance tool (empty object since no parameters required)
    {
      name: 'compare_analyze_guidance',
      description: 'Get guidance for the COMPARE/ANALYZE phase - evaluating approaches',
      inputSchema: { type: 'object', properties: {} }
    },
  • Function that creates and returns the Tool[] array including compare_analyze_guidance; called during server initialization
    export function createPhaseGuidanceTools(): Tool[] {
      const phaseTools: Tool[] = [
        {
          name: 'setup_guidance',
          description: 'Get guidance for the SETUP phase - initialize workflow and establish patterns',
          inputSchema: { type: 'object', properties: {} }
        },
        {
          name: 'audit_inventory_guidance',
          description: 'Get guidance for the AUDIT_INVENTORY phase - analyze code and catalog changes',
          inputSchema: { type: 'object', properties: {} }
        },
        {
          name: 'compare_analyze_guidance',
          description: 'Get guidance for the COMPARE/ANALYZE phase - evaluating approaches',
          inputSchema: { type: 'object', properties: {} }
        },
        {
          name: 'question_determine_guidance',
          description: 'Get guidance for the QUESTION_DETERMINE phase - clarify and finalize plan',
          inputSchema: { type: 'object', properties: {} }
        },
        {
          name: 'refactor_guidance',
          description: 'Get guidance for the WRITE/REFACTOR phase - implementing changes',
          inputSchema: { type: 'object', properties: {} }
        },
        {
          name: 'lint_guidance',
          description: 'Get guidance for the LINT phase - verifying code quality',
          inputSchema: { type: 'object', properties: {} }
        },
        {
          name: 'iterate_guidance',
          description: 'Get guidance for the ITERATE phase - fixing issues',
          inputSchema: { type: 'object', properties: {} }
        },
        {
          name: 'present_guidance',
          description: 'Get guidance for the PRESENT phase - summarizing work',
          inputSchema: { type: 'object', properties: {} }
        }
      ];
    
      return phaseTools;
    }
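The returned array is typically registered during server initialization and then searched by name at dispatch time. A condensed sketch of that lookup, with the Tool interface reduced to the fields used here (the real type comes from the MCP SDK) and the factory abbreviated to two of the eight phase tools:

```typescript
// Reduced Tool shape for illustration; the real definition lives in the MCP SDK.
interface Tool {
  name: string;
  description: string;
  inputSchema: { type: string; properties: Record<string, unknown> };
}

// Abbreviated factory: two of the eight phase tools shown above.
function createPhaseGuidanceTools(): Tool[] {
  return [
    {
      name: 'setup_guidance',
      description: 'Get guidance for the SETUP phase - initialize workflow and establish patterns',
      inputSchema: { type: 'object', properties: {} },
    },
    {
      name: 'compare_analyze_guidance',
      description: 'Get guidance for the COMPARE/ANALYZE phase - evaluating approaches',
      inputSchema: { type: 'object', properties: {} },
    },
  ];
}

// Looking up a tool by name, as a tools/list or dispatch step might:
const tool = createPhaseGuidanceTools().find(t => t.name === 'compare_analyze_guidance');
```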
  • Static guidance content returned by the tool in suggestive mode, defining instructions, expected outputs, and next steps for the COMPARE_ANALYZE phase
    compare_analyze_guidance: {
      phase: 'COMPARE_ANALYZE',
      objective: 'Evaluate different ways to implement the refactoring',
      instructions: [
        'Consider at least 2-3 different approaches',
        'Think about trade-offs for each approach',
        'Consider factors like complexity, risk, and maintainability',
        'Choose the approach that best fits the requirements',
        'Document why you chose your approach'
      ],
      suggestedApproach: [
        'Start with the simplest approach that could work',
        'Consider a more comprehensive approach',
        'Think about edge cases and error handling',
        'Evaluate performance implications if relevant',
        'Consider future extensibility'
      ],
      expectedOutput: {
        approaches: 'Description of each approach considered',
        prosAndCons: 'Advantages and disadvantages of each',
        recommendation: 'Your chosen approach',
        justification: 'Why this approach is best',
        alternativesIfNeeded: 'Fallback options if issues arise'
      },
      nextPhase: 'Use question_determine_guidance to clarify and finalize your strategy'
    },
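In suggestive mode, this static object is what the dispatch handler serializes into the tool result. A sketch of that final step, with the guidance literal trimmed to two fields for brevity:

```typescript
// Trimmed guidance literal (the full object is shown above).
const guidance = {
  phase: 'COMPARE_ANALYZE',
  objective: 'Evaluate different ways to implement the refactoring',
};

// Mirrors the JSON.stringify call in the dispatch handler: the guidance
// object becomes a single text content item in the tool result.
const result = {
  content: [{ type: 'text' as const, text: JSON.stringify(guidance, null, 2) }],
};

// An agent recovers structured guidance by parsing the text payload:
const parsed = JSON.parse(result.content[0].text);
```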
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool provides guidance but doesn't specify what form that guidance takes (e.g., text, structured data), whether it's static or dynamic, or any operational constraints like rate limits or permissions. This is a significant gap for a tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is front-loaded and wastes no space, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what the guidance output entails, how it's structured, or any behavioral aspects. For a guidance tool, this leaves the agent uncertain about what to expect, reducing effectiveness in tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter information is needed. The description doesn't add parameter details, which is appropriate, and it implies the tool operates without inputs, aligning with the schema. This meets the baseline for tools with no parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Get guidance for the COMPARE/ANALYZE phase - evaluating approaches', which includes a specific verb ('Get guidance') and resource ('COMPARE/ANALYZE phase'). It distinguishes itself from siblings by focusing on a specific phase, though it doesn't explicitly differentiate from similar guidance tools like 'audit_inventory_guidance' or 'setup_guidance'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, timing, or how it differs from other guidance tools in the sibling list, such as 'iterate_guidance' or 'present_guidance'. This leaves the agent without context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
