Glama

Ultra Review

ultra-review

Analyze code for bugs, security, performance, style, and architecture issues using step-by-step workflow review with multiple AI providers.

Instructions

Comprehensive code review with step-by-step workflow analysis

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| task | Yes | What to review in the code | |
| files | No | File paths to review (optional) | |
| focus | No | Review focus area | all |
| provider | No | AI provider to use | |
| model | No | Specific model to use | |
| stepNumber | No | Current step in the review workflow | 1 |
| totalSteps | No | Estimated total steps needed | 3 |
| findings | No | Accumulated findings from the review | "" |
| nextStepRequired | No | Whether another step is needed | true |
| confidence | No | Confidence level in findings | |
| filesChecked | No | Files examined during review | [] |
| issuesFound | No | Issues identified during review | [] |
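To make the multi-step shape of these parameters concrete, here is a hypothetical pair of calls: a first step that relies on the defaults, and a continuation that carries the accumulated workflow state forward. All file paths, findings text, and issue entries below are invented for illustration.

```typescript
// Step 1: open the review; workflow fields may be omitted (defaults apply).
const firstCall = {
  task: "Review the authentication module for security issues",
  files: ["src/auth/login.ts", "src/auth/session.ts"],
  focus: "security",
};

// Step 2: continue, carrying forward the accumulated state from step 1.
const secondCall = {
  ...firstCall,
  stepNumber: 2,
  totalSteps: 3,
  confidence: "low",
  findings: "Session tokens are not rotated after login.",
  filesChecked: ["src/auth/login.ts"],
  issuesFound: [
    {
      severity: "high",
      description: "Missing token rotation",
      location: "src/auth/session.ts",
    },
  ],
  nextStepRequired: true,
};
```

Note that `findings`, `filesChecked`, and `issuesFound` are supplied by the caller on each step; the tool itself is stateless between calls.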

Implementation Reference

  • src/server.ts:395-403 (registration)
    Registers the 'ultra-review' MCP tool, specifying title, description, input schema from CodeReviewSchema, and handler instantiation.
    server.registerTool("ultra-review", {
      title: "Ultra Review", 
      description: "Comprehensive code review with step-by-step workflow analysis",
      inputSchema: CodeReviewSchema.shape,
    }, async (args) => {
      const { AdvancedToolsHandler } = await import("./handlers/advanced-tools");
      const handler = new AdvancedToolsHandler();
      return await handler.handleCodeReview(args);
    });
  • src/handlers/advanced-tools.ts (handler)
    Core handler executing the 'ultra-review' tool logic: parses input, selects an AI provider, builds the workflow context for the current step, generates the analysis, and formats the multi-step response.
      async handleCodeReview(args: unknown): Promise<HandlerResponse> {
        const params = CodeReviewSchema.parse(args);
        const {
          provider: requestedProvider, model: requestedModel,
          stepNumber, totalSteps, nextStepRequired, confidence,
          findings, files, focus, task, filesChecked, issuesFound,
        } = params;

        const config = await this.configManager.getConfig();
        const providerName = requestedProvider || await this.providerManager.getPreferredProvider();
        const provider = await this.providerManager.getProvider(providerName);

        if (!provider) {
          throw new Error('No AI provider configured. Please run: bunx ultra-mcp config');
        }

        try {
          // Build context based on step
          let context = '';
          let requiredActions: string[] = [];

          if (stepNumber === 1) {
            context = `You are performing a comprehensive code review focused on ${focus}.

Task: ${task}
${files ? `Files to review: ${files.join(', ')}` : 'Review all relevant files in the codebase'}

Please begin your systematic code review by:
1. Understanding the code structure and purpose
2. Identifying the main components and their interactions
3. Looking for ${focus === 'all' ? 'any issues including bugs, security vulnerabilities, performance problems, and code quality issues' : `${focus}-related issues`}
4. Documenting your initial findings

Remember to be thorough and consider:
- Obvious issues and bugs
- Security implications
- Performance considerations
- Code maintainability and readability
- Architectural decisions
- Over-engineering or unnecessary complexity`;

            requiredActions = [
              'Read and analyze the specified files or codebase',
              'Understand the overall architecture and design patterns',
              'Identify main components and their responsibilities',
              'Note any immediate concerns or issues',
              'Document initial observations about code quality',
            ];
          } else if (confidence === 'exploring' || confidence === 'low') {
            context = `Continue your code review investigation. You've made initial observations:

${findings}

Files checked so far: ${filesChecked.join(', ')}
Issues found: ${issuesFound.length}

Now dive deeper into:
- Specific code sections that raised concerns
- ${focus === 'security' ? 'Security vulnerabilities like injection, XSS, authentication flaws' : ''}
- ${focus === 'performance' ? 'Performance bottlenecks, inefficient algorithms, resource usage' : ''}
- ${focus === 'architecture' ? 'Architectural issues, coupling, missing abstractions' : ''}
- Edge cases and error handling
- Code that could be simplified or refactored`;

            requiredActions = [
              'Examine problematic code sections in detail',
              'Verify security best practices are followed',
              'Check for performance optimization opportunities',
              'Analyze error handling and edge cases',
              'Look for code duplication and refactoring opportunities',
            ];
          } else {
            context = `Complete your code review. You've thoroughly analyzed the code:

${findings}

Files reviewed: ${filesChecked.join(', ')}
Total issues found: ${issuesFound.length}

Now finalize your review by:
- Summarizing all findings by severity
- Providing specific recommendations for each issue
- Highlighting any positive aspects of the code
- Suggesting priority order for fixes`;

            requiredActions = [
              'Verify all identified issues are documented',
              'Ensure recommendations are actionable and specific',
              'Double-check no critical issues were missed',
              'Prepare final summary with prioritized fixes',
            ];
          }

          const prompt = `${context}\n\nProvide your analysis for step ${stepNumber} of ${totalSteps}.`;

          const fullResponse = await provider.generateText({
            prompt,
            model: requestedModel,
            temperature: 0.3,
            systemPrompt: 'Provide detailed, actionable code review feedback.',
            useSearchGrounding: false,
          });

          // TODO: Implement tracking
          // await trackUsage({
          //   tool: 'ultra-review',
          //   model: provider.getActiveModel(),
          //   provider: provider.getName(),
          //   input_tokens: 0,
          //   output_tokens: 0,
          //   cache_tokens: 0,
          //   total_tokens: 0,
          //   has_credentials: true,
          // });

          const formattedResponse = formatWorkflowResponse(
            stepNumber,
            totalSteps,
            nextStepRequired && confidence !== 'certain',
            fullResponse.text,
            requiredActions
          );

          return {
            content: [{ type: 'text', text: formattedResponse }],
          };
        } catch (error) {
          logger.error('Code review failed:', error);
          throw error;
        }
      }
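The `formatWorkflowResponse` helper is not shown on this page. The sketch below is a hypothetical implementation inferred purely from its call site in the handler (step numbers, a continue flag, the analysis text, and required actions); the project's actual formatting may differ.

```typescript
// Hypothetical sketch -- signature inferred from the call site above,
// not taken from the project's source.
function formatWorkflowResponse(
  stepNumber: number,
  totalSteps: number,
  nextStepRequired: boolean,
  analysis: string,
  requiredActions: string[],
): string {
  const header = `## Review Step ${stepNumber} of ${totalSteps}`;
  const actions = requiredActions.map((a) => `- ${a}`).join("\n");
  const footer = nextStepRequired
    ? `Call ultra-review again with stepNumber ${stepNumber + 1} and your accumulated findings.`
    : "Review complete. No further steps required.";
  return [header, analysis, "### Required Actions", actions, footer].join("\n\n");
}
```

Whatever the real implementation does, the handler guarantees the continue flag is false once `confidence` reaches `'certain'`, since it passes `nextStepRequired && confidence !== 'certain'`.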
  • Zod input schema for 'ultra-review' tool, defining parameters like task, focus areas, provider, and multi-step workflow state (stepNumber, findings, confidence, etc.).
    export const CodeReviewSchema = z.object({
      task: z.string().describe('What to review in the code'),
      files: z.array(z.string()).optional().describe('File paths to review (optional)'),
      focus: z.enum(['bugs', 'security', 'performance', 'style', 'architecture', 'all']).default('all')
        .describe('Review focus area'),
      provider: z.enum(['openai', 'gemini', 'azure', 'grok']).optional()
        .describe('AI provider to use'),
      model: z.string().optional().describe('Specific model to use'),
      
      // Workflow fields
      stepNumber: z.number().min(1).default(1).describe('Current step in the review workflow'),
      totalSteps: z.number().min(1).default(3).describe('Estimated total steps needed'),
      findings: z.string().default('').describe('Accumulated findings from the review'),
      nextStepRequired: z.boolean().default(true).describe('Whether another step is needed'),
      confidence: z.enum(['exploring', 'low', 'medium', 'high', 'very_high', 'almost_certain', 'certain'])
        .optional().describe('Confidence level in findings'),
      filesChecked: z.array(z.string()).default([]).describe('Files examined during review'),
      issuesFound: z.array(z.object({
        severity: z.enum(['critical', 'high', 'medium', 'low']),
        description: z.string(),
        location: z.string().optional(),
      })).default([]).describe('Issues identified during review'),
    });
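The interaction between `stepNumber` and `confidence` is the non-obvious part of this schema: step 1 always gets the initial-survey prompt, later steps get the deep-dive prompt only while confidence is `'exploring'` or `'low'`, and anything else triggers finalization. This illustrative helper (not part of the source) restates that branching in isolation:

```typescript
// Illustrative restatement of the handler's phase selection; the names
// "reviewPhase", "initial", "investigate", and "finalize" are invented here.
type Confidence =
  | "exploring" | "low" | "medium" | "high"
  | "very_high" | "almost_certain" | "certain";

function reviewPhase(
  stepNumber: number,
  confidence?: Confidence,
): "initial" | "investigate" | "finalize" {
  if (stepNumber === 1) return "initial";
  if (confidence === "exploring" || confidence === "low") return "investigate";
  return "finalize";
}
```

One consequence worth noting: because `confidence` is optional, a step 2+ call that omits it falls straight through to the finalization branch.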
  • AdvancedToolsHandler.handle() method dispatches 'ultra-review' method calls to the specific handleCodeReview implementation.
    async handle(request: { method: string; params: { arguments: unknown } }): Promise<CallToolResult> {
      const { method, params } = request;
    
      switch (method) {
        case 'ultra-review':
          return await this.handleCodeReview(params.arguments);
        case 'ultra-analyze':
          return await this.handleCodeAnalysis(params.arguments);
        case 'ultra-debug':
          return await this.handleDebug(params.arguments);
        case 'ultra-plan':
          return await this.handlePlan(params.arguments);
        case 'ultra-docs':
          return await this.handleDocs(params.arguments);
        default:
          throw new Error(`Unknown method: ${method}`);
      }
    }
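Since the tool is stateless, the caller drives the workflow by resubmitting accumulated state each step. The loop below is one plausible way an agent could do that; `callTool` is a hypothetical stand-in for an MCP client's tool-call method, not code from this project.

```typescript
// Hypothetical driver loop for the multi-step workflow. Step responses are
// folded back into `findings` and resubmitted until the last step.
async function runReview(
  callTool: (name: string, args: Record<string, unknown>) => Promise<string>,
  task: string,
): Promise<string> {
  const totalSteps = 3; // matches the schema's default
  let findings = "";
  let lastResponse = "";
  for (let stepNumber = 1; stepNumber <= totalSteps; stepNumber++) {
    lastResponse = await callTool("ultra-review", {
      task,
      stepNumber,
      totalSteps,
      findings,
      nextStepRequired: stepNumber < totalSteps,
      confidence: stepNumber < totalSteps ? "low" : "high",
    });
    findings += `\n${lastResponse}`; // accumulate for the next step
  }
  return lastResponse;
}
```

A real agent would parse each response for issues and adjust `confidence` (and possibly `totalSteps`) as understanding improves, rather than following a fixed schedule.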
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. While 'Comprehensive code review' implies a read-only analysis operation, it does not disclose behavioral traits: whether the tool modifies code, requires authentication, is rate limited, returns structured findings, or handles pagination. The mention of 'step-by-step workflow analysis' hints at iterative behavior but gives no specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose. Every word earns its place by conveying comprehensiveness and workflow analysis. However, it could be slightly more structured by explicitly mentioning key capabilities or limitations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 12 parameters, no annotations, and no output schema, the description is inadequate. It doesn't explain the iterative nature implied by workflow parameters, what 'comprehensive' entails, how findings are returned, or any prerequisites. The agent would struggle to use this effectively without trial and error.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 12 parameters thoroughly. The description adds no parameter-specific information beyond the schema, such as the relationship between 'stepNumber' and 'totalSteps' or how 'findings' accumulates across steps. A baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'Comprehensive code review with step-by-step workflow analysis,' which is a specific verb+resource combination. It distinguishes from obvious siblings like 'review-code' by emphasizing comprehensiveness and workflow analysis, though it doesn't explicitly contrast with all similar tools like 'analyze-code' or 'secaudit'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple sibling tools like 'review-code,' 'analyze-code,' 'secaudit,' and 'ultra-analyze,' there's no indication of what makes 'ultra-review' distinct or when it's preferred over other code analysis tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/RealMikeChong/ultra-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server