design_critique

Analyze design documents, UI/UX mockups, or architectural diagrams to identify usability issues, accessibility concerns, aesthetic inconsistencies, and potential design flaws.

Instructions

Offers a critique of a design document, UI/UX mockup, or architectural diagram, focusing on usability, aesthetics, consistency, accessibility, and potential design flaws.

Input Schema

Name             Required   Description
design_document  Yes        A description or URL to the design document/image
design_type      Yes        Type of design (e.g., 'web UI', 'system architecture', 'mobile app')

Implementation Reference

  • The primary handler function for the 'design_critique' tool. It validates input arguments, sanitizes them, constructs a specialized prompt based on design type, calls the Deepseek API, and returns the critique response.
    export async function handler(args: unknown): Promise<ToolResponse> {
      // Check rate limit first
      if (!checkRateLimit()) {
        return {
          content: [
            {
              type: 'text',
              text: 'Rate limit exceeded. Please try again later.',
            },
          ],
          isError: true,
        };
      }
    
      // Validate arguments
      if (!args || typeof args !== 'object') {
        return {
          content: [
            {
              type: 'text',
              text: 'Invalid arguments provided.',
            },
          ],
          isError: true,
        };
      }
    
      try {
        // Type guard for DesignCritiqueArgs
        if (!('design_document' in args) || !('design_type' in args) ||
            typeof args.design_document !== 'string' || typeof args.design_type !== 'string') {
          return {
            content: [
              {
                type: 'text',
                text: 'Both design_document and design_type are required and must be strings.',
              },
            ],
            isError: true,
          };
        }
    
        const typedArgs = args as DesignCritiqueArgs;
    
        // Sanitize inputs
        const sanitizedDocument = sanitizeInput(typedArgs.design_document);
        const sanitizedType = sanitizeInput(typedArgs.design_type.toLowerCase());
    
        // Get type-specific prompts
        const typePrompts = DESIGN_TYPE_PROMPTS[sanitizedType] || DESIGN_TYPE_PROMPTS.default;
    
        // Create the complete prompt
        const prompt = createPrompt(
          {
            ...BASE_PROMPT_TEMPLATE,
            template: BASE_PROMPT_TEMPLATE.template.replace(
              '{type_specific_prompts}',
              typePrompts
            ),
          },
          {
            design_type: sanitizedType,
            design_document: sanitizedDocument,
          }
        );
    
        // Make the API call
        const response = await makeDeepseekAPICall(prompt, SYSTEM_PROMPT);
    
        if (response.isError) {
          return {
            content: [
              {
                type: 'text',
                text: `Error generating design critique: ${response.errorMessage || 'Unknown error'}`,
              },
            ],
            isError: true,
          };
        }
    
        // Return the formatted response
        return {
          content: [
            {
              type: 'text',
              text: response.text,
            },
          ],
        };
      } catch (error) {
        console.error('Design critique tool error:', error);
        return {
          content: [
            {
              type: 'text',
              text: `Error processing design critique: ${error instanceof Error ? error.message : 'Unknown error'}`,
            },
          ],
          isError: true,
        };
      }
    }
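The handler depends on helpers (`checkRateLimit`, `sanitizeInput`, `createPrompt`) whose implementations are not shown in this listing. A minimal sketch of what `sanitizeInput` and `createPrompt` might look like, purely as an illustrative assumption — the real helpers in the repository may differ:

```typescript
// Hypothetical sketches of the helpers the handler calls; the actual
// implementations in the server's source may differ.

// Strip ASCII control characters and trim surrounding whitespace --
// a minimal sanitization pass.
function sanitizeInput(input: string): string {
  return input.replace(/[\u0000-\u001f\u007f]/g, '').trim();
}

interface PromptTemplate {
  template: string;
}

// Fill {placeholder} slots in a template with the supplied variables,
// matching the '{type_specific_prompts}' substitution seen in the handler.
function createPrompt(
  promptTemplate: PromptTemplate,
  vars: Record<string, string>
): string {
  return Object.entries(vars).reduce(
    (acc, [key, value]) => acc.split(`{${key}}`).join(value),
    promptTemplate.template
  );
}

const prompt = createPrompt(
  { template: 'Critique this {design_type}: {design_document}' },
  { design_type: 'web UI', design_document: 'a checkout page mockup' }
);
```

With this sketch, `prompt` resolves every `{key}` in the template against the variable map, which is the same substitution pattern the handler applies to `BASE_PROMPT_TEMPLATE`.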
  • Tool definition including the input schema for validating arguments: design_document (string) and design_type (string). This schema is used for tool registration.
    export const definition: ToolDefinition = {
      name: 'design_critique',
      description: 'Offers a critique of a design document, UI/UX mockup, or architectural diagram, focusing on usability, aesthetics, consistency, accessibility, and potential design flaws.',
      inputSchema: {
        type: 'object',
        properties: {
          design_document: {
            type: 'string',
            description: 'A description or URL to the design document/image',
          },
          design_type: {
            type: 'string',
            description: "Type of design (e.g., 'web UI', 'system architecture', 'mobile app')",
          },
        },
        required: ['design_document', 'design_type'],
      },
    };
  • src/server.ts:56-64 (registration)
    Registration of the design_critique tool in the MCP server's listTools handler by including designCritique.definition in the tools array.
    this.server.setRequestHandler(ListToolsRequestSchema, async () => ({
      tools: [
        secondOpinion.definition,
        codeReview.definition,
        designCritique.definition,
        writingFeedback.definition,
        brainstormEnhancements.definition,
      ],
    }));
  • src/server.ts:103-111 (registration)
    Handler dispatch for 'design_critique' tool in the callTool request: validates args with isDesignCritiqueArgs and invokes designCritique.handler.
    case "design_critique": {
      if (!isDesignCritiqueArgs(args)) {
        throw new McpError(
          ErrorCode.InvalidParams,
          "Invalid parameters for design critique"
        );
      }
      response = await designCritique.handler(args);
      break;
    }
  • TypeScript interface and type guard for DesignCritiqueArgs used for input validation in server.ts.
    export interface DesignCritiqueArgs {
      design_document: string;
      design_type: string;
    }
    
    export interface WritingFeedbackArgs {
      text: string;
      writing_type: string;
    }
    
    export interface BrainstormEnhancementsArgs {
      concept: string;
    }
    
    // Type guard for tool arguments
    export function isValidToolArgs(args: Record<string, unknown> | undefined, required: string[]): boolean {
      if (!args) return false;
      return required.every(key => key in args && args[key] !== undefined);
    }
    
    // Type guards for specific tool arguments
    export function isCodeReviewArgs(args: unknown): args is CodeReviewArgs {
      if (!args || typeof args !== 'object') return false;
      const a = args as Record<string, unknown>;
      
      // Must have either file_path or code_snippet
      const hasFilePath = 'file_path' in a && (typeof a.file_path === 'string' || a.file_path === undefined);
      const hasCodeSnippet = 'code_snippet' in a && (typeof a.code_snippet === 'string' || a.code_snippet === undefined);
      const hasLanguage = 'language' in a && typeof a.language === 'string';
      
      return hasLanguage && (hasFilePath || hasCodeSnippet);
    }
    
    export function isDesignCritiqueArgs(args: unknown): args is DesignCritiqueArgs {
      if (!args || typeof args !== 'object') return false;
      const a = args as Record<string, unknown>;
      
      return 'design_document' in a && 'design_type' in a &&
             typeof a.design_document === 'string' && typeof a.design_type === 'string';
    }
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It says the tool 'offers a critique' but doesn't disclose behavioral traits such as output format, depth of analysis, whether it's automated or human-like, potential limitations, or how it handles different design types. For a critique tool with zero annotation coverage, this leaves significant gaps in understanding its operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
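One concrete way to close this gap is MCP tool annotations, which let a server declare behavioral hints alongside the description. A sketch of what an annotated definition might look like for this tool — the hint values and expanded description below are assumptions for illustration, not taken from the actual server:

```typescript
// Hypothetical annotated definition. Field names follow the MCP
// ToolAnnotations shape (readOnlyHint, destructiveHint, idempotentHint,
// openWorldHint); the values here are illustrative assumptions.
const annotatedDefinition = {
  name: 'design_critique',
  description:
    'Offers a critique of a design document, UI/UX mockup, or architectural diagram. ' +
    'Read-only: sends the inputs to an external LLM API and returns a single text critique. ' +
    'Rate limited; calls may fail with an error if the limit is exceeded.',
  annotations: {
    readOnlyHint: true,      // does not modify any resources
    destructiveHint: false,  // no irreversible effects
    idempotentHint: false,   // LLM output varies between calls
    openWorldHint: true,     // reaches out to an external API (Deepseek)
  },
};
```

With hints like these, an agent can see before calling that the tool is read-only but non-idempotent and network-dependent — exactly the behavioral traits the prose description currently omits.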

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose and key details without waste. It clearly states what the tool does, the input types, and focus areas, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a critique tool (which involves subjective analysis), the lack of annotations and an output schema, and only two parameters (albeit with full schema coverage), the description is incomplete. It doesn't explain what the critique output looks like, what the tool's limitations are, or how it relates to sibling tools. For a tool that provides feedback, more context on behavior and results is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so both parameters ('design_document' and 'design_type') are already documented. The description adds nothing beyond the schema, such as examples or constraints for parameter values. A baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
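For illustration, the schema itself could carry constraints and example values using standard JSON Schema keywords (`minLength`, `examples`); the specific values below are hypothetical:

```typescript
// Hypothetical enriched schema. JSON Schema's `examples` and `minLength`
// keywords give agents concrete value guidance beyond prose descriptions.
const enrichedInputSchema = {
  type: 'object',
  properties: {
    design_document: {
      type: 'string',
      minLength: 1,
      description: 'A description or URL to the design document/image',
      examples: [
        'https://example.com/checkout-flow.png',
        'A three-tier web app with a React frontend and Postgres backend',
      ],
    },
    design_type: {
      type: 'string',
      description: 'Type of design',
      examples: ['web UI', 'system architecture', 'mobile app'],
    },
  },
  required: ['design_document', 'design_type'],
};
```

Example values in the schema are especially useful for free-form string parameters like `design_type`, where the valid vocabulary is otherwise only hinted at in prose.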

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Offers a critique' of design artifacts, specifying the artifact types (document, mockup, diagram) and focus areas (usability, aesthetics, consistency, accessibility, flaws). The focus on critique rather than ideation distinguishes it from siblings like 'brainstorm_enhancements', but the description never names those alternatives explicitly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through the specified design types and focus areas, suggesting it's for evaluating design quality. However, it doesn't explicitly state when to use this tool versus alternatives like 'second_opinion' (which might overlap) or 'code_review' (for code). No guidance on prerequisites or exclusions is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/cyanheads/mentor-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server