reflect_task

Evaluate and optimize technical analysis results for completeness and alignment with best practices, using high-level pseudocode where necessary to outline logic flow and key steps.

Instructions

Critically review analysis results, evaluate solution completeness, and identify optimization opportunities, ensuring the solution aligns with best practices. If code is needed, use pseudocode format, providing only high-level logic flow and key steps and avoiding complete code.

Input Schema

analysis (required): Comprehensive technical analysis results, including all technical details, dependent components, and implementation plans. If code is needed, use pseudocode format and provide only high-level logic flow and key steps, avoiding complete code.
summary (required): Structured task summary, kept consistent with the analysis phase to ensure continuity.
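As a concrete illustration, a call to reflect_task might pass arguments shaped like the following. The string values here are invented for this sketch; only the length constraints (summary at least 10 characters, analysis at least 100) come from the published schema.

```typescript
// Hypothetical example arguments for reflect_task; values are illustrative,
// but the length constraints match the schema's .min() rules.
const reflectTaskArgs = {
  summary: "Refactor the auth middleware into a standalone, testable module",
  analysis: [
    "Current state: auth logic is duplicated across three route handlers.",
    "Plan: extract a shared middleware; pseudocode: for each request ->",
    "verify token -> attach user -> call the next handler on success.",
  ].join(" "),
};

console.log(reflectTaskArgs.summary.length >= 10);   // summary constraint holds
console.log(reflectTaskArgs.analysis.length >= 100); // analysis constraint holds
```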

Implementation Reference

  • The core handler function for the 'reflect_task' tool. It receives summary and analysis inputs, generates a specialized reflection prompt using getReflectTaskPrompt, and returns it as structured content for the AI model.
    export async function reflectTask({
      summary,
      analysis,
    }: z.infer<typeof reflectTaskSchema>) {
      // Use prompt generator to get the final prompt
      const prompt = getReflectTaskPrompt({
        summary,
        analysis,
      });
    
      return {
        content: [
          {
            type: "text" as const,
            text: prompt,
          },
        ],
      };
    }
  • Zod schema validating the input arguments for reflect_task: 'summary' (min 10 chars) and 'analysis' (min 100 chars). Used for input parsing in the handler and tool registration.
    export const reflectTaskSchema = z.object({
      summary: z
        .string()
        .min(10, {
          message: "Task summary cannot be less than 10 characters, please provide a more detailed description to ensure clear task objectives",
        })
        .describe("Structured task summary, keeping consistent with the analysis phase to ensure continuity"),
      analysis: z
        .string()
        .min(100, {
          message: "Technical analysis content is not detailed enough, please provide complete technical analysis and implementation plan",
        })
        .describe(
          "Comprehensive technical analysis results, including all technical details, dependent components and implementation plans, if code is needed use pseudocode format and only provide high-level logic flow and key steps avoiding complete code"
        ),
    });
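The schema's two rules can be paraphrased without the zod dependency. The helper below is a hypothetical re-statement of the same constraints in plain TypeScript, not part of the server's code:

```typescript
// Hypothetical sketch of reflectTaskSchema's rules: summary must be at
// least 10 characters, analysis at least 100. Returns a list of errors.
function validateReflectTaskArgs(args: {
  summary: string;
  analysis: string;
}): string[] {
  const errors: string[] = [];
  if (args.summary.length < 10) {
    errors.push("Task summary cannot be less than 10 characters");
  }
  if (args.analysis.length < 100) {
    errors.push("Technical analysis content is not detailed enough");
  }
  return errors;
}

// "too short?" is exactly 10 characters, so both constraints pass:
console.log(validateReflectTaskArgs({ summary: "too short?", analysis: "x".repeat(100) }));
// → []
```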
  • src/index.ts:242-247 (registration)
    Tool registration in the MCP server's ListToolsRequestHandler. Registers 'reflect_task' with its description from template and input schema converted to JSON schema.
    {
      name: "reflect_task",
      description: loadPromptFromTemplate(
        "toolsDescription/reflectTask.md"
      ),
      inputSchema: zodToJsonSchema(reflectTaskSchema),
    },
  • src/index.ts:408-418 (invocation)
    Tool invocation handler in the MCP server's CallToolRequestHandler switch statement. Validates arguments using reflectTaskSchema and calls the reflectTask function.
    case "reflect_task":
      parsedArgs = await reflectTaskSchema.safeParseAsync(
        request.params.arguments
      );
      if (!parsedArgs.success) {
        throw new Error(
          `Invalid arguments for tool ${request.params.name}: ${parsedArgs.error.message}`
        );
      }
      result = await reflectTask(parsedArgs.data);
      return result;
  • Helper function that generates the reflection prompt by loading a template, interpolating summary and analysis parameters, and applying custom prompt overrides if available.
    export function getReflectTaskPrompt(params: ReflectTaskPromptParams): string {
      const indexTemplate = loadPromptFromTemplate("reflectTask/index.md");
      const prompt = generatePrompt(indexTemplate, {
        summary: params.summary,
        analysis: params.analysis,
      });
    
      // Load possible custom prompt
      return loadPrompt(prompt, "REFLECT_TASK");
    }
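The interpolation step inside generatePrompt is not shown. A minimal placeholder-substitution sketch, assuming {name}-style tokens (the actual template syntax may differ), could look like:

```typescript
// Hypothetical placeholder interpolation: replaces {key} tokens with
// matching parameter values, leaving unknown tokens untouched.
function interpolate(template: string, params: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (token, key) => params[key] ?? token);
}

const prompt = interpolate(
  "## Summary\n{summary}\n\n## Analysis\n{analysis}",
  { summary: "Refactor auth", analysis: "Extract middleware; pseudocode only." }
);
console.log(prompt);
```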
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions that the tool 'critically reviews' and 'evaluates,' it doesn't describe what the tool actually does behaviorally: does it return suggestions, a score, or a modified analysis? The pseudocode guidance is useful but doesn't explain the tool's output or operational characteristics such as error handling or performance constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences that each serve a clear purpose. The first sentence states the tool's core function, and the second provides important implementation guidance about pseudocode usage. There's no unnecessary repetition or verbose language.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there are no annotations and no output schema, the description is incomplete for a tool with 2 required parameters. It doesn't explain what the tool returns (suggestions, validation results, etc.), nor does it provide behavioral context about how the review process works. The pseudocode guidance is helpful but doesn't compensate for the missing output and behavioral information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description doesn't add any meaningful parameter semantics beyond what's in the schema: it mentions 'analysis results' and 'solution completeness', which align with the schema's 'analysis' and 'summary' parameters, but it provides no additional context about how these parameters should be structured or used.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Critically review analysis results, evaluate solution completeness and identify optimization opportunities, ensuring the solution aligns with best practices.' It specifies the verb 'review' and the resource 'analysis results' with additional objectives. However, it doesn't explicitly differentiate from sibling tools like 'analyze_task' or 'verify_task', which may have overlapping review functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some implied usage context by specifying 'If code is needed, use pseudocode format...', which suggests this tool is for post-analysis review rather than initial analysis. However, it lacks explicit guidance on when to use this tool versus alternatives like 'verify_task' or 'analyze_task', and doesn't mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
