review_file

Analyze code files with Codex and Gemini reviewers to get feedback on quality, security, and best practices for Claude's consideration.

Instructions

Request a code review of a specific file from Codex and Gemini CLIs. Returns feedback from both reviewers for Claude to consider.

Input Schema

Name      | Required | Description                                   | Default
filePath  | Yes      | Path to the file to review                    |
context   | No       | Additional context about the code (optional)  |
reviewers | No       | Which reviewers to use                        | both
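
For orientation, the sketch below shows one way an MCP client could call review_file over stdio using the TypeScript MCP SDK. The launch command, build path, client name, and argument values are illustrative assumptions, not details taken from this page.

  import { Client } from "@modelcontextprotocol/sdk/client/index.js";
  import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

  // Hypothetical client-side call; command, path, and argument values are
  // assumptions used only for illustration.
  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(
    new StdioClientTransport({ command: "node", args: ["dist/index.js"] })
  );

  const result = await client.callTool({
    name: "review_file",
    arguments: {
      filePath: "src/index.ts",          // required: file to review
      context: "MCP server entry point", // optional free-text context
      reviewers: ["both"],               // optional; defaults to both reviewers
    },
  });
  console.log(result.content);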

Implementation Reference

  • The primary handler function for the 'review_file' tool. It reads the file content with fs.readFile, invokes performReview with the file path and any caller-supplied context, and returns the formatted review feedback (a sketch of performReview and formatReviews follows after this list).
    private async handleReviewFile(args: CodeReviewRequest) {
      const { filePath, context, reviewers = ["both"] } = args;
    
      if (!filePath) {
        throw new Error("File path is required");
      }
    
      const code = await fs.readFile(filePath, "utf-8");
      const reviews = await this.performReview(
        code,
        `File: ${filePath}\n${context || ""}`,
        reviewers
      );
    
      return {
        content: [
          {
            type: "text",
            text: this.formatReviews(reviews),
          },
        ],
      };
    }
  • TypeScript interface defining the input parameters for review tools, including filePath required for review_file.
    interface CodeReviewRequest {
      filePath?: string;
      directory?: string;
      code?: string;
      reviewers?: string[];
      context?: string;
    }
  • JSON schema in the tool definition specifying input validation for review_file: requires filePath, optional context and reviewers.
    inputSchema: {
      type: "object",
      properties: {
        filePath: {
          type: "string",
          description: "Path to the file to review",
        },
        context: {
          type: "string",
          description: "Additional context about the code (optional)",
        },
        reviewers: {
          type: "array",
          items: {
            type: "string",
            enum: ["codex", "gemini", "both"],
          },
          description: "Which reviewers to use (default: both)",
        },
      },
      required: ["filePath"],
    },
  • src/index.ts:214-240 (registration)
    Registration of the 'review_file' tool in the getTools() method, which is returned by the ListTools handler.
    {
      name: "review_file",
      description:
        "Request a code review of a specific file from Codex and Gemini CLIs. Returns feedback from both reviewers for Claude to consider.",
      inputSchema: {
        type: "object",
        properties: {
          filePath: {
            type: "string",
            description: "Path to the file to review",
          },
          context: {
            type: "string",
            description: "Additional context about the code (optional)",
          },
          reviewers: {
            type: "array",
            items: {
              type: "string",
              enum: ["codex", "gemini", "both"],
            },
            description: "Which reviewers to use (default: both)",
          },
        },
        required: ["filePath"],
      },
    },
  • Dispatcher case in the CallToolRequestSchema handler that routes 'review_file' calls to the specific handleReviewFile method.
    case "review_file":
      return await this.handleReviewFile(args as CodeReviewRequest);
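
The handler above delegates to performReview and formatReviews, which are not reproduced on this page. The following is a minimal sketch of how they might be implemented, assuming the server shells out to the Codex and Gemini CLIs via child_process; the binary names, the --prompt flag, the prompt wording, and the error handling are assumptions rather than details confirmed by the source.

  import { execFile } from "node:child_process";
  import { promisify } from "node:util";

  const execFileAsync = promisify(execFile);

  interface ReviewResult {
    reviewer: string;
    feedback: string;
  }

  // Hypothetical sketch: run each requested reviewer CLI against a prompt
  // built from the code and context. Binary names and flags are assumptions.
  async function performReview(
    code: string,
    context: string,
    reviewers: string[]
  ): Promise<ReviewResult[]> {
    const wantBoth = reviewers.includes("both");
    const selected = ["codex", "gemini"].filter(
      (r) => wantBoth || reviewers.includes(r)
    );
    const prompt =
      `Review the following code for quality, security, and best practices.\n` +
      `${context}\n\n${code}`;

    return Promise.all(
      selected.map(async (reviewer) => {
        try {
          // Passing the prompt as an argument is a simplification; a real
          // implementation might stream it over stdin to avoid length limits.
          const { stdout } = await execFileAsync(reviewer, ["--prompt", prompt], {
            maxBuffer: 10 * 1024 * 1024,
          });
          return { reviewer, feedback: stdout.trim() };
        } catch (error) {
          return {
            reviewer,
            feedback: `Review failed: ${(error as Error).message}`,
          };
        }
      })
    );
  }

  // Hypothetical sketch: merge reviewer outputs into one text block, which
  // the handler returns as a single text content item.
  function formatReviews(reviews: ReviewResult[]): string {
    return reviews
      .map((r) => `## ${r.reviewer.toUpperCase()} Review\n\n${r.feedback}`)
      .join("\n\n---\n\n");
  }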

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the tool returns feedback from reviewers for Claude to consider, but lacks details on permissions, rate limits, error handling, or what the feedback format entails. For a tool that interacts with external CLIs and returns results, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences that efficiently state the tool's purpose and outcome. It's front-loaded with the main action and avoids unnecessary details, though it could be slightly more structured by explicitly separating purpose from usage context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (interacting with multiple CLIs, returning feedback), lack of annotations, and no output schema, the description is moderately complete. It covers the basic purpose and outcome but misses behavioral details like authentication, error cases, or feedback structure. It's adequate for a minimal understanding but has clear gaps for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: all parameters (filePath, context, reviewers) already carry descriptions in the input schema. The tool description adds no meaning beyond what the schema provides, such as parameter interactions or constraints. A baseline score of 3 is appropriate when the schema handles parameter documentation adequately on its own.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Request a code review of a specific file from Codex and Gemini CLIs.' It specifies the verb ('request a code review'), resource ('specific file'), and reviewers ('Codex and Gemini CLIs'), but doesn't explicitly distinguish it from sibling tools like 'review_code' or 'review_directory', which likely have different scopes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning 'Returns feedback from both reviewers for Claude to consider,' suggesting it's intended for Claude to process feedback. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'review_code' or 'review_directory,' nor does it specify prerequisites or exclusions, leaving the context somewhat vague.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
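
To illustrate the kind of guidance the review calls for, a revised description along the following lines would make the tool boundaries explicit. The sibling tool names come from the review above, their semantics are inferred from the shared CodeReviewRequest interface, and the exact wording is only a suggestion.

  // Hypothetical revised description for the tool registration; wording is
  // illustrative, not the server's actual text.
  const description =
    "Request a code review of a single file from the Codex and Gemini CLIs " +
    "and return their feedback as text for Claude to weigh. Use this for one " +
    "file on disk; use review_code for an inline snippet and review_directory " +
    "to review every file under a path. Requires the reviewer CLIs to be " +
    "installed and authenticated locally.";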
