get_code_review_context

Analyze code changes from Jules AI sessions to generate structured summaries of modifications, organized by file with change types and line counts for efficient review.

Instructions

Review code changes from a Jules session. Returns a structured summary of what changed, organized by file with change types, line counts, and activity IDs. Automatically detects if session is busy (aggregates from activities) or stable (uses final outcome). Can optionally scope to a single activity. For detailed diffs, use show_code_diff.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| sessionId | Yes | The Jules session ID to review. | — |
| activityId | No | Optional activity ID to review changes from a single activity instead of the whole session. | — |
| format | No | Output format: `summary` for an overview with stats, `tree` for directory structure, `detailed` for the full file list, `markdown` for a full session report. | summary |
| filter | No | Filter by change type. | all |
| detail | No | Detail level: `minimal` (files only), `standard` (+ insights/timing), `full` (+ activity counts). | standard |
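To make the parameter constraints concrete, here is a minimal sketch of building a call payload for this tool. It assumes the standard MCP JSON-RPC `tools/call` request shape; the session and activity ID values are hypothetical, and the enum checks simply mirror the schema table above.

```python
import json

# Allowed values taken from the input schema above.
VALID_FORMATS = {"summary", "tree", "detailed", "markdown"}
VALID_DETAILS = {"minimal", "standard", "full"}

def build_call(session_id, activity_id=None, fmt="summary", detail="standard", change_filter=None):
    """Build a tools/call payload for get_code_review_context (sketch)."""
    if fmt not in VALID_FORMATS:
        raise ValueError(f"unknown format: {fmt}")
    if detail not in VALID_DETAILS:
        raise ValueError(f"unknown detail: {detail}")
    arguments = {"sessionId": session_id, "format": fmt, "detail": detail}
    if activity_id is not None:
        # Scope the review to a single activity instead of the whole session.
        arguments["activityId"] = activity_id
    if change_filter is not None:
        arguments["filter"] = change_filter
    return {
        "method": "tools/call",
        "params": {"name": "get_code_review_context", "arguments": arguments},
    }

# Example: review one hypothetical session as a directory tree.
print(json.dumps(build_call("sess-123", fmt="tree"), indent=2))
```

Omitting `activityId` reviews the whole session, matching the default behavior described above.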
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behaviors: it explains how it handles busy vs. stable sessions, returns structured summaries organized by file, and includes change types and line counts. However, it doesn't mention potential limitations like rate limits or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by key features and a clear alternative. Every sentence adds value without redundancy, making it efficient and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 5 parameters, 100% schema coverage, and no output schema, the description is mostly complete. It explains the tool's behavior and usage context well, but could benefit from more detail on output structure or error handling to fully compensate for the lack of annotations and output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage, so the baseline is 3. The description adds some context by mentioning that it "automatically detects if session is busy" or stable and can "optionally scope to a single activity," which loosely relates to the parameters but provides no syntax or format details beyond what the schema already states.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Review code changes', 'Returns a structured summary') and resources ('Jules session'). It distinguishes from sibling tools by mentioning 'show_code_diff' for detailed diffs, making its scope explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs. alternatives: it states 'For detailed diffs, use show_code_diff' and explains when to scope to a single activity vs. the whole session. This gives clear context for selection among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/streetquant/jules-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.