
Context Engine MCP Server

by Kirachon

execute_plan

Execute implementation plan steps to generate and apply code changes using AI, with modes for single steps, ready steps, or full plan execution.

Instructions

Execute steps from an implementation plan, generating code changes.

This tool orchestrates the execution of plan steps, using AI to generate the actual code changes needed for each step.

Execution Modes:

  • single_step: Execute a specific step by number (requires step_number)

  • all_ready: Execute all steps whose dependencies are satisfied

  • full_plan: Execute steps in dependency order (respects max_steps limit)
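
For concreteness, here is how the three modes might be invoked. These are hypothetical argument payloads built from the input schema below; the plan_id value "plan-123" is an illustrative placeholder, not a real plan.

```python
# Hypothetical execute_plan argument payloads, one per execution mode.
# Field names come from the tool's input schema; "plan-123" is a placeholder.
single_step_args = {
    "plan_id": "plan-123",
    "mode": "single_step",
    "step_number": 2,        # required in single_step mode
}

all_ready_args = {
    "plan_id": "plan-123",
    "mode": "all_ready",     # run every step whose dependencies are satisfied
}

full_plan_args = {
    "plan_id": "plan-123",
    "mode": "full_plan",
    "max_steps": 5,          # default cap on steps per call
}
```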

Output:

  • Generated code changes for each step (preview by default)

  • Success/failure status for each step

  • Next steps that are ready to execute

  • Overall progress tracking

You can pass a saved plan_id instead of the full plan JSON.

Important:

  • By default, changes are shown as preview only (apply_changes=false)

  • Set apply_changes=true to actually write the generated code to files

  • Use stop_on_failure=true (default) to halt on first error
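
The safe workflow implied by these defaults is to preview first and apply second. A minimal sketch, assuming a stand-in call_tool helper (your MCP client's real invocation API will differ):

```python
def call_tool(name, arguments):
    """Stand-in for an MCP client's tool invocation; replace with your client's API."""
    return {"tool": name, "arguments": arguments, "status": "ok"}

base = {"plan_id": "plan-123", "mode": "single_step", "step_number": 1}

# 1. Preview: apply_changes defaults to false, so no files are written.
preview = call_tool("execute_plan", base)

# 2. Apply: only after reviewing the preview, opt in to writing files.
result = call_tool("execute_plan", {**base, "apply_changes": True})
```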

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| plan | No | The plan as a JSON string (from create_plan output). Optional if plan_id is provided. | |
| plan_id | No | Plan ID to load from saved plans (alternative to providing plan JSON). | |
| mode | No | Execution mode. | single_step |
| step_number | No | Step number to execute (required for single_step mode). | |
| apply_changes | No | Whether to apply changes to files (preview only when false). | false |
| max_steps | No | Maximum steps to execute in one call. | 5 |
| stop_on_failure | No | Whether to stop on first failure. | true |
| additional_context | No | Additional context to provide to the AI for code generation. | |
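
Putting the defaults and parameter rules together, a client-side helper might assemble a valid arguments dict like this. This is a sketch only: the field names mirror the schema, the two validation rules are inferred from the parameter descriptions, and build_execute_plan_args is not part of any published library.

```python
def build_execute_plan_args(plan=None, plan_id=None, mode="single_step",
                            step_number=None, **overrides):
    """Assemble an execute_plan arguments dict with the documented defaults.

    Client-side sketch: validation rules are inferred from the schema
    descriptions, not enforced by any official client.
    """
    if plan is None and plan_id is None:
        raise ValueError("provide either 'plan' (JSON string) or 'plan_id'")
    if mode == "single_step" and step_number is None:
        raise ValueError("'step_number' is required in single_step mode")
    args = {
        "mode": mode,
        "apply_changes": False,   # documented default: preview only
        "max_steps": 5,
        "stop_on_failure": True,
    }
    if plan is not None:
        args["plan"] = plan
    if plan_id is not None:
        args["plan_id"] = plan_id
    if step_number is not None:
        args["step_number"] = step_number
    args.update(overrides)        # e.g. additional_context, apply_changes
    return args
```

For example, build_execute_plan_args(plan_id="plan-123", step_number=1) returns a preview-only payload with the documented defaults filled in.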
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure and does so effectively. It explains critical behavioral traits: default preview-only execution (apply_changes=false), failure handling (stop_on_failure=true), dependency-aware execution, and progress tracking. It also mentions the ability to load saved plans via plan_id. The only minor gap is the lack of any explicit mention of permissions or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It begins with a clear purpose statement, then organizes information into logical sections (Execution Modes, Output, Important notes). Every sentence adds value; there is no redundant or unnecessary information. The bullet points make key information easily scannable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 8 parameters, no annotations, and no output schema, the description does an excellent job of providing context. It explains execution modes, output format, critical behavioral defaults, and parameter relationships. The only gap is that without an output schema, the description could provide more detail about the exact structure of the returned data (though it does list what information will be included).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds some value by explaining the relationship between mode and step_number ('single_step: Execute a specific step by number (requires step_number)'), clarifying the plan/plan_id alternative ('You can pass a saved plan_id instead of the full plan JSON'), and emphasizing the default behavior of apply_changes. However, it doesn't provide significant additional semantics beyond what's already well-documented in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Execute steps from an implementation plan, generating code changes.' It specifies the verb ('execute'), resource ('steps from an implementation plan'), and outcome ('generating code changes'), distinguishing it from sibling tools like create_plan, refine_plan, or visualize_plan which handle planning rather than execution.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage through the 'Execution Modes' section, explaining when to use single_step, all_ready, or full_plan modes. It mentions alternatives like using plan_id instead of plan JSON. However, it doesn't explicitly state when NOT to use this tool or compare it directly to sibling tools like complete_step or start_step.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
