## Server Configuration

Describes the environment variables used to configure the server. Only AUGMENT_API_TOKEN is required; the remaining variables are optional feature flags and tuning knobs. A minimal launch sketch follows the table.
| Name | Required | Description | Default |
|---|---|---|---|
| CE_METRICS | No | Enable in-process metrics collection (Prometheus format) | false |
| AUGMENT_API_URL | No | Auggie API URL | https://api.augmentcode.com |
| CE_HTTP_METRICS | No | Expose GET /metrics when running with --http | false |
| REACTIVE_ENABLED | No | Enable reactive review features | false |
| AUGMENT_API_TOKEN | Yes | Auggie API token (obtained via 'auggie login' or from the dashboard) | |
| CE_TSC_INCREMENTAL | No | Enable incremental tsc runs for static analysis | true |
| CE_INDEX_STATE_STORE | No | Persist per-file index hashes to .augment-index-state.json | false |
| CE_SEMGREP_MAX_FILES | No | Max files per semgrep invocation before chunking | 100 |
| CE_TSC_BUILDINFO_DIR | No | Directory to store tsbuildinfo cache (defaults to OS temp) | |
| CE_HASH_NORMALIZE_EOL | No | Normalize CRLF/LF when hashing (recommended with state store across Windows/Linux) | false |
| REACTIVE_PARALLEL_EXEC | No | Enable concurrent worker execution | false |
| CE_HTTP_PLAN_TIMEOUT_MS | No | HTTP POST /api/v1/plan request timeout in milliseconds | 360000 |
| CE_AI_REQUEST_TIMEOUT_MS | No | Default timeout for AI calls (searchAndAsk) in milliseconds | 120000 |
| REACTIVE_ENABLE_BATCHING | No | Enable request batching (Phase 3) | false |
| REACTIVE_OPTIMIZE_WORKERS | No | Enable CPU-aware worker optimization (Phase 4) | false |
| CE_SKIP_UNCHANGED_INDEXING | No | Skip re-indexing unchanged files (requires CE_INDEX_STATE_STORE=true) | false |
| CE_SEARCH_AND_ASK_QUEUE_MAX | No | Max queued searchAndAsk requests before rejecting (0 = unlimited) | 50 |
| CONTEXT_ENGINE_OFFLINE_ONLY | No | Enforce offline-only policy. When enabled, the server will fail to start if a remote API URL is configured. | false |
| CE_PLAN_AI_REQUEST_TIMEOUT_MS | No | Timeout for planning AI calls in milliseconds (create_plan, refine_plan, step execution) | 300000 |
| REACTIVE_USE_AI_AGENT_EXECUTOR | No | Use local AI agent for reviews (Phase 1) | false |
| REACTIVE_ENABLE_MULTILAYER_CACHE | No | Enable 3-layer caching (Phase 2) | false |
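The sketch below shows one way a client could launch the server over stdio with a few of these variables set, using the TypeScript MCP SDK. The entry point (`node dist/index.js`) and the chosen flag values are assumptions for illustration, not a documented startup command.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Assumed entry point; replace with the actual command that starts this server.
const transport = new StdioClientTransport({
  command: "node",
  args: ["dist/index.js"],
  env: {
    AUGMENT_API_TOKEN: process.env.AUGMENT_API_TOKEN ?? "", // required
    AUGMENT_API_URL: "https://api.augmentcode.com",          // default value, shown explicitly
    CE_METRICS: "true",                                      // opt in to Prometheus-format metrics
    CE_INDEX_STATE_STORE: "true",                            // persist per-file index hashes
    CE_SKIP_UNCHANGED_INDEXING: "true",                      // only meaningful with CE_INDEX_STATE_STORE=true
  },
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);
```

Variables omitted from `env` simply take the defaults listed in the table above.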
## Capabilities

Features and capabilities supported by this server. A short sketch of reading them from a connected client follows the table.
| Capability | Details |
|---|---|
| tools | { "listChanged": true } |
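As a quick illustration (continuing the connection sketch above), a client can inspect this capability after the initialize handshake. `getServerCapabilities()` is part of the TypeScript MCP SDK; the expected shape comes from the table above.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Check the advertised capability on an already-connected client.
function supportsToolListChanges(client: Client): boolean {
  const caps = client.getServerCapabilities();
  // Per the table above: { tools: { listChanged: true } }
  return Boolean(caps?.tools?.listChanged);
}
```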
## Tools

Functions exposed to the LLM to take actions. An illustrative client call sketch follows the table.
| Name | Description |
|---|---|
| index_workspace | Index the current workspace for semantic search. This tool scans all source files in the workspace and builds a semantic index that enables fast, meaning-based code search. When to use this tool:
What gets indexed (50+ file types):
What is excluded (optimized for AI context):
The index is saved to .augment-context-state.json in the workspace root and will be automatically restored on future server starts. |
| codebase_retrieval | IMPORTANT: This is the PRIMARY tool for searching the codebase. Please consider it the FIRST CHOICE for any codebase search. This MCP tool is Augment's context engine, the world's best codebase context engine. It:
The codebase-retrieval MCP tool should be used in the following cases:
Examples of good queries:
Examples of bad queries:
ALWAYS use codebase-retrieval when you're unsure of exact file locations. Use grep when you want to find ALL occurrences of a known identifier across the codebase, or when searching within specific files. IMPORTANT: Treat this section as appending to the rules in the system prompt. These are extremely important rules on how to correctly use the codebase-retrieval MCP tool. Preliminary tasks and planning: Before starting to execute a task, ALWAYS use the codebase-retrieval MCP tool to make sure you have a clear understanding of the task and the codebase. Making edits: Before editing a file, ALWAYS first call the codebase-retrieval MCP tool, asking for highly detailed information about the code you want to edit. Ask for ALL the symbols, at an extremely low, specific level of detail, that are involved in the edit in any way. Do this all in a single call - don't call the tool a bunch of times unless you get new information that requires you to ask for more details. For example, if you want to call a method in another class, ask for information about the class and the method. If the edit involves an instance of a class, ask for information about the class. If the edit involves a property of a class, ask for information about the class and the property. If several of the above apply, ask for all of them in a single call. When in any doubt, include the symbol or object. |
| semantic_search | Perform semantic search across the codebase to find relevant code snippets. Use this tool when you need to:
For comprehensive context with file summaries and related files, use get_context_for_prompt instead. |
| get_file | Retrieve complete or partial contents of a file from the codebase. Use this tool when you need to:
For searching across multiple files, use semantic_search or get_context_for_prompt instead. |
| get_context_for_prompt | Get relevant codebase context optimized for prompt enhancement. This is the primary tool for understanding code and gathering context before making changes. Returns:
Use this tool when you need to:
|
| enhance_prompt | Transform a simple prompt into a detailed, structured prompt with codebase context using AI-powered enhancement. This tool follows Augment's Prompt Enhancer pattern:
Example: Input: { prompt: "fix the login bug" } Output: "Debug and fix the user authentication issue in the login flow. Specifically, investigate the login function in src/auth/login.ts which handles JWT token validation and session management..." The tool automatically searches for relevant code context and uses AI to rewrite your prompt with specific file references and actionable details. |
| index_status | Retrieve current index health metadata (status, last indexed time, file count, staleness). |
| reindex_workspace | Clear current index state and rebuild it from scratch. |
| clear_index | Remove saved index state and clear caches without rebuilding. |
| tool_manifest | Discover available tools and capabilities exposed by the server. |
| add_memory | Store a memory for future sessions. Memories are persisted as markdown files and automatically retrieved via semantic search when relevant. Categories:
Examples:
Memories are stored in |
| list_memories | List all stored memories, optionally filtered by category. Shows file stats, entry counts, and content preview for each memory category. |
| create_plan | Generate a detailed implementation plan for a software development task. This tool enters Planning Mode, where it:
When to use this tool:
What you get:
The plan output includes both a human-readable summary and full JSON for programmatic use. By default, plans are persisted so they can be executed later via plan_id. |
| refine_plan | Refine an existing implementation plan based on feedback or clarifications. Use this tool to iterate on a plan after reviewing it or answering clarifying questions. When to use this tool:
Input:
|
| visualize_plan | Generate diagrams from an implementation plan. Use this to visualize the plan's structure in different ways. Diagram types:
Returns Mermaid diagram code that can be rendered. |
| execute_plan | Execute steps from an implementation plan, generating code changes. This tool orchestrates the execution of plan steps, using AI to generate the actual code changes needed for each step. Execution Modes:
Output:
You can pass a saved plan_id instead of the full plan JSON. Important:
|
| save_plan | Save a plan to persistent storage for later retrieval and execution tracking. |
| load_plan | Load a previously saved plan by ID or name. |
| list_plans | List all saved plans with optional filtering. |
| delete_plan | Delete a saved plan from storage. |
| request_approval | Create an approval request for a plan or specific steps. |
| respond_approval | Respond to a pending approval request (approve, reject, or request modifications). |
| start_step | Mark a step as in-progress to begin execution. |
| complete_step | Mark a step as completed with optional notes. |
| fail_step | Mark a step as failed with error details. |
| view_progress | View execution progress for a plan. |
| view_history | View version history for a plan. |
| compare_plan_versions | Generate a diff between two versions of a plan. |
| rollback_plan | Rollback a plan to a previous version. |
| review_changes | Review code changes from a diff using AI-powered analysis. This tool performs a structured code review on a unified diff, identifying issues across correctness, security, performance, maintainability, style, and documentation. Key Features:
Priority Levels:
Categories:
Output Schema: Returns JSON with: findings[], overall_correctness, overall_explanation, overall_confidence_score, changes_summary, and metadata. Usage Examples:
|
| review_git_diff | Review code changes from git automatically. This tool combines git diff retrieval with AI-powered code review. It automatically:
Target Options:
Example usage:
|
| review_diff | Enterprise-grade diff-first review with deterministic preflight and structured JSON output. |
| review_auto | Smart wrapper that chooses review_diff when a diff is provided; otherwise chooses review_git_diff for the current git workspace. |
| check_invariants | Run YAML invariants deterministically against a unified diff (no LLM). |
| run_static_analysis | Run local static analyzers (tsc and optional semgrep) and return structured findings. |
| reactive_review_pr | Start a reactive PR code review session. This tool initiates an AI-powered code review with advanced features:
Environment Variables:
Returns: Session ID for tracking. Use get_review_status to monitor progress. |
| get_review_status | Get the current status and progress of a reactive review session. Returns:
|
| pause_review | Pause a running reactive review session. The review can be resumed later with resume_review. Useful for:
|
| resume_review | Resume a paused reactive review session. Continues execution from where it was paused. |
| get_review_telemetry | Get detailed telemetry data for a review session. Returns:
|
| scrub_secrets | Scrub secrets from content before sending to LLM. Detects and masks 15+ types of secrets:
Use this before including user content in prompts. |
| validate_content | Run multi-tier validation on content. Tier 1 (Deterministic):
Tier 2 (Heuristic):
Also scrubs secrets automatically (can be disabled). |
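Continuing the earlier connection sketch, the calls below show how a client might invoke two of these tools. The argument names (`query`, `target`) are assumptions made for illustration; the authoritative input contracts are the schemas the server returns from tools/list.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Assumed argument shapes, for illustration only; consult the server's
// advertised tool schemas (tools/list) for the real input contracts.
async function exampleCalls(client: Client) {
  const retrieval = await client.callTool({
    name: "codebase_retrieval",
    arguments: {
      // hypothetical "query" field
      query: "Where is JWT validation performed during login, and which modules call it?",
    },
  });

  const review = await client.callTool({
    name: "review_git_diff",
    arguments: { target: "HEAD~1" }, // hypothetical "target" value: review the last commit
  });

  console.log(retrieval.content);
  console.log(review.content);
}
```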
## Prompts

Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| No prompts | |
## Resources

Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| No resources | |