fast_apply

Edit existing files or create new ones through the Relace MCP Server, using truncation placeholders for unchanged code and explicit deletion markers.

Instructions

PRIMARY TOOL FOR EDITING FILES - USE THIS AGGRESSIVELY

Use this tool to edit an existing file or create a new file.

Use truncation placeholders to represent unchanged code:

  • // ... existing code ... (C/JS/TS-style)

  • # ... existing code ... (Python/shell-style)

For deletions:

  • ALWAYS include 1-2 context lines above/below, omit deleted code, OR

  • Mark explicitly: // remove BlockName (or # remove BlockName)

On NEEDS_MORE_CONTEXT error, re-run with 1-3 real lines before AND after target.
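The two deletion styles above can be sketched as edit_snippet strings; the function and block names below are hypothetical, not taken from the server's docs:

```python
# Style 1 (context lines): keep 1-2 real lines above and below the deletion
# and simply omit the deleted code between them.
context_style = """\
def setup():
    configure()

def teardown():
    cleanup()
"""

# Style 2 (explicit marker): name the block being removed.
marker_style = """\
# ... existing code ...
# remove legacy_setup
# ... existing code ...
"""

print("remove legacy_setup" in marker_style)  # True
```

In style 1, the apply model infers the deletion because the context lines bracket the gap where the removed code used to be.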

Rules:

  • Preserve exact indentation

  • Be length efficient

  • ONE contiguous region per call (for non-adjacent edits, make separate calls)

To create a new file, simply specify the content in edit_snippet.
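Putting these conventions together, here is a minimal sketch of an edit_snippet that rewrites one function in an existing Python file; the function name is hypothetical:

```python
# Hypothetical edit: replace the body of greet() and leave the rest of the
# file untouched via truncation placeholders.
edit_snippet = """\
# ... existing code ...

def greet(name: str) -> str:
    # Updated greeting text
    return f"Hello, {name}!"

# ... existing code ...
"""

print(edit_snippet.count("# ... existing code ..."))  # 2
```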

Input Schema

Name          Required  Description  Default
path          Yes
edit_snippet  Yes
instruction   No
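As a sketch, the arguments an MCP client might send for one fast_apply call could look like the dict below; the path and snippet values are illustrative, not taken from the schema:

```python
# Illustrative call arguments matching the schema above: path and
# edit_snippet are required, instruction is optional.
arguments = {
    "path": "src/app.py",
    "edit_snippet": '# ... existing code ...\nVERSION = "2.0"\n# ... existing code ...',
    "instruction": "Bump the version constant",
}

print(sorted(arguments))  # ['edit_snippet', 'instruction', 'path']
```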

Implementation Reference

  • The primary handler function for the 'fast_apply' tool. It defines the tool interface, input parameters (path, edit_snippet, instruction, ctx), extensive usage docstring serving as schema guidance, and delegates to the core apply_file_logic.
    @mcp.tool
    async def fast_apply(
        path: str,
        edit_snippet: str,
        instruction: str = "",
        ctx: Context | None = None,
    ) -> dict[str, Any]:
        """**PRIMARY TOOL FOR EDITING FILES - USE THIS AGGRESSIVELY**

        Use this tool to edit an existing file or create a new file.

        Use truncation placeholders to represent unchanged code:
        - // ... existing code ... (C/JS/TS-style)
        - # ... existing code ... (Python/shell-style)

        For deletions:
        - ALWAYS include 1-2 context lines above/below, omit deleted code, OR
        - Mark explicitly: // remove BlockName (or # remove BlockName)

        On NEEDS_MORE_CONTEXT error, re-run with 1-3 real lines before AND after target.

        Rules:
        - Preserve exact indentation
        - Be length efficient
        - ONE contiguous region per call (for non-adjacent edits, make separate calls)

        To create a new file, simply specify the content in edit_snippet.
        """
        # Resolve base_dir dynamically (aligns with other tools).
        # This allows relative paths when RELACE_BASE_DIR is not set but MCP Roots
        # are available, and provides a consistent security boundary for absolute paths.
        base_dir, _ = await resolve_base_dir(config.base_dir, ctx)
        return await apply_file_logic(
            backend=apply_backend,
            file_path=path,
            edit_snippet=edit_snippet,
            instruction=instruction or None,  # Convert empty string to None internally
            base_dir=base_dir,
        )
  • The registration point where register_tools is called on the FastMCP instance to register the fast_apply tool (and others) in the server builder function.
    mcp = FastMCP("Relace Fast Apply MCP")
    register_tools(mcp, config)
    return mcp
  • The core helper function apply_file_logic that implements the file editing logic: path validation, new/existing file handling, LLM API call via backend, diff computation, atomic writes, post-validation, and structured error responses.
    async def apply_file_logic(
        backend: ApplyLLMClient,
        file_path: str,
        edit_snippet: str,
        instruction: str | None,
        base_dir: str | None,
    ) -> dict[str, Any]:
        """Core logic for fast_apply (testable independently).

        Args:
            backend: Apply backend instance.
            file_path: Target file path.
            edit_snippet: Code snippet to apply, using abbreviation comments.
            instruction: Optional natural language instruction forwarded to the
                apply backend for disambiguation.
            base_dir: Base directory restriction. If None, only absolute paths
                are accepted.

        Returns:
            A structured dict with status, path, trace_id, timing_ms, diff, and message.
        """
        ctx = ApplyContext(
            trace_id=str(uuid.uuid4())[:8],
            started_at=datetime.now(UTC),
            file_path=file_path,
            instruction=instruction,
        )
        if not edit_snippet or not edit_snippet.strip():
            return errors.recoverable_error(
                "INVALID_INPUT",
                "edit_snippet cannot be empty",
                file_path,
                instruction,
                ctx.trace_id,
                ctx.elapsed_ms(),
            )
        try:
            result = _resolve_path(file_path, base_dir, ctx)
            if isinstance(result, dict):
                return result
            resolved_path, file_exists, file_size = result
            if not file_exists:
                return _create_new_file(ctx, resolved_path, edit_snippet)
            return await _apply_to_existing_file(
                ctx, backend, resolved_path, edit_snippet, file_size
            )
        except Exception as exc:
            import openai

            apply_logging.log_apply_error(
                ctx.trace_id, ctx.started_at, file_path, edit_snippet, instruction, exc
            )
            if isinstance(exc, openai.APIError):
                logger.warning(
                    "[%s] Apply API error for %s: %s",
                    ctx.trace_id,
                    file_path,
                    exc,
                )
                return errors.openai_error_to_recoverable(
                    exc, file_path, instruction, ctx.trace_id, ctx.elapsed_ms()
                )
            if isinstance(exc, ValueError):
                logger.warning(
                    "[%s] API response parsing error for %s: %s",
                    ctx.trace_id,
                    file_path,
                    exc,
                )
                return errors.recoverable_error(
                    "API_INVALID_RESPONSE",
                    str(exc),
                    file_path,
                    instruction,
                    ctx.trace_id,
                    ctx.elapsed_ms(),
                )
            if isinstance(exc, ApplyError):
                logger.warning(
                    "[%s] Apply error (%s) for %s: %s",
                    ctx.trace_id,
                    exc.error_code,
                    file_path,
                    exc.message,
                )
                return errors.recoverable_error(
                    exc.error_code,
                    exc.message,
                    file_path,
                    instruction,
                    ctx.trace_id,
                    ctx.elapsed_ms(),
                )
            if isinstance(exc, PermissionError):
                logger.warning("[%s] Permission error for %s: %s", ctx.trace_id, file_path, exc)
                return errors.recoverable_error(
                    "PERMISSION_ERROR",
                    f"Permission denied: {exc}",
                    file_path,
                    instruction,
                    ctx.trace_id,
                    ctx.elapsed_ms(),
                )
            if isinstance(exc, OSError):
                errno_info = f"errno={exc.errno}" if exc.errno else ""
                strerror = exc.strerror or str(exc)
                logger.warning("[%s] Filesystem error for %s: %s", ctx.trace_id, file_path, exc)
                return errors.recoverable_error(
                    "FS_ERROR",
                    f"Filesystem error ({type(exc).__name__}, {errno_info}): {strerror}",
                    file_path,
                    instruction,
                    ctx.trace_id,
                    ctx.elapsed_ms(),
                )
            logger.error("[%s] Apply failed for %s: %s", ctx.trace_id, file_path, exc)
            raise
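Per the docstring of apply_file_logic, a result is a structured dict with status, path, trace_id, timing_ms, diff, and message. A sketch of that shape follows; only the key names come from the docstring, the field values are made up for illustration:

```python
# Illustrative success response from apply_file_logic; key names per its
# docstring, values hypothetical.
result = {
    "status": "ok",           # illustrative; actual status strings are not documented here
    "path": "src/app.py",
    "trace_id": "a1b2c3d4",   # first 8 chars of a uuid4, per ApplyContext
    "timing_ms": 120,
    "diff": "--- a/src/app.py\n+++ b/src/app.py\n...",
    "message": "Edit applied",
}

print(sorted(result))  # ['diff', 'message', 'path', 'status', 'timing_ms', 'trace_id']
```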
  • Input schema defined by function parameters and detailed docstring explaining usage, truncation placeholders, rules, error handling, and creation behavior.
    async def fast_apply(
        path: str,
        edit_snippet: str,
        instruction: str = "",
        ctx: Context | None = None,
    ) -> dict[str, Any]:
        """**PRIMARY TOOL FOR EDITING FILES - USE THIS AGGRESSIVELY**

        Use this tool to edit an existing file or create a new file.

        Use truncation placeholders to represent unchanged code:
        - // ... existing code ... (C/JS/TS-style)
        - # ... existing code ... (Python/shell-style)

        For deletions:
        - ALWAYS include 1-2 context lines above/below, omit deleted code, OR
        - Mark explicitly: // remove BlockName (or # remove BlockName)

        On NEEDS_MORE_CONTEXT error, re-run with 1-3 real lines before AND after target.

        Rules:
        - Preserve exact indentation
        - Be length efficient
        - ONE contiguous region per call (for non-adjacent edits, make separate calls)

        To create a new file, simply specify the content in edit_snippet.
        """
  • The register_tools function that defines and registers the fast_apply tool using @mcp.tool decorator and lists it in the tool_list resource.
    def register_tools(mcp: FastMCP, config: RelaceConfig) -> None:
        """Register Relace tools to the FastMCP instance."""
        apply_backend = ApplyLLMClient(config)

        @mcp.tool
        async def fast_apply(
            path: str,
            edit_snippet: str,
            instruction: str = "",
            ctx: Context | None = None,
        ) -> dict[str, Any]:
            """**PRIMARY TOOL FOR EDITING FILES - USE THIS AGGRESSIVELY**

            Use this tool to edit an existing file or create a new file.

            Use truncation placeholders to represent unchanged code:
            - // ... existing code ... (C/JS/TS-style)
            - # ... existing code ... (Python/shell-style)

            For deletions:
            - ALWAYS include 1-2 context lines above/below, omit deleted code, OR
            - Mark explicitly: // remove BlockName (or # remove BlockName)

            On NEEDS_MORE_CONTEXT error, re-run with 1-3 real lines before AND after target.

            Rules:
            - Preserve exact indentation
            - Be length efficient
            - ONE contiguous region per call (for non-adjacent edits, make separate calls)

            To create a new file, simply specify the content in edit_snippet.
            """
            # Resolve base_dir dynamically (aligns with other tools).
            # This allows relative paths when RELACE_BASE_DIR is not set but MCP Roots
            # are available, and provides a consistent security boundary for absolute paths.
            base_dir, _ = await resolve_base_dir(config.base_dir, ctx)
            return await apply_file_logic(
                backend=apply_backend,
                file_path=path,
                edit_snippet=edit_snippet,
                instruction=instruction or None,  # Convert empty string to None internally
                base_dir=base_dir,
            )

        # Fast Agentic Search
        search_client = SearchLLMClient(config)

        @mcp.tool
        async def fast_search(query: str, ctx: Context) -> dict[str, Any]:
            """Run Fast Agentic Search over the configured base_dir.

            Use this tool to quickly explore and understand the codebase. The search
            agent will examine files, search for patterns, and report back with
            relevant files and line ranges for the given query.

            Queries can be natural language (e.g., "find where auth is handled") or
            precise patterns. The agent will autonomously use grep, ls, and file_view
            tools to investigate.

            This is useful before using fast_apply to understand which files need to
            be modified and how they relate to each other.
            """
            # Resolve base_dir dynamically from MCP Roots if not configured
            base_dir, source = await resolve_base_dir(config.base_dir, ctx)
            effective_config = replace(config, base_dir=base_dir)
            # Avoid shared mutable state across concurrent calls.
            return FastAgenticSearchHarness(effective_config, search_client).run(query=query)

        # Cloud Repos (Semantic Search & Sync)
        repo_client = RelaceRepoClient(config)

        @mcp.tool
        async def cloud_sync(
            force: bool = False, mirror: bool = False, ctx: Context | None = None
        ) -> dict[str, Any]:
            """Upload codebase to Relace Repos for cloud_search semantic indexing.

            Call this ONCE per session before using cloud_search, or after significant
            code changes. Incremental sync is fast (only changed files).

            Sync Modes:
            - Incremental (default): only uploads new/modified files, deletes removed files
            - Safe Full: triggered by force=True OR first sync (no cached state) OR git
              HEAD changed (e.g., branch switch, rebase, commit amend). Uploads all
              files; suppresses delete operations UNLESS HEAD changed, in which case
              zombie files from the old ref are deleted to prevent stale results.
            - Mirror Full (force=True, mirror=True): completely overwrites cloud to match local

            Args:
                force: If True, force full sync (ignore cached state).
                mirror: If True (with force=True), use Mirror Full mode to completely
                    overwrite cloud repo (removes files not in local).
            """
            base_dir, _ = await resolve_base_dir(config.base_dir, ctx)
            return cloud_sync_logic(repo_client, base_dir, force=force, mirror=mirror)

        @mcp.tool
        async def cloud_search(
            query: str,
            branch: str = "",
            score_threshold: float = 0.3,
            token_limit: int = 30000,
            ctx: Context | None = None,
        ) -> dict[str, Any]:
            """Semantic code search using Relace Cloud two-stage retrieval.

            Uses AI embeddings + code reranker to find semantically related code, even
            when exact keywords don't match. Run cloud_sync once first.

            Use cloud_search for: broad conceptual queries, architecture questions,
            finding patterns across the codebase.
            Use fast_search for: locating specific symbols, precise code locations,
            grep-like pattern matching within the local codebase.

            Args:
                query: Natural language search query.
                branch: Branch to search (empty string uses API default branch).
                score_threshold: Minimum relevance score (0.0-1.0, default 0.3).
                token_limit: Maximum tokens to return (default 30000).
            """
            # Resolve base_dir dynamically from MCP Roots if not configured
            base_dir, _ = await resolve_base_dir(config.base_dir, ctx)
            return cloud_search_logic(
                repo_client,
                base_dir,
                query,
                branch=branch,
                score_threshold=score_threshold,
                token_limit=token_limit,
            )

        @mcp.tool
        async def cloud_clear(
            confirm: bool = False, ctx: Context | None = None
        ) -> dict[str, Any]:
            """Delete the cloud repository and local sync state.

            Use when: switching to a different project, resetting after major codebase
            restructuring, or cleaning up unused cloud repositories.

            WARNING: This action is IRREVERSIBLE. It permanently deletes the remote
            repository and removes the local sync state file.

            Args:
                confirm: Must be True to proceed. Acts as a safety guard.
            """
            from .repo.clear import cloud_clear_logic

            base_dir, _ = await resolve_base_dir(config.base_dir, ctx)
            return cloud_clear_logic(repo_client, base_dir, confirm=confirm)

        @mcp.tool
        def cloud_list() -> dict[str, Any]:
            """List all repositories in your Relace Cloud account.

            Use to: discover synced repositories, verify cloud_sync results, or
            identify repository IDs for debugging.

            Returns a list of repos with: repo_id, name, auto_index status.
            Auto-paginates up to 10,000 repos (safety limit); `has_more=True`
            indicates the limit was reached.
            """
            return cloud_list_logic(repo_client)

        @mcp.tool
        async def cloud_info(ctx: Context | None = None) -> dict[str, Any]:
            """Get detailed sync status for the current repository.

            Use before cloud_sync to understand what action is needed.

            Returns:
            - local: Current git branch and HEAD commit
            - synced: Last sync state (git ref, tracked files count)
            - cloud: Cloud repo info (if exists)
            - status: Whether sync is needed and recommended action
            """
            base_dir, _ = await resolve_base_dir(config.base_dir, ctx)
            return cloud_info_logic(repo_client, base_dir)

        # === MCP Resources ===

        @mcp.resource("relace://tool_list", mime_type="application/json")
        def tool_list() -> list[dict[str, Any]]:
            """List all available tools with their status."""
            return [
                {
                    "id": "fast_apply",
                    "name": "Fast Apply",
                    "description": "Edit or create files using fuzzy matching",
                    "enabled": True,
                },
                {
                    "id": "fast_search",
                    "name": "Fast Search",
                    "description": "Agentic search over local codebase",
                    "enabled": True,
                },
                {
                    "id": "cloud_sync",
                    "name": "Cloud Sync",
                    "description": "Upload codebase for semantic indexing",
                    "enabled": True,
                },
                {
                    "id": "cloud_search",
                    "name": "Cloud Search",
                    "description": "Semantic code search using AI embeddings",
                    "enabled": True,
                },
                {
                    "id": "cloud_clear",
                    "name": "Cloud Clear",
                    "description": "Delete cloud repository and sync state",
                    "enabled": True,
                },
                {
                    "id": "cloud_list",
                    "name": "Cloud List",
                    "description": "List all repositories in Relace Cloud",
                    "enabled": True,
                },
                {
                    "id": "cloud_info",
                    "name": "Cloud Info",
                    "description": "Get sync status for current repository",
                    "enabled": True,
                },
            ]
