content_replace
by SDGLBL

Replace text patterns across multiple files in a directory. Search for specific text and replace it with new content, with options to preview changes before applying them.

Instructions

Replace a pattern in file contents across multiple files.

Searches for text patterns across all files in the specified directory that match the file pattern and replaces them with the specified text. Can be run in dry-run mode to preview changes without applying them. Only works within allowed directories.

Input Schema

Name         | Required | Description                                            | Default
-------------|----------|--------------------------------------------------------|--------
pattern      | Yes      | Text pattern to search for in files                    |
replacement  | Yes      | Text to replace the pattern with (can be empty string) |
path         | Yes      | Path to file or directory to search in                 |
file_pattern | No       | File name pattern to match                             | *
dry_run      | No       | If True, only preview changes without modifying files  | False
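Judging from the `fnmatch.fnmatch` calls in the implementation quoted below, `file_pattern` is interpreted as a shell-style glob, not a regular expression. A small illustration of that matching behavior:

```python
import fnmatch

# file_pattern appears to use shell-style globbing (fnmatch), not regex
print(fnmatch.fnmatch("main.py", "*.py"))    # True
print(fnmatch.fnmatch("main.pyc", "*.py"))   # False: ".pyc" does not end the pattern
print(fnmatch.fnmatch("notes.txt", "*"))     # True: the default "*" matches all files
```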

Output Schema

Name   | Required | Description | Default
-------|----------|-------------|--------
result | Yes      |             |

Implementation Reference

  • The primary handler method that executes the content replacement logic, handling file/directory paths, pattern matching, dry-run mode, and permission checks.
    @override
    async def call(
        self,
        ctx: MCPContext,
        **params: Unpack[ContentReplaceToolParams],
    ) -> str:
        """Execute the tool with the given parameters.
    
        Args:
            ctx: MCP context
            **params: Tool parameters
    
        Returns:
            Tool result
        """
        tool_ctx = self.create_tool_context(ctx)
    
        # Extract parameters
        pattern: Pattern = params["pattern"]
        replacement: Replacement = params["replacement"]
        path: SearchPath = params["path"]
        file_pattern = params.get("file_pattern", "*")  # Default to all files
        dry_run = params.get("dry_run", False)  # Default to False
    
        path_validation = self.validate_path(path)
        if path_validation.is_error:
            await tool_ctx.error(path_validation.error_message)
            return f"Error: {path_validation.error_message}"
    
        # file_pattern and dry_run may be omitted; params.get() supplies their defaults
    
        await tool_ctx.info(
            f"Replacing pattern '{pattern}' with '{replacement}' in files matching '{file_pattern}' in {path}"
        )
    
        # Check if path is allowed
        allowed, error_msg = await self.check_path_allowed(path, tool_ctx)
        if not allowed:
            return error_msg
    
        try:
            input_path = Path(path)
    
            # Check if path exists
            exists, error_msg = await self.check_path_exists(path, tool_ctx)
            if not exists:
                return error_msg
    
            # Find matching files
            matching_files: list[Path] = []
    
            # Process based on whether path is a file or directory
            if input_path.is_file():
                # Single file search
                if file_pattern == "*" or fnmatch.fnmatch(
                    input_path.name, file_pattern
                ):
                    matching_files.append(input_path)
                    await tool_ctx.info(f"Searching single file: {path}")
                else:
                    await tool_ctx.info(
                        f"File does not match pattern '{file_pattern}': {path}"
                    )
                    return f"File does not match pattern '{file_pattern}': {path}"
            elif input_path.is_dir():
                # Directory search: walk the tree once, filtering as we go
                await tool_ctx.info(f"Finding files in directory: {path}")

                for entry in input_path.rglob("*"):
                    if not entry.is_file():
                        continue
                    if not self.is_path_allowed(str(entry)):
                        continue
                    if file_pattern == "*" or fnmatch.fnmatch(
                        entry.name, file_pattern
                    ):
                        matching_files.append(entry)
    
                await tool_ctx.info(f"Found {len(matching_files)} matching files")
            else:
                # This shouldn't happen since we already checked for existence
                await tool_ctx.error(f"Path is neither a file nor a directory: {path}")
                return f"Error: Path is neither a file nor a directory: {path}"
    
            # Report progress
            total_files = len(matching_files)
            await tool_ctx.info(f"Processing {total_files} files")
    
            # Process files
            results: list[str] = []
            files_modified = 0
            replacements_made = 0
    
            for i, file_path in enumerate(matching_files):
                # Report progress every 10 files
                if i % 10 == 0:
                    await tool_ctx.report_progress(i, total_files)
    
                try:
                    # Read file
                    with open(file_path, "r", encoding="utf-8") as f:
                        content = f.read()
    
                    # Count occurrences
                    count = content.count(pattern)
    
                    if count > 0:
                        # Replace pattern
                        new_content = content.replace(pattern, replacement)
    
                        # Add to results
                        replacements_made += count
                        files_modified += 1
                        results.append(f"{file_path}: {count} replacements")
    
                        # Write file if not a dry run
                        if not dry_run:
                            with open(file_path, "w", encoding="utf-8") as f:
                                f.write(new_content)
    
                except UnicodeDecodeError:
                    # Skip binary files
                    continue
                except Exception as e:
                    await tool_ctx.warning(f"Error processing {file_path}: {str(e)}")
    
            # Final progress report
            await tool_ctx.report_progress(total_files, total_files)
    
            if replacements_made == 0:
                return f"No occurrences of pattern '{pattern}' found in files matching '{file_pattern}' in {path}"
    
            if dry_run:
                await tool_ctx.info(
                    f"Dry run: {replacements_made} replacements would be made in {files_modified} files"
                )
                message = f"Dry run: {replacements_made} replacements of '{pattern}' with '{replacement}' would be made in {files_modified} files:"
            else:
                await tool_ctx.info(
                    f"Made {replacements_made} replacements in {files_modified} files"
                )
                message = f"Made {replacements_made} replacements of '{pattern}' with '{replacement}' in {files_modified} files:"
    
            return message + "\n\n" + "\n".join(results)
        except Exception as e:
            await tool_ctx.error(f"Error replacing content: {str(e)}")
            return f"Error replacing content: {str(e)}"
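Because the handler relies on `str.count` and `str.replace`, `pattern` is treated as a literal substring rather than a regular expression. A minimal sketch of the core count-and-replace step:

```python
content = "foo(x) and foo(y)"
pattern, replacement = "foo", "bar"

# Literal substring semantics: no regex metacharacters are interpreted
count = content.count(pattern)
new_content = content.replace(pattern, replacement)

print(count)        # 2
print(new_content)  # bar(x) and bar(y)
```

This is why a pattern like `foo.*` would only match the literal characters `foo.*`, not "foo followed by anything".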
  • Input schema definition using TypedDict and Annotated types with Pydantic Fields for parameter validation and descriptions.
    class ContentReplaceToolParams(TypedDict):
        """Parameters for the ContentReplaceTool.
    
        Attributes:
            pattern: Text pattern to search for in files
            replacement: Text to replace the pattern with (can be empty string)
            path: Path to file or directory to search in
            file_pattern: File name pattern to match (default: all files)
            dry_run: If True, only preview changes without modifying files
        """
    
        pattern: Pattern
        replacement: Replacement
        path: SearchPath
        file_pattern: FilePattern
        dry_run: DryRun
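A simplified sketch of how the handler reads optional keys from such a TypedDict; this omits the Annotated/Pydantic Field metadata and substitutes plain `str`/`bool` for the `Pattern`/`SearchPath`/`FilePattern`/`DryRun` aliases:

```python
from typing import TypedDict

class Params(TypedDict, total=False):
    pattern: str
    replacement: str
    path: str
    file_pattern: str
    dry_run: bool

def extract(params: Params) -> tuple[str, bool]:
    # Optional keys are read with .get() and a default, as in call()
    return params.get("file_pattern", "*"), params.get("dry_run", False)

print(extract({"pattern": "foo", "replacement": "bar", "path": "."}))
# ('*', False)
```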
  • Tool registration method that defines the MCP tool wrapper function with @mcp_server.tool decorator and parameter types.
    @override
    def register(self, mcp_server: FastMCP) -> None:
        """Register this content replace tool with the MCP server.
    
        Creates a wrapper function with explicitly defined parameters that match
        the tool's parameter schema and registers it with the MCP server.
    
        Args:
            mcp_server: The FastMCP server instance
        """
        tool_self = self  # Create a reference to self for use in the closure
    
        @mcp_server.tool(name=self.name, description=self.description)
        async def content_replace(
            ctx: MCPContext,
            pattern: Pattern,
            replacement: Replacement,
            path: SearchPath,
            file_pattern: FilePattern,
            dry_run: DryRun,
        ) -> str:
            ctx = get_context()  # Note: shadows the injected ctx parameter with the current context
            return await tool_self.call(
                ctx,
                pattern=pattern,
                replacement=replacement,
                path=path,
                file_pattern=file_pattern,
                dry_run=dry_run,
            )
  • Registers all filesystem tools, including ContentReplaceTool, by instantiating them and calling ToolRegistry.register_tools.
    def register_filesystem_tools(
        mcp_server: FastMCP,
        permission_manager: PermissionManager,
    ) -> list[BaseTool]:
        """Register all filesystem tools with the MCP server.
    
        Args:
            mcp_server: The FastMCP server instance
            permission_manager: Permission manager for access control
    
        Returns:
            List of registered tools
        """
        tools = get_filesystem_tools(permission_manager)
        ToolRegistry.register_tools(mcp_server, tools)
        return tools
  • get_filesystem_tools function that instantiates ContentReplaceTool with permission_manager.
    return [
        ReadTool(permission_manager),
        Write(permission_manager),
        Edit(permission_manager),
        MultiEdit(permission_manager),
        DirectoryTreeTool(permission_manager),
        Grep(permission_manager),
        ContentReplaceTool(permission_manager),
        GrepAstTool(permission_manager),
    ]
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it mentions the dry-run mode for previewing changes, the multi-file scope, and the directory restriction. It doesn't cover potential side effects like backup behavior or error handling, but provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in four sentences, each adding distinct value: purpose, scope, dry-run feature, and constraint. There's no redundant information, and it's front-loaded with the core functionality, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (file operations with pattern replacement), no annotations, and the presence of an output schema (which handles return values), the description is largely complete. It covers the what, how, and constraints, though it could benefit from mentioning authentication or rate limits if applicable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds minimal parameter semantics beyond the schema—it implies 'pattern' and 'replacement' usage and mentions 'file_pattern' matching, but doesn't provide additional context like regex support or path validation rules that aren't in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Replace a pattern in file contents across multiple files') and distinguishes it from siblings like 'grep' (search only) and 'edit' (single file editing). It specifies the scope ('across multiple files') and resource ('file contents'), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Searches for text patterns across all files...') and mentions constraints ('Only works within allowed directories'). However, it doesn't explicitly state when NOT to use it or name specific alternatives among siblings like 'multi_edit' or 'write' for comparison.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
