
MCP Filesystem Server

find_duplicate_files

Identify duplicate files within a directory by comparing file sizes and contents. Specify a starting path, choose whether to search subdirectories recursively, set a minimum file size, and receive results in text or JSON format.

Instructions

Find duplicate files by comparing file sizes and contents.

Args:
    path: Starting directory
    recursive: Whether to search subdirectories
    min_size: Minimum file size to consider (bytes)
    exclude_patterns: Optional patterns to exclude
    max_files: Maximum number of files to scan
    format: Output format ('text' or 'json')
    ctx: MCP context

Returns:
    Duplicate file information

Input Schema

Name             | Required | Description | Default
path             | Yes      |             |
recursive        | No       |             |
min_size         | No       |             |
exclude_patterns | No       |             |
max_files        | No       |             |
format           | No       |             | text
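
For orientation, here is a minimal sketch of the arguments an agent might pass when calling this tool. The values are hypothetical, and exclude_patterns entries are regular expressions (see the reference implementation below):

    # Illustrative arguments for a find_duplicate_files call; values are hypothetical.
    arguments = {
        "path": "/projects/photos",
        "recursive": True,
        "min_size": 1024,
        "exclude_patterns": [r"\.git/", r"node_modules"],
        "max_files": 500,
        "format": "json",
    }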

Implementation Reference

  • Core handler implementing the duplicate detection: validates the path, scans the directory (recursively if requested), groups files by size, computes MD5 hashes only for same-size candidates, and returns a dict mapping each hash to its list of identical file paths. A simplified standalone sketch of this strategy follows the listing.
    async def find_duplicate_files(
        self,
        root_path: Union[str, Path],
        recursive: bool = True,
        min_size: int = 1,
        exclude_patterns: Optional[List[str]] = None,
        max_files: int = 1000,
    ) -> Dict[str, List[str]]:
        """Find duplicate files by comparing file sizes and contents.
    
        Args:
            root_path: Starting directory
            recursive: Whether to search subdirectories
            min_size: Minimum file size to consider (bytes)
            exclude_patterns: Optional patterns to exclude
            max_files: Maximum number of files to scan
    
        Returns:
            Dictionary mapping file hash to list of identical files
    
        Raises:
            ValueError: If root_path is outside allowed directories
        """
        import hashlib
    
        abs_path, allowed = await self.validator.validate_path(root_path)
        if not allowed:
            raise ValueError(f"Path outside allowed directories: {root_path}")
    
        if not abs_path.is_dir():
            raise ValueError(f"Not a directory: {root_path}")
    
        # Compile exclude patterns if provided
        exclude_regexes = []
        if exclude_patterns:
            for exclude in exclude_patterns:
                try:
                    exclude_regexes.append(re.compile(exclude))
                except re.error:
                    logger.warning(f"Invalid exclude pattern: {exclude}")
    
        # First, group files by size
        size_groups: Dict[int, List[Path]] = {}
        files_processed = 0
    
        async def scan_for_sizes(dir_path: Path) -> None:
            nonlocal files_processed
    
            if files_processed >= max_files:
                return
    
            try:
                entries = await anyio.to_thread.run_sync(list, dir_path.iterdir())
    
                for entry in entries:
                    if files_processed >= max_files:
                        return
    
                    # Skip if matched by exclude pattern
                    path_str = str(entry)
                    excluded = False
                    for exclude_re in exclude_regexes:
                        if exclude_re.search(path_str):
                            excluded = True
                            break
    
                    if excluded:
                        continue
    
                    try:
                        if entry.is_file():
                            size = entry.stat().st_size
                            if size >= min_size:
                                if size not in size_groups:
                                    size_groups[size] = []
                                size_groups[size].append(entry)
                                files_processed += 1
    
                        elif entry.is_dir() and recursive:
                            # Check if this path is still allowed
                            (
                                entry_abs,
                                entry_allowed,
                            ) = await self.validator.validate_path(entry)
                            if entry_allowed:
                                await scan_for_sizes(entry)
    
                    except (PermissionError, FileNotFoundError):
                        # Skip entries we can't access
                        pass
    
            except (PermissionError, FileNotFoundError):
                # Skip directories we can't access
                pass
    
        await scan_for_sizes(abs_path)
    
        # Now, for each size group with multiple files, compute and compare hashes
        duplicates: Dict[str, List[str]] = {}
    
        for size, files in size_groups.items():
            if len(files) < 2:
                continue
    
            # Group files by hash
            hash_groups: Dict[str, List[Path]] = {}
    
            for file_path in files:
                try:
                    # Compute file hash
                    file_bytes = await anyio.to_thread.run_sync(file_path.read_bytes)
                    file_hash = hashlib.md5(file_bytes).hexdigest()
    
                    if file_hash not in hash_groups:
                        hash_groups[file_hash] = []
                    hash_groups[file_hash].append(file_path)
    
                except (PermissionError, FileNotFoundError):
                    # Skip files we can't access
                    pass
    
            # Add duplicate groups to results
            for file_hash, hash_files in hash_groups.items():
                if len(hash_files) >= 2:
                    duplicates[file_hash] = [str(f) for f in hash_files]
    
        return duplicates
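  • The handler's two-pass strategy (group candidate files by size, then hash only files that share a size) can be sketched in isolation. The snippet below is a simplified, synchronous illustration, not the server's code: it omits path validation, exclude patterns, the max_files cap, and error handling.
    import hashlib
    from collections import defaultdict
    from pathlib import Path

    def naive_duplicates(root: Path, min_size: int = 1) -> dict[str, list[str]]:
        # Pass 1: group files by size; a file with a unique size cannot have a duplicate.
        by_size: dict[int, list[Path]] = defaultdict(list)
        for p in root.rglob("*"):
            if p.is_file() and p.stat().st_size >= min_size:
                by_size[p.stat().st_size].append(p)

        # Pass 2: hash only files that share their size with at least one other file.
        duplicates: dict[str, list[str]] = {}
        for candidates in by_size.values():
            if len(candidates) < 2:
                continue
            by_hash: dict[str, list[str]] = defaultdict(list)
            for p in candidates:
                by_hash[hashlib.md5(p.read_bytes()).hexdigest()].append(str(p))
            duplicates.update({h: g for h, g in by_hash.items() if len(g) >= 2})
        return duplicates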
  • MCP tool registration via the @mcp.tool() decorator. The wrapper delegates to the Advanced component's handler and formats the result for the MCP response (text or JSON); the tool schema is derived from the parameters and docstring. An example of consuming the JSON output follows the listing.
    @mcp.tool()
    async def find_duplicate_files(
        path: str,
        ctx: Context,
        recursive: bool = True,
        min_size: int = 1,
        exclude_patterns: Optional[List[str]] = None,
        max_files: int = 1000,
        format: str = "text",
    ) -> str:
        """Find duplicate files by comparing file sizes and contents.
    
        Args:
            path: Starting directory
            recursive: Whether to search subdirectories
            min_size: Minimum file size to consider (bytes)
            exclude_patterns: Optional patterns to exclude
            max_files: Maximum number of files to scan
            format: Output format ('text' or 'json')
            ctx: MCP context
    
        Returns:
            Duplicate file information
        """
        try:
            components = get_components()
            duplicates = await components["advanced"].find_duplicate_files(
                path, recursive, min_size, exclude_patterns, max_files
            )
    
            if format.lower() == "json":
                return json.dumps(duplicates, indent=2)
    
            # Format as text
            if not duplicates:
                return "No duplicate files found"
    
            lines = []
            for file_hash, files in duplicates.items():
                lines.append(f"Hash: {file_hash}")
                for file_path in files:
                    lines.append(f"  {file_path}")
                lines.append("")
    
            return f"Found {len(duplicates)} sets of duplicate files:\n\n" + "\n".join(
                lines
            )
    
        except Exception as e:
            return f"Error finding duplicate files: {str(e)}"

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. While it mentions the comparison method (file sizes and contents) and output format options, it doesn't disclose important behavioral traits like performance characteristics (scanning could be slow), memory usage, whether it follows symlinks, error handling, or what happens when max_files is reached. The description provides basic operational context but lacks comprehensive behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well structured with clear sections (purpose, Args, Returns) and front-loads the core purpose. The Args section is comprehensive, though it could be tighter; the individual parameter explanations are brief but effective. Overall the description is efficient with minimal wasted space, though the 'ctx: MCP context' entry adds little value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 6 parameters, no annotations, and no output schema, the description provides adequate basic information but has gaps. It explains parameters well and mentions output format options, but doesn't describe the structure of returned 'Duplicate file information' or important behavioral considerations. The description is complete enough for basic usage but lacks depth for optimal agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates well by explaining all 6 parameters in the Args section, providing meaningful context beyond parameter names alone. Each parameter gets a brief semantic explanation (e.g., 'Minimum file size to consider (bytes)', 'Optional patterns to exclude'), though some explanations could be more detailed, such as what kind of patterns exclude_patterns accepts (see the regex sketch below).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
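
For instance, the reference implementation compiles each exclude pattern with re.compile and applies re.search to the full path string, so patterns behave as Python regular expressions rather than shell globs. A brief sketch with a made-up path:

    import re

    # Hypothetical path used only for illustration.
    path = "/data/project/node_modules/pkg/index.js"

    # Patterns are regexes matched anywhere in the full path string.
    assert re.search(r"node_modules", path) is not None

    # They are not shell globs: escape the dot to match a literal ".js" suffix.
    assert re.search(r"\.js$", path) is not None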

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Find duplicate files') and method ('by comparing file sizes and contents'), distinguishing it from sibling tools like compare_files (which compares specific files) or find_large_files (which finds large files). It provides a complete purpose statement with both what it does and how it works.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through the parameter explanations (e.g., 'Starting directory', 'Whether to search subdirectories'), but doesn't explicitly state when to use this tool versus alternatives like compare_files or search_files. No explicit when-not-to-use guidance or sibling tool comparisons are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/safurrier/mcp-filesystem'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.