mcp-server-tree-sitter

by wrale

analyze_complexity

Measure code complexity for a specific file within a project. Returns detailed metrics, including line counts, comment ratio, function and class counts, and cyclomatic complexity, to support codebase understanding and maintainability.

Instructions

Analyze code complexity.

    Args:
        project: Project name
        file_path: Path to the file

    Returns:
        Complexity metrics
    

Input Schema

Name         Required    Description    Default
file_path    Yes         -              -
project      Yes         -              -
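
For reference, a call whose arguments satisfy this schema might look like the following (the project name and file path are illustrative):

    arguments = {"project": "my-project", "file_path": "src/main.py"}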

Implementation Reference

  • MCP tool handler for 'analyze_complexity', registered via the @mcp_server.tool() decorator in the central register_tools function. When the tool is invoked, this function delegates to the core analysis function.
    @mcp_server.tool()
    def analyze_complexity(project: str, file_path: str) -> Dict[str, Any]:
        """Analyze code complexity.
    
        Args:
            project: Project name
            file_path: Path to the file
    
        Returns:
            Complexity metrics
        """
        from ..tools.analysis import analyze_code_complexity
    
        return analyze_code_complexity(
            project_registry.get_project(project),
            file_path,
            language_registry,
        )
  • Core implementation of code complexity analysis, calculating metrics like line counts, comments, functions, classes, and cyclomatic complexity using Tree-sitter AST.
    def analyze_code_complexity(
        project: Any,
        file_path: str,
        language_registry: Any,
    ) -> Dict[str, Any]:
        """
        Analyze code complexity.
    
        Args:
            project: Project object
            file_path: Path to the file relative to project root
            language_registry: Language registry object
    
        Returns:
            Complexity metrics
        """
        abs_path = project.get_file_path(file_path)
    
        try:
            validate_file_access(abs_path, project.root_path)
        except SecurityError as e:
            raise SecurityError(f"Access denied: {e}") from e
    
        language = language_registry.language_for_file(file_path)
        if not language:
            raise ValueError(f"Could not detect language for {file_path}")
    
        # Parse file
        try:
            # Get language object
            language_obj = language_registry.get_language(language)
            safe_lang = ensure_language(language_obj)
    
            # Parse with cached tree
            tree, source_bytes = parse_with_cached_tree(abs_path, language, safe_lang)
    
            # Calculate basic metrics
            # Read lines from file using utility
            lines = read_text_file(abs_path)
    
            line_count = len(lines)
            empty_lines = sum(1 for line in lines if line.strip() == "")
            comment_lines = 0
    
            # Language-specific comment detection using utility
            comment_prefix = get_comment_prefix(language)
            if comment_prefix:
                # Count comments for text lines
                comment_lines = sum(1 for line in lines if line.strip().startswith(comment_prefix))
    
            # Get function and class definitions, excluding methods from count
            symbols = extract_symbols(
                project,
                file_path,
                language_registry,
                ["functions", "classes"],
                exclude_class_methods=True,
            )
            function_count = len(symbols.get("functions", []))
            class_count = len(symbols.get("classes", []))
    
            # Calculate cyclomatic complexity using AST
            complexity_nodes = {
                "python": [
                    "if_statement",
                    "for_statement",
                    "while_statement",
                    "try_statement",
                ],
                "javascript": [
                    "if_statement",
                    "for_statement",
                    "while_statement",
                    "try_statement",
                ],
                "typescript": [
                    "if_statement",
                    "for_statement",
                    "while_statement",
                    "try_statement",
                ],
                # Add more languages...
            }
    
            cyclomatic_complexity = 1  # Base complexity
    
            if language in complexity_nodes:
                # Count decision points
                decision_types = complexity_nodes[language]
    
                def count_nodes(node: Any, types: List[str]) -> int:
                    safe_node = ensure_node(node)
                    count = 0
                    if safe_node.type in types:
                        count += 1
    
                    for child in safe_node.children:
                        count += count_nodes(child, types)
    
                    return count
    
                cyclomatic_complexity += count_nodes(tree.root_node, decision_types)
    
            # Calculate maintainability metrics
            code_lines = line_count - empty_lines - comment_lines
            comment_ratio = comment_lines / line_count if line_count > 0 else 0
    
            # Estimate average function length
            avg_func_lines = float(code_lines / function_count if function_count > 0 else code_lines)
    
            return {
                "line_count": line_count,
                "code_lines": code_lines,
                "empty_lines": empty_lines,
                "comment_lines": comment_lines,
                "comment_ratio": comment_ratio,
                "function_count": function_count,
                "class_count": class_count,
                "avg_function_lines": round(avg_func_lines, 2),
                "cyclomatic_complexity": cyclomatic_complexity,
                "language": language,
            }
    
        except Exception as e:
            raise ValueError(f"Error analyzing complexity in {file_path}: {e}") from e
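
To exercise the tool end to end, here is a minimal sketch of calling it from an MCP client over stdio. It assumes the official mcp Python SDK; the server launch command and the argument values are illustrative assumptions rather than details taken from this page.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        # Launch the server as a subprocess (the command is an assumption).
        params = StdioServerParameters(command="uvx", args=["mcp-server-tree-sitter"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # The project must already be registered with the server;
                # file_path is relative to the project root.
                result = await session.call_tool(
                    "analyze_complexity",
                    {"project": "my-project", "file_path": "src/main.py"},
                )
                print(result.content)

    asyncio.run(main())
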
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions 'Returns: Complexity metrics' but doesn't disclose behavioral traits such as whether the operation is read-only, its computational cost, any rate limits, or what happens with invalid inputs. The description is minimal and lacks essential operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
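
One way to close this gap is a docstring that states the behavioral traits the implementation above already guarantees: the operation is read-only, and it raises SecurityError or ValueError on bad input. A minimal sketch; the wording is illustrative, not the project's actual text:

    @mcp_server.tool()
    def analyze_complexity(project: str, file_path: str) -> Dict[str, Any]:
        """Analyze code complexity for a single file.

        Read-only: parses the file with tree-sitter and computes metrics
        without modifying the project. Raises SecurityError if file_path
        escapes the project root, and ValueError if the language cannot
        be detected or parsing fails.

        Args:
            project: Name of a previously registered project
            file_path: Path to the file, relative to the project root

        Returns:
            Complexity metrics
        """
        ...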

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the main purpose. The Args/Returns structure is clear, though the content within is sparse. No redundant sentences are present, making it efficient but under-specified.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (code analysis with 2 parameters), lack of annotations, and no output schema, the description is incomplete. It doesn't explain what 'complexity metrics' entail, how they're computed, or provide enough context for reliable agent use, falling short of minimum viability.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
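
In fact, the return shape is fixed by the implementation above, so documenting it would close much of this gap. The keys below come from the code; the values are illustrative:

    {
        "line_count": 120,
        "code_lines": 85,
        "empty_lines": 20,
        "comment_lines": 15,
        "comment_ratio": 0.125,
        "function_count": 6,
        "class_count": 2,
        "avg_function_lines": 14.17,
        "cyclomatic_complexity": 11,
        "language": "python",
    }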

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It lists parameters 'project' and 'file_path' with brief labels but adds minimal meaning beyond the schema's titles. No details on format, constraints, or examples are provided, leaving significant gaps in understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
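
The parameter descriptions could also be pushed into the schema itself. A minimal sketch, assuming a FastMCP-style server that reads Pydantic Field metadata from Annotated type hints (the description strings are illustrative):

    from typing import Annotated, Any, Dict

    from pydantic import Field

    @mcp_server.tool()
    def analyze_complexity(
        project: Annotated[str, Field(description="Name of a previously registered project")],
        file_path: Annotated[str, Field(description="File path relative to the project root")],
    ) -> Dict[str, Any]:
        ...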

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool's purpose as 'Analyze code complexity', which is clear but vague. It specifies the action ('analyze') and resource ('code complexity'), but doesn't distinguish it from potential siblings like 'analyze_project' or provide specific details about what complexity analysis entails.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. With siblings like 'analyze_project' and 'get_ast' that might relate to code analysis, the description lacks any context about appropriate use cases, prerequisites, or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
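
Such guidance could live in the description itself. A hedged sketch of wording that positions the tool against the siblings named above; the contrasts are assumptions about those tools' scope, not quotes from their documentation:

    """Analyze code complexity for a single file.

    Use this tool when per-file metrics are needed. Prefer
    analyze_project for repository-wide structure, and get_ast when
    the raw syntax tree is needed rather than aggregate numbers.
    """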
