
mcp-server-tree-sitter

by wrale

get_dependencies

Analyze a file within a registered project and return its dependencies, using Tree-sitter queries to identify import or include statements and provide better context for code analysis.

Instructions

Find dependencies of a file.

    Args:
        project: Project name
        file_path: Path to the file

    Returns:
        Dictionary of imports/includes
    
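The return value groups imports by category. For a Python source file, a result might look like the following (keys and values are illustrative, based only on the docstring above, not on a real run):

```python
# Hypothetical result for a file containing "import os" and
# "from pathlib import Path" (shape is illustrative).
result = {
    "module": ["os", "pathlib"],
    "from": ["pathlib import Path"],
}

# Every category maps to a list of strings.
assert all(isinstance(v, list) for v in result.values())
```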

Input Schema

    Name        Required    Description         Default
    ----        --------    -----------         -------
    project     Yes         Project name        (none)
    file_path   Yes         Path to the file    (none)
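A call to this tool supplies both required arguments; the values below are illustrative:

```python
# Example arguments payload for the get_dependencies tool
# (project name and path are hypothetical).
arguments = {
    "project": "my-project",    # name the project was registered under
    "file_path": "src/app.py",  # path relative to the project root
}
```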

Implementation Reference

  • Registration of the 'get_dependencies' MCP tool via the @mcp_server.tool() decorator. This entry-point handler receives the tool arguments and delegates to the analysis module.
    def get_dependencies(project: str, file_path: str) -> Dict[str, List[str]]:
        """Find dependencies of a file.
    
        Args:
            project: Project name
            file_path: Path to the file
    
        Returns:
            Dictionary of imports/includes
        """
        from ..tools.analysis import find_dependencies
    
        return find_dependencies(
            project_registry.get_project(project),
            file_path,
            language_registry,
        )
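The @mcp_server.tool() registration pattern can be sketched with a minimal stand-in registry (a sketch only; the real MCP server API differs, and the class and attribute names here are illustrative):

```python
from typing import Any, Callable, Dict


class MiniToolServer:
    """Toy stand-in for an MCP server's tool registry."""

    def __init__(self) -> None:
        self.tools: Dict[str, Callable[..., Any]] = {}

    def tool(self) -> Callable[[Callable[..., Any]], Callable[..., Any]]:
        def decorator(fn: Callable[..., Any]) -> Callable[..., Any]:
            # Register the function under its own name, then return it unchanged.
            self.tools[fn.__name__] = fn
            return fn
        return decorator


mcp_server = MiniToolServer()


@mcp_server.tool()
def get_dependencies(project: str, file_path: str) -> dict:
    # Placeholder body; the real handler delegates to find_dependencies.
    return {"module": []}


assert "get_dependencies" in mcp_server.tools
```

The decorator takes no arguments of its own, so the function's `__name__` and type hints double as the tool's name and input schema.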
  • Core implementation of dependency extraction using Tree-sitter queries to find import/include statements in the specified file, categorizing and deduplicating them across supported languages.
    # Standard-library imports used below (the remaining helpers come from
    # the package's own modules).
    from collections import defaultdict
    from typing import Any, Dict, List, Optional, Set

    def find_dependencies(
        project: Any,
        file_path: str,
        language_registry: Any,
    ) -> Dict[str, List[str]]:
        """
        Find dependencies of a file.
    
        Args:
            project: Project object
            file_path: Path to the file relative to project root
            language_registry: Language registry object
    
        Returns:
            Dictionary of dependencies (imports, includes, etc.)
        """
        abs_path = project.get_file_path(file_path)
    
        try:
            validate_file_access(abs_path, project.root_path)
        except SecurityError as e:
            raise SecurityError(f"Access denied: {e}") from e
    
        language = language_registry.language_for_file(file_path)
        if not language:
            raise ValueError(f"Could not detect language for {file_path}")
    
        # Get the appropriate query for imports
        query_string = get_query_template(language, "imports")
        if not query_string:
            raise ValueError(f"Import query not available for {language}")
    
        # Parse file and extract imports
        try:
            # Get language object
            language_obj = language_registry.get_language(language)
            safe_lang = ensure_language(language_obj)
    
            # Parse with cached tree
            tree, source_bytes = parse_with_cached_tree(abs_path, language, safe_lang)
    
            # Execute query
            query = safe_lang.query(query_string)
            matches = query.captures(tree.root_node)
    
            # Organize imports by type
            imports: Dict[str, List[str]] = defaultdict(list)
            # Track additional import information to handle aliased imports
            module_imports: Set[str] = set()
    
            # Helper function to process an import node
            def process_import_node(node: Any, capture_name: str) -> None:
                try:
                    safe_node = ensure_node(node)
                    text = get_node_text(safe_node, source_bytes)
    
                    # Determine the import category
                    if capture_name.startswith("import."):
                        category = capture_name.split(".", 1)[1]
                    else:
                        category = "import"
    
                    # Ensure we're adding a string to the list
                    text_str = text.decode("utf-8") if isinstance(text, bytes) else text
                    imports[category].append(text_str)
    
                    # Add to module_imports for tracking all imported modules
                    if category == "from":
                        # Handle 'from X import Y' cases
                        parts = text_str.split()
    
                        if parts:
                            module_part = parts[0].strip()
                            module_imports.add(module_part)
                    elif category == "module":
                        # Handle 'import X' cases
                        text_str = text_str.strip()
                        module_imports.add(text_str)
                    elif category == "alias":
                        # Handle explicitly captured aliases from 'from X import Y as Z' cases
                        # The module itself will be captured separately via the 'from' capture
                        pass
                    elif category == "item" and text:
                        # For individual imported items, make sure to add the module name if it exists
                        if hasattr(safe_node, "parent") and safe_node.parent:
                            parent_node = safe_node.parent  # The import_from_statement node
                            # Find the module_name node
                            for child in parent_node.children:
                                if (
                                    hasattr(child, "type")
                                    and child.type == "dotted_name"
                                    and child != safe_node
                                    and hasattr(child, "text")
                                ):
                                    module_name_text = get_node_text(child, source_bytes)
                                    module_name_str = (
                                        module_name_text.decode("utf-8")
                                        if isinstance(module_name_text, bytes)
                                        else module_name_text
                                    )
                                    module_imports.add(module_name_str)
                                    break
                    elif "import" in text_str:
                        # Fallback for raw import statements
                        parts = text_str.split()
                        if len(parts) > 1 and parts[0] == "from":
                            # Handle 'from datetime import datetime as dt' case
                            part = parts[1].strip()
                            module_imports.add(str(part))
                        elif "from" in text_str and "import" in text_str:
                            # Another way to handle 'from X import Y' patterns
                            # text_str is already properly decoded
    
                            from_parts = text_str.split("from", 1)[1].split("import", 1)
                            if len(from_parts) > 0:
                                module_name = from_parts[0].strip()
                                module_imports.add(module_name)
                        elif parts[0] == "import":
                            for module in " ".join(parts[1:]).split(","):
                                module = module.strip().split(" as ")[0].strip()
                                module_imports.add(module)
                except Exception:
                    # Skip problematic nodes
                    pass
    
            # Handle different return formats from query.captures()
            if isinstance(matches, dict):
                # Dictionary format: {capture_name: [node1, node2, ...], ...}
                for capture_name, nodes in matches.items():
                    for node in nodes:
                        process_import_node(node, capture_name)
            else:
                # List format: [(node1, capture_name1), (node2, capture_name2), ...]
                for match in matches:
                    # Handle different return types from query.captures()
                    if isinstance(match, tuple) and len(match) == 2:
                        # Direct tuple unpacking
                        node, capture_name = match
                    elif hasattr(match, "node") and hasattr(match, "capture_name"):
                        # Object with node and capture_name attributes
                        node, capture_name = match.node, match.capture_name
                    elif isinstance(match, dict) and "node" in match and "capture" in match:
                        # Dictionary with node and capture keys
                        node, capture_name = match["node"], match["capture"]
                    else:
                        # Skip if format is unknown
                        continue
    
                    process_import_node(node, capture_name)
    
            # Add all detected modules to the result
            if module_imports:
                # Convert module_imports Set[str] to List[str]
                module_list = list(module_imports)
                imports["module"] = list(set(imports.get("module", []) + module_list))
    
            # For Python, specifically check for aliased imports
            if language == "python":
                # Look for aliased imports directly
                aliased_query_string = "(aliased_import) @alias"
                aliased_query = safe_lang.query(aliased_query_string)
                aliased_matches = aliased_query.captures(tree.root_node)
    
                # Process aliased imports
                for match in aliased_matches:
                    # Initialize variables
                    aliased_node: Optional[Any] = None
                    # We're not using aliased_capture_name but need to unpack it
                    _: str = ""
    
                    # Handle different return types
                    if isinstance(match, tuple) and len(match) == 2:
                        aliased_node, _ = match
                    elif hasattr(match, "node") and hasattr(match, "capture_name"):
                        aliased_node, _ = match.node, match.capture_name
                    elif isinstance(match, dict) and "node" in match and "capture" in match:
                        aliased_node, _ = match["node"], match["capture"]
                    else:
                        continue
    
                    # Extract module name from parent
                    if aliased_node is not None and aliased_node.parent and aliased_node.parent.parent:
                        for child in aliased_node.parent.parent.children:
                            if hasattr(child, "type") and child.type == "dotted_name":
                                module_name_text = get_node_text(child, source_bytes)
                                if module_name_text:
                                    module_name_str = (
                                        module_name_text.decode("utf-8")
                                        if isinstance(module_name_text, bytes)
                                        else module_name_text
                                    )
                                    module_imports.add(module_name_str)
                                break
    
                # Update the module list with any new module imports
                if module_imports:
                    module_list = list(module_imports)
                    imports["module"] = list(set(imports.get("module", []) + module_list))
    
            return dict(imports)
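The format-handling branches above (a dict versus a list of tuples, objects, or dicts coming back from query.captures()) can be distilled into a small normalizer. This is a sketch for clarity, not the project's actual code:

```python
from typing import Any, Iterator, Tuple


def normalize_captures(matches: Any) -> Iterator[Tuple[Any, str]]:
    """Yield (node, capture_name) pairs from either captures() format."""
    if isinstance(matches, dict):
        # Dictionary format: {capture_name: [node, ...]}
        for capture_name, nodes in matches.items():
            for node in nodes:
                yield node, capture_name
    else:
        # List format: tuples, objects, or dicts per match
        for match in matches:
            if isinstance(match, tuple) and len(match) == 2:
                yield match
            elif hasattr(match, "node") and hasattr(match, "capture_name"):
                yield match.node, match.capture_name
            elif isinstance(match, dict) and "node" in match and "capture" in match:
                yield match["node"], match["capture"]
            # Unknown formats are skipped, mirroring the code above.


pairs = list(normalize_captures({"import.module": ["n1", "n2"]}))
assert pairs == [("n1", "import.module"), ("n2", "import.module")]
```

Centralizing the normalization keeps the per-node processing (process_import_node above) independent of which tree-sitter binding version produced the captures.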
