
get_file_summary

Generate a concise summary of code files including line count, function definitions, imports, and complexity metrics to analyze code structure.

Instructions

Get a summary of a specific file, including:
- Line count
- Function/class definitions (for supported languages)
- Import statements
- Basic complexity metrics
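
For orientation, a successful call returns a dictionary with the fields assembled by the SQLite-backed implementation shown under Implementation Reference; all values below (and the exact element types inside the lists) are illustrative only:

```python
# Illustrative shape of a successful get_file_summary result.
# Field names match the implementation's return dict; values are made up.
example_summary = {
    "file_path": "src/services/code_intelligence_service.py",
    "language": "python",
    "line_count": 182,
    "symbol_count": 9,
    "functions": ["analyze_file"],
    "classes": ["CodeIntelligenceService"],
    "methods": ["CodeIntelligenceService.analyze_file"],
    "imports": ["logging", "typing"],
    "exports": [],
    "docstring": "Service layer for code intelligence operations.",
}
```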

Input Schema

| Name | Required | Description | Default |
|-----------|----------|-------------|---------|
| file_path | Yes      |             |         |

Output Schema

| Name   | Required | Description | Default |
|--------|----------|-------------|---------|
| result | Yes      |             |         |

Implementation Reference

  • MCP tool registration, handler, and schema (via signature and docstring) for 'get_file_summary'. Delegates execution to CodeIntelligenceService.analyze_file.
    @mcp.tool()
    @handle_mcp_tool_errors(return_type='dict')
    def get_file_summary(file_path: str, ctx: Context) -> Dict[str, Any]:
        """
        Get a summary of a specific file, including:
        - Line count
        - Function/class definitions (for supported languages)
        - Import statements
        - Basic complexity metrics
        """
        return CodeIntelligenceService(ctx).analyze_file(file_path)
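
The `handle_mcp_tool_errors` decorator is applied above but not shown. A minimal sketch of what such a boundary decorator might look like, assuming it converts exceptions into an error dict when `return_type='dict'` (the wrapper body and error-dict shape here are assumptions, not the project's actual code):

```python
import functools
from typing import Any, Callable

def handle_mcp_tool_errors(return_type: str = "dict") -> Callable:
    """Hypothetical sketch: catch exceptions at the tool boundary and
    return a structured error payload instead of propagating them."""
    def decorator(func: Callable) -> Callable:
        @functools.wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            try:
                return func(*args, **kwargs)
            except Exception as exc:  # boundary handler, intentionally broad
                if return_type == "dict":
                    return {"error": str(exc), "tool": func.__name__}
                raise
        return wrapper
    return decorator
```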
  • Business logic helper: analyze_file method in CodeIntelligenceService, called by handler. Validates input, fetches summary from index_manager.get_file_summary, handles missing index case.
    def analyze_file(self, file_path: str) -> Dict[str, Any]:
        """
        Analyze a file and return comprehensive intelligence.
    
        This is the main business method that orchestrates the file analysis
        workflow, choosing the best analysis strategy and providing rich
        insights about the code.
    
        Args:
            file_path: Path to the file to analyze (relative to project root)
    
        Returns:
            Dictionary with comprehensive file analysis
    
        Raises:
            ValueError: If file path is invalid or analysis fails
        """
        # Business validation
        self._validate_analysis_request(file_path)
    
        # Use the global index manager
        index_manager = get_index_manager()
        
        # Debug logging
        logger.info(f"Getting file summary for: {file_path}")
        logger.info(f"Index manager state - Project path: {index_manager.project_path}")
        logger.info(f"Index manager state - Has builder: {index_manager.index_builder is not None}")
        if index_manager.index_builder:
            logger.info(f"Index manager state - Has index: {index_manager.index_builder.in_memory_index is not None}")
        
        # Get file summary from JSON index
        summary = index_manager.get_file_summary(file_path)
        logger.info(f"Summary result: {summary is not None}")
    
        # If deep index isn't available yet, return a helpful hint instead of error
        if not summary:
            return {
                "status": "needs_deep_index",
                "message": "Deep index not available. Please run build_deep_index before calling get_file_summary.",
                "file_path": file_path
            }
    
        return summary
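
Because the handler returns a `needs_deep_index` sentinel instead of raising, a client consuming the result would branch on `status` before reading summary fields. A hypothetical consumer sketch (function name and output formatting are illustrative, not part of the project):

```python
from typing import Any, Dict

def handle_summary_result(result: Dict[str, Any]) -> str:
    """Illustrative client-side handling of a get_file_summary result."""
    if result.get("status") == "needs_deep_index":
        # Documented recovery path: build the deep index, then retry.
        return f"run build_deep_index, then retry {result['file_path']}"
    return f"{result['file_path']}: {result['line_count']} lines"
```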
  • Core implementation helper: get_file_summary in SQLiteIndexManager. Queries DB for file metadata and symbols, categorizes into functions/classes/methods, constructs full summary dict.
    def get_file_summary(self, file_path: str) -> Optional[Dict[str, Any]]:
        """Return summary information for a file from SQLite storage."""
        with self._lock:
            if not isinstance(file_path, str):
                logger.error("File path must be a string, got %s", type(file_path))
                return None
            if not self.store or not self._is_loaded:
                if not self.load_index():
                    return None
    
            normalized = _normalize_path(file_path)
            with self.store.connect() as conn:
                row = conn.execute(
                    """
                    SELECT id, language, line_count, imports, exports, docstring
                    FROM files WHERE path = ?
                    """,
                    (normalized,),
                ).fetchone()
    
                if not row:
                    logger.warning("File not found in index: %s", normalized)
                    return None
    
                symbol_rows = conn.execute(
                    """
                    SELECT type, line, signature, docstring, called_by, short_name
                    FROM symbols
                    WHERE file_id = ?
                    ORDER BY line ASC
                    """,
                    (row["id"],),
                ).fetchall()
    
            imports = _safe_json_loads(row["imports"])
            exports = _safe_json_loads(row["exports"])
    
            categorized = _categorize_symbols(symbol_rows)
    
            return {
                "file_path": normalized,
                "language": row["language"],
                "line_count": row["line_count"],
                "symbol_count": len(symbol_rows),
                "functions": categorized["functions"],
                "classes": categorized["classes"],
                "methods": categorized["methods"],
                "imports": imports,
                "exports": exports,
                "docstring": row["docstring"],
            }
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes what information is returned but lacks critical details: it doesn't specify error handling (e.g., for non-existent files or unsupported formats), performance characteristics, or whether this is a read-only operation (implied but not stated). The description adds value by listing summary components but misses key behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: it starts with the core purpose, then efficiently lists key summary components in a bulleted format. Every sentence (and bullet point) earns its place by adding specific value without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, analysis-focused), no annotations, but with an output schema present, the description is partially complete. It covers the purpose and output components well, but gaps remain in usage guidelines and behavioral transparency. The output schema likely handles return values, so the description doesn't need to explain those, but it should address more operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage and only one parameter, the description compensates well by implicitly defining the parameter's purpose: 'file_path' is clearly the target for the summary. However, it doesn't add explicit details like format expectations (e.g., absolute vs. relative paths) or constraints, which keeps it from a perfect score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('summary of a specific file'), and lists the specific summary components it provides. It distinguishes itself from siblings like 'search_code_advanced' or 'find_files' by focusing on detailed analysis of a single file rather than searching or indexing operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., file must exist, supported file types), nor does it differentiate from potential overlapping tools like 'build_deep_index' or 'refresh_index' that might also provide file insights. Usage context is implied but not explicitly stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
