
analyze_codebase

Scan codebases to generate architecture diagrams by detecting services, dependencies, and connections from configuration files and code analysis.

Instructions

Scan a local codebase directory and automatically generate an architecture diagram. Detects services from package.json, docker-compose, pom.xml, go.mod, and Python configs. Infers connections from dependencies, imports, env vars, and docker-compose depends_on. Detects databases, queues, external services, and API gateways automatically. After analysis, creates the architecture and returns the URL. Also runs lint rules and reports quality issues.

Input Schema

path (required): Absolute path to the codebase root directory to scan
name (optional): Architecture name. If omitted, inferred from directory name.
description (optional): Brief description of the system
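
For concreteness, here is a minimal sketch of invoking this tool from an agent-side MCP client, assuming the @modelcontextprotocol/sdk TypeScript client over a stdio transport; the launch command and every argument value below are hypothetical, not taken from this server's documentation:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the MCP server over stdio (command and args are placeholders,
// not the actual launch command for this server).
const transport = new StdioClientTransport({
  command: "npx",
  args: ["tentra-mcp"],
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Only "path" is required; "name" and "description" are optional.
const result = await client.callTool({
  name: "analyze_codebase",
  arguments: {
    path: "/home/me/projects/shop-backend", // hypothetical absolute path
    name: "shop-backend",
    description: "E-commerce backend services",
  },
});

console.log(result.content); // expected to include the architecture URL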
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the key behaviors: scanning directories, detecting services from specific config files, inferring connections, detecting infrastructure components, creating architectures, and running lint rules. However, it says nothing about the permissions required, whether the tool modifies the codebase ('creates the architecture' could mean writing files to disk or merely generating a hosted diagram), error handling, or performance characteristics such as timeouts. The description adds value but leaves gaps for a complex analysis tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
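
One lightweight way to close this gap is the MCP spec's structured tool annotations. A sketch of what such annotations might look like for this tool follows; the field names come from the spec, but every value is an assumption about behavior the author has not confirmed:

// Hypothetical annotations for analyze_codebase. Field names are from the
// MCP tool-annotations spec; the values are unverified assumptions.
const annotations = {
  readOnlyHint: false,    // it creates an architecture, so not purely read-only
  destructiveHint: false, // assumed: it does not modify the scanned codebase
  idempotentHint: true,   // assumed: rescanning the same path yields the same result
  openWorldHint: true,    // it returns a URL, implying an external service
};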

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized at six sentences, each adding distinct value: the core action, detection sources, inference methods, component detection, output generation, and the additional linting step. It is front-loaded with the main purpose. Minor trimming is possible (e.g., merging some of the detection details), but overall it avoids redundancy and wastes few words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (automated codebase analysis with multiple detection methods) and the absence of annotations or an output schema, the description provides a good overview but leaves gaps. It covers the analysis process and the output ('returns the URL'), but does not describe the format of the architecture diagram, error conditions, or what 'quality issues' entail. For a tool with this much functionality and no structured output documentation, fuller coverage would help an agent succeed on the first attempt.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
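
To make the point concrete, here is a hypothetical structured result type; the description only promises a URL, so every other field below is an assumption about what the tool might return, not documented behavior:

// Hypothetical output shape; nothing beyond "url" is documented by the tool.
interface AnalyzeCodebaseResult {
  url: string; // the description promises "returns the URL"
  services?: string[]; // assumed: names of detected services
  lintIssues?: Array<{
    rule: string;
    severity: "info" | "warning" | "error";
    message: string;
  }>; // assumed: what "reports quality issues" might contain
}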

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters ('path', 'name', 'description') with clear descriptions. The description doesn't add any parameter-specific semantics beyond what the schema provides (e.g., it doesn't clarify format constraints for 'path' or examples for 'name'). The baseline score of 3 is appropriate since the schema does the heavy lifting, and the description focuses on overall tool behavior rather than parameter details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
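
For reference, the input schema implied by the parameter list above can be written out as follows; this is a reconstruction from that list, not the server's published JSON Schema:

// Input schema reconstructed from the documented parameters.
const inputSchema = {
  type: "object",
  properties: {
    path: {
      type: "string",
      description: "Absolute path to the codebase root directory to scan",
    },
    name: {
      type: "string",
      description:
        "Architecture name. If omitted, inferred from directory name.",
    },
    description: {
      type: "string",
      description: "Brief description of the system",
    },
  },
  required: ["path"], // only "path" is required
} as const;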

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('scan', 'generate', 'detects', 'infers', 'creates', 'runs', 'reports') and resources ('local codebase directory', 'architecture diagram', 'services', 'connections', 'databases', 'queues', 'external services', 'API gateways', 'quality issues'). It distinguishes itself from sibling tools like 'create_architecture' by presenting itself as an automated analyzer that scans existing codebases rather than a manual authoring tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through phrases like 'scan a local codebase directory' and 'automatically generate an architecture diagram', suggesting this tool is for initial analysis of existing codebases. However, it doesn't explicitly state when to use this vs. alternatives like 'create_architecture' (which might be for manual creation) or 'get_architecture' (which might retrieve existing ones). No explicit exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

