Glama

review

Analyze project architecture from multiple expert perspectives to identify actionable improvements and validate technical decisions.

Instructions

Multi-perspective architecture review. Analyzes full project context and produces actionable findings from expert viewpoints. Use focus parameter to zoom in on a specific feature, page, or decision. Use perspective_group to select a predefined group (technical/business/founder) instead of listing individual perspectives.

Input Schema

perspectives — optional. Which perspectives to include. Defaults to all nine. Overrides perspective_group if both are provided.
perspective_group — optional. Select a predefined group: technical (cto+security+devops), business (product+customer+strategy), founder (investor+unicorn_founder+solo_entrepreneur). Ignored if perspectives is specified.
focus — optional. Specific feature, page, module, or decision to focus the review on. E.g. "user login page", "payment integration", "should I use Supabase or Firebase", "what can I delete or simplify". The full project is still scanned for context, but findings focus on this area.
customer_role — optional. Description of target customer for the customer perspective. E.g. "a startup CTO evaluating CI tools"
project_path — optional. Path to the project to review. Defaults to current working directory.
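The precedence rules in the schema (an explicit perspectives list overrides perspective_group; omitting both falls back to all nine) can be sketched as follows. This is an illustrative reconstruction, not the server's actual code — the function name and structure are assumptions; only the group names and their trios come from the schema above.

```python
# Illustrative sketch of the documented parameter interaction.
# Group names and their member trios are taken from the input schema;
# the helper itself is hypothetical.
PERSPECTIVE_GROUPS = {
    "technical": ["cto", "security", "devops"],
    "business": ["product", "customer", "strategy"],
    "founder": ["investor", "unicorn_founder", "solo_entrepreneur"],
}

# "Defaults to all nine" — the three groups cover all perspectives.
ALL_PERSPECTIVES = [p for trio in PERSPECTIVE_GROUPS.values() for p in trio]


def resolve_perspectives(perspectives=None, perspective_group=None):
    """Apply the schema's precedence: explicit list > named group > all nine."""
    if perspectives:                 # overrides perspective_group if both given
        return perspectives
    if perspective_group:            # expand the predefined group
        return PERSPECTIVE_GROUPS[perspective_group]
    return ALL_PERSPECTIVES          # default: all nine perspectives
```

For example, passing perspective_group="technical" yields the cto, security, and devops viewpoints, while supplying both parameters silently ignores the group, as the schema notes.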
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It describes the tool's core behavior (analyzing project context, producing findings from expert viewpoints) and mentions that 'the full project is still scanned for context', which adds useful operational context. However, it doesn't disclose important behavioral traits: whether the analysis is read-only or makes changes, what permissions it needs, how long it takes to run, or what format the output takes.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences that each earn their place: first states the core purpose, second explains the 'focus' parameter, third explains the 'perspective_group' parameter. No wasted words, front-loaded with the main functionality, and appropriately sized for a 5-parameter tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex analysis tool with 5 parameters and no annotations or output schema, the description provides adequate but incomplete context. It explains the tool's purpose and two key parameters well, but doesn't cover what the output looks like (the findings format), whether the analysis is read-only or has side effects, or how project_path affects the scan. Given the complexity, more behavioral context would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description adds meaningful context by explaining the purpose of 'focus' ('to zoom in on a specific feature, page, or decision') and 'perspective_group' ('to select a predefined group instead of listing individual perspectives'), which helps the agent understand when and why to use these parameters beyond their technical definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs a 'multi-perspective architecture review' that 'analyzes full project context and produces actionable findings from expert viewpoints.' This specifies both the verb (review/analyze/produce) and resource (project architecture), and distinguishes it from siblings like 'coach' or 'idea_score' which have different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives clear parameter-level guidance (e.g. 'Use focus parameter to zoom in on a specific feature...', 'Use perspective_group to select a predefined group...'), but never states when to choose this tool over sibling alternatives like 'coach' or 'recommend'. It lacks tool-level comparison.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/fantasieleven-code/callout-dev'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.