
troubleshoot_guided_workflow

Analyze software failures, generate test plans, and track troubleshooting sessions with memory integration and ADR suggestions.

Instructions

Structured failure analysis and test plan generation with memory integration for troubleshooting session tracking and intelligent ADR/research suggestion capabilities - provide JSON failure info to get specific test commands

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| operation | Yes | Type of troubleshooting operation | |
| failure | No | Structured failure information (required for analyze_failure and generate_test_plan) | |
| projectPath | No | Path to project directory (optional) | |
| adrDirectory | No | ADR directory path | docs/adrs |
| todoPath | No | Path to TODO.md file | TODO.md |
| enableMemoryIntegration | No | Enable memory entity storage for troubleshooting session tracking and pattern recognition | |
| enablePatternRecognition | No | Enable automatic pattern recognition and failure classification | |
| enableAdrSuggestion | No | Enable automatic ADR suggestion based on recurring failures | |
| enableResearchGeneration | No | Enable automatic research question generation for persistent problems | |
| conversationContext | No | Rich context from the calling LLM about user goals and discussion history | |
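As a rough sketch of how these parameters combine, a call to this tool might be shaped like the following. The top-level names come from the schema above, but the fields inside `failure` are illustrative assumptions, since this page does not document the nested failure structure:

```typescript
// Hypothetical troubleshoot_guided_workflow request.
// Top-level parameters match the input schema above; the nested
// `failure` fields are assumed for illustration only.
const request = {
  operation: "analyze_failure",      // required; "generate_test_plan" also expects a failure
  failure: {
    failureType: "test_failure",    // assumed field name
    errorMessage: "AssertionError: expected 200, got 500", // assumed field name
    command: "npm test",            // assumed field name
  },
  projectPath: "/path/to/project",  // optional
  adrDirectory: "docs/adrs",        // schema default
  todoPath: "TODO.md",              // schema default
  enableMemoryIntegration: true,    // opts into session tracking, a persistence side effect
};
```

Because the schema does not state whether memory integration is on by default, callers who want to avoid persistence side effects may prefer to set the enable* flags explicitly.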
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'memory integration for troubleshooting session tracking' and 'intelligent ADR/research suggestion capabilities,' hinting at persistence and automation features, but it fails to detail critical behaviors: whether the tool modifies data (e.g., stores failures in memory), requires specific permissions, or has rate limits, and what the output format entails. For a complex tool with 10 parameters, this leaves significant gaps in understanding its operational impact.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single run-on sentence that packs multiple concepts (failure analysis, test generation, memory integration, tracking, ADR/research suggestions) without clear structure. While it wastes no words, it is not front-loaded effectively: the key actions are buried in dense phrasing. A more organized breakdown would improve clarity without sacrificing brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's high complexity (10 parameters, nested objects, no output schema) and lack of annotations, the description is inadequate. It omits essential context such as output format, error handling, side effects (e.g., memory storage implications), and how the 'operation' parameter influences behavior. For a multifaceted troubleshooting tool, this leaves the agent under-informed about what to expect upon invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema, only implying that 'JSON failure info' is needed without elaborating on parameter interactions or usage nuances. A baseline score of 3 is appropriate because the schema does the heavy lifting, but the description doesn't compensate with additional semantic insight.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'structured failure analysis and test plan generation' with 'memory integration for troubleshooting session tracking and intelligent ADR/research suggestion capabilities.' It specifies the verb (analyze/generate) and resource (failure info), though it doesn't explicitly differentiate from sibling tools like 'analyze_deployment_progress' or 'generate_research_questions' that might overlap in troubleshooting contexts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance: it tells the caller to 'provide JSON failure info to get specific test commands,' but offers no explicit when-to-use criteria, no exclusions, and no alternatives among the many sibling tools. Usage is implied rather than clearly defined, leaving the agent to infer context from the tool name and parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/tosin2013/mcp-adr-analysis-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.