generate-user-stories

Creates detailed user stories with acceptance criteria from a product description, aiding in software development planning and requirements specification.

Instructions

Creates detailed user stories with acceptance criteria based on a product description and research.

Input Schema

productDescription (string, required): Description of the product to create user stories for. No default.
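A minimal example of the arguments an agent would pass (the description text here is hypothetical; per the Zod schema in the implementation reference below, productDescription must be at least 10 characters):

```typescript
// Example arguments for a generate-user-stories call.
// The description text is hypothetical; the schema only requires
// a string of at least 10 characters.
const args = {
  productDescription:
    'A mobile app that lets cyclists log rides and share routes',
};
```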

Implementation Reference

  • Main handler/executor function that performs pre-generation research using Perplexity, generates user stories via LLM with a specific system prompt, saves the markdown output to a file, and handles background job management with SSE notifications.
    export const generateUserStories: ToolExecutor = async (
      params: Record<string, unknown>,
      config: OpenRouterConfig,
      context?: ToolExecutionContext
    ): Promise<CallToolResult> => {
      const sessionId = context?.sessionId || 'unknown-session';
      if (sessionId === 'unknown-session') {
          logger.warn({ tool: 'generateUserStories' }, 'Executing tool without a valid sessionId. SSE progress updates will not be sent.');
      }
    
      logger.debug({
        configReceived: true,
        hasLlmMapping: Boolean(config.llm_mapping),
        mappingKeys: config.llm_mapping ? Object.keys(config.llm_mapping) : []
      }, 'generateUserStories executor received config');
    
      const productDescription = params.productDescription as string;
    
      const jobId = jobManager.createJob('generate-user-stories', params);
      logger.info({ jobId, tool: 'generateUserStories', sessionId }, 'Starting background job.');
    
      const initialResponse = formatBackgroundJobInitiationResponse(
        jobId,
        'generate-user-stories',
        'User Stories Generator'
      );
    
      setImmediate(async () => {
        const logs: string[] = [];
        let filePath: string = '';
    
        try {
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Starting user stories generation process...');
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Starting user stories generation process...');
          logs.push(`[${new Date().toISOString()}] Starting user stories generation for: ${productDescription.substring(0, 50)}...`);
    
          await initDirectories(context);
    
          const userStoriesDir = path.join(getBaseOutputDir(context), 'user-stories-generator');
          const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
          const sanitizedName = productDescription.substring(0, 30).toLowerCase().replace(/[^a-z0-9]+/g, '-');
          const filename = `${timestamp}-${sanitizedName}-user-stories.md`;
          filePath = path.join(userStoriesDir, filename);
    
          logger.info({ jobId, inputs: { productDescription: productDescription.substring(0, 50) } }, "User Stories Generator: Starting pre-generation research...");
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Performing pre-generation research...');
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Performing pre-generation research...');
          logs.push(`[${new Date().toISOString()}] Starting pre-generation research.`);
    
          let researchContext = '';
          try {
            const query1 = `User personas and stakeholders for: ${productDescription}`;
            const query2 = `Common user workflows and use cases for: ${productDescription}`;
            const query3 = `User experience expectations and pain points for: ${productDescription}`;

            const researchResults = await Promise.allSettled([
              performResearchQuery(query1, config),
              performResearchQuery(query2, config),
              performResearchQuery(query3, config)
            ]);

            researchContext = "## Pre-Generation Research Context (From Perplexity Sonar Deep Research):\n\n";

            researchResults.forEach((result, index) => {
              const queryLabels = ["User Personas & Stakeholders", "User Workflows & Use Cases", "User Experience Expectations & Pain Points"];
              if (result.status === "fulfilled") {
                researchContext += `### ${queryLabels[index]}:\n${result.value.trim()}\n\n`;
              } else {
                logger.warn({ error: result.reason }, `Research query ${index + 1} failed`);
                researchContext += `### ${queryLabels[index]}:\n*Research on this topic failed.*\n\n`;
              }
            });

            logger.info({ jobId }, "User Stories Generator: Pre-generation research completed.");
            jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Research complete. Starting main user stories generation...');
            sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Research complete. Starting main user stories generation...');
            logs.push(`[${new Date().toISOString()}] Pre-generation research completed.`);

          } catch (researchError) {
            logger.error({ jobId, err: researchError }, "User Stories Generator: Error during research aggregation");
            logs.push(`[${new Date().toISOString()}] Error during research aggregation: ${researchError instanceof Error ? researchError.message : String(researchError)}`);
            researchContext = "## Pre-Generation Research Context:\n*Error occurred during research phase.*\n\n";
            sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Warning: Error during research phase. Continuing generation...');
          }

          const mainGenerationPrompt = `Create comprehensive user stories for the following product:\n\n${productDescription}\n\n${researchContext}`;

          logger.info({ jobId }, "User Stories Generator: Starting main generation using direct LLM call...");
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Generating user stories content via LLM...');
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Generating user stories content via LLM...');
          logs.push(`[${new Date().toISOString()}] Calling LLM for main user stories generation.`);

          const userStoriesMarkdown = await performFormatAwareLlmCallWithCentralizedConfig(
            mainGenerationPrompt,
            USER_STORIES_SYSTEM_PROMPT,
            'user_stories_generation',
            'markdown', // Explicitly specify markdown format
            undefined, // No schema for markdown
            0.3
          );

          logger.info({ jobId }, "User Stories Generator: Main generation completed.");
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Processing LLM response...');
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Processing LLM response...');
          logs.push(`[${new Date().toISOString()}] Received response from LLM.`);

          if (!userStoriesMarkdown || typeof userStoriesMarkdown !== 'string' || !userStoriesMarkdown.trim().startsWith('# User Stories:')) {
            logger.warn({ jobId, markdown: userStoriesMarkdown?.substring(0, 100) }, 'User stories generation returned empty or potentially invalid Markdown format.');
            logs.push(`[${new Date().toISOString()}] Validation Error: LLM output invalid format.`);
            throw new ToolExecutionError('User stories generation returned empty or invalid Markdown content.');
          }

          const formattedResult = `${userStoriesMarkdown}\n\n_Generated: ${new Date().toLocaleString()}_`;

          logger.info({ jobId }, `Saving user stories to ${filePath}...`);
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, `Saving user stories to file...`);
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, `Saving user stories to file...`);
          logs.push(`[${new Date().toISOString()}] Saving user stories to ${filePath}.`);

          await fs.writeFile(filePath, formattedResult, 'utf8');
          logger.info({ jobId }, `User stories generated and saved to ${filePath}`);
          logs.push(`[${new Date().toISOString()}] User stories saved successfully.`);
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, `User stories saved successfully.`);

          const finalResult: CallToolResult = {
            content: [{ type: "text", text: `User stories generated successfully and saved to: ${filePath}\n\n${formattedResult}` }],
            isError: false
          };
          jobManager.setJobResult(jobId, finalResult);
    
        } catch (error) {
          const errorMsg = error instanceof Error ? error.message : String(error);
          logger.error({ err: error, jobId, tool: 'generate-user-stories', params }, `User Stories Generator Error: ${errorMsg}`);
          logs.push(`[${new Date().toISOString()}] Error: ${errorMsg}`);
    
          let appError: AppError;
          const cause = error instanceof Error ? error : undefined;
          if (error instanceof AppError) {
            appError = error;
          } else {
            appError = new ToolExecutionError(`Failed to generate user stories: ${errorMsg}`, { params, filePath }, cause);
          }
    
          const mcpError = new McpError(ErrorCode.InternalError, appError.message, appError.context);
          const errorResult: CallToolResult = {
            content: [{ type: 'text', text: `Error during background job ${jobId}: ${mcpError.message}\n\nLogs:\n${logs.join('\n')}` }],
            isError: true,
            errorDetails: mcpError
          };
    
          jobManager.setJobResult(jobId, errorResult);
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.FAILED, `Job failed: ${mcpError.message}`);
        }
      });
    
      return initialResponse;
    };
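The research phase in the handler fans out three queries with Promise.allSettled so that one failed query degrades to a placeholder instead of failing the whole job. The same aggregation pattern, reduced to a self-contained sketch (the real code calls performResearchQuery(query, config); here the query runner and topic labels are injected so the logic stands alone):

```typescript
// Fan-out/aggregate sketch of the research phase. The run parameter
// stands in for performResearchQuery; labels stand in for the query topics.
async function gatherResearch(
  queries: string[],
  labels: string[],
  run: (q: string) => Promise<string>
): Promise<string> {
  const results = await Promise.allSettled(queries.map(run));
  let context = '## Pre-Generation Research Context:\n\n';
  results.forEach((result, index) => {
    if (result.status === 'fulfilled') {
      context += `### ${labels[index]}:\n${result.value.trim()}\n\n`;
    } else {
      // A rejected query becomes a placeholder; generation continues.
      context += `### ${labels[index]}:\n*Research on this topic failed.*\n\n`;
    }
  });
  return context;
}
```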
  • Zod schema defining the input parameters for the tool (productDescription).
    const userStoriesInputSchemaShape = {
      productDescription: z.string().min(10, { message: "Product description must be at least 10 characters." }).describe("Description of the product to create user stories for")
    };
  • Tool definition object and call to registerTool, which adds it to the dynamic registry used by server.ts.
    const userStoriesToolDefinition: ToolDefinition = {
      name: "user-stories-generator",
      description: "Creates detailed user stories with acceptance criteria based on a product description and research.",
      inputSchema: userStoriesInputSchemaShape,
      executor: generateUserStories
    };
    
    registerTool(userStoriesToolDefinition);
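registerTool adds the definition to the dynamic registry that server.ts consults at request time. A minimal sketch of what such a registry might look like (hypothetical names; the real module presumably also handles schema validation and duplicate registrations):

```typescript
// Hypothetical Map-backed tool registry, sketching the register/lookup
// contract the tool definition above relies on.
type SketchExecutor = (params: Record<string, unknown>) => Promise<unknown>;

interface SketchToolDefinition {
  name: string;
  description: string;
  executor: SketchExecutor;
}

const toolRegistry = new Map<string, SketchToolDefinition>();

function registerSketchTool(def: SketchToolDefinition): void {
  toolRegistry.set(def.name, def);
}

function getSketchTool(name: string): SketchToolDefinition | undefined {
  return toolRegistry.get(name);
}
```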
  • Detailed system prompt for the LLM to generate well-structured user stories in Markdown format, incorporating research context.
    const USER_STORIES_SYSTEM_PROMPT = `
    # User Stories Generator - Using Research Context
    
    # ROLE & GOAL
    You are an expert Agile Business Analyst and Product Owner AI assistant. Your goal is to generate a comprehensive and well-structured set of User Stories, including Epics and Acceptance Criteria, in Markdown format.
    
    # CORE TASK
    Generate detailed user stories based on the user's product description and the provided research context.
    
    # INPUT HANDLING
    - Analyze the 'productDescription' to understand the product's purpose, core features, and intended value.
    - You will also receive 'Pre-Generation Research Context'.
    
    # RESEARCH CONTEXT INTEGRATION
    - **CRITICAL:** Carefully review the '## Pre-Generation Research Context (From Perplexity Sonar Deep Research)' section provided in the user prompt.
    - This section contains insights on: User Personas & Stakeholders, User Workflows & Use Cases, and User Experience Expectations & Pain Points.
    - **Use these insights** heavily to:
        - Define realistic 'As a [user type/persona]' roles based on the research.
        - Create stories that address identified 'User Workflows & Use Cases'.
        - Ensure stories align with 'User Experience Expectations' and address 'Pain Points'.
        - Inform the 'Priority' and 'Value/Benefit' parts of the stories.
    - **Synthesize**, don't just list research findings. Create user stories that *embody* the research.
    
    # OUTPUT FORMAT & STRUCTURE (Strict Markdown)
    - Your entire response **MUST** be valid Markdown.
    - Start **directly** with the main title: '# User Stories: [Inferred Product Name]'
    - Organize stories hierarchically using Markdown headings:
        - \`## Epic: [Epic Title]\` (e.g., \`## Epic: User Authentication\`)
        - \`### User Story: [Story Title]\` (e.g., \`### User Story: User Registration\`)
    - For **each User Story**, use the following precise template within its \`###\` section:
    
      **ID:** US-[auto-incrementing number, e.g., US-101]
      **Title:** [Concise Story Title]
    
      **Story:**
      > As a **[User Role/Persona - informed by research]**,
      > I want to **[perform an action or achieve a goal]**
      > So that **[I gain a specific benefit - linked to user needs/pain points from research]**.
    
      **Acceptance Criteria:**
      *   GIVEN [precondition/context] WHEN [action is performed] THEN [expected, testable outcome].
      *   GIVEN [another context] WHEN [different action] THEN [another outcome].
      *   *(Provide multiple, specific, measurable criteria)*
    
      **Priority:** [High | Medium | Low - informed by perceived value/dependencies/research]
      **Dependencies:** [List of User Story IDs this depends on, e.g., US-100 | None]
      **(Optional) Notes:** [Any clarifying details or technical considerations.]
    
    # QUALITY ATTRIBUTES
    - **INVEST Principles:** Ensure stories are Independent, Negotiable, Valuable, Estimable, Small (appropriately sized), and Testable (via Acceptance Criteria).
    - **User-Centric:** Focus on user roles, actions, and benefits, informed by research personas and needs.
    - **Clear Acceptance Criteria:** Criteria must be specific, unambiguous, and testable.
    - **Comprehensive:** Cover the core functionality implied by the description and research workflows.
    - **Well-Structured:** Adhere strictly to the Epic/Story hierarchy and template format.
    - **Consistent:** Use consistent terminology and formatting.
    
    # CONSTRAINTS (Do NOT Do the Following)
    - **NO Conversational Filler:** Start directly with the '# User Stories: ...' title. No intros, summaries, or closings.
    - **NO Markdown Violations:** Strictly adhere to the specified Markdown format (headings, blockquotes for the story, lists for AC).
    - **NO Implementation Details:** Focus on *what* the user needs, not *how* it will be built (unless specified in 'Notes').
    - **NO External Knowledge:** Base stories *only* on the provided inputs and research context.
    - **NO Process Commentary:** Do not mention the research process in the output.
    - **Strict Formatting:** Use \`##\` for Epics, \`###\` for Stories. Use the exact field names (ID, Title, Story, Acceptance Criteria, etc.) in bold. Use Markdown blockquotes for the As a/I want/So that structure.
    `;
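The handler enforces the prompt's "start directly with the title" constraint with a simple prefix check before saving. Isolated as a sketch (the function name is hypothetical; the real code inlines this condition):

```typescript
// Mirrors the validation in generateUserStories: output must be a
// non-empty string that starts with the mandated Markdown title.
function looksLikeUserStories(markdown: unknown): boolean {
  return (
    typeof markdown === 'string' &&
    markdown.trim().startsWith('# User Stories:')
  );
}
```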
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool 'creates' user stories, implying a generative/mutation operation, but doesn't address permissions, side effects, rate limits, or output format. This leaves significant gaps for a tool that likely produces structured content.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. It directly communicates the tool's function and scope, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what the generated user stories look like, how many are produced, or any behavioral traits like error handling. For a generative tool with no structured output documentation, this leaves the agent with insufficient information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds minimal value beyond the input schema, which has 100% coverage. It mentions 'product description and research' as inputs, but the schema only includes 'productDescription' (with a description field). The description doesn't clarify the 'research' aspect or provide additional context for the parameter's use.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Creates detailed user stories with acceptance criteria based on a product description and research.' It specifies the verb ('creates'), resource ('user stories'), and scope ('detailed' with 'acceptance criteria'), though it doesn't explicitly differentiate from sibling tools like 'generate-prd' or 'generate-task-list'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose this over sibling tools such as 'generate-prd' (for product requirements) or 'generate-task-list' (for tasks), nor does it specify prerequisites or exclusions for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
