
generate-rules

Automates the creation of project-specific development rules tailored to a product description, optional user stories, and optional rule categories, using AI-assisted research and generation to streamline software development workflows.

Instructions

Creates project-specific development rules based on product description, user stories, and research.

Input Schema

Name                 Required  Description                                                                 Default
productDescription   Yes       Description of the product being developed                                 —
ruleCategories       No        Optional categories of rules to generate (e.g., 'Code Style', 'Security')  —
userStories          No        Optional user stories to inform the rules                                  —
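
For orientation, a hypothetical set of arguments matching this schema might look like the sketch below; the argument values are purely illustrative.

    // Illustrative arguments for a 'generate-rules' call; values are made up.
    const exampleArgs = {
      productDescription: "A mobile app that helps remote teams plan and track weekly sprints",
      userStories: "As a team lead, I want to assign tasks so that work is distributed evenly.",
      ruleCategories: ["Code Style", "Security", "Testing"]
    };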

Implementation Reference

  • The generateRules function is the core ToolExecutor (handler) for the 'generate-rules' tool. It performs pre-generation research using Perplexity, generates Markdown-formatted development rules via an LLM call, saves them to a file, manages background job status, and sends SSE progress notifications.
    export const generateRules: ToolExecutor = async (
      params: Record<string, unknown>, // More type-safe than 'any'
      config: OpenRouterConfig,
      context?: ToolExecutionContext // Add context parameter
    ): Promise<CallToolResult> => {
      // ---> Step 2.5(Rules).2: Inject Dependencies & Get Session ID <---
      const sessionId = context?.sessionId || 'unknown-session';
      if (sessionId === 'unknown-session') {
        logger.warn({ tool: 'generateRules' }, 'Executing tool without a valid sessionId. SSE progress updates will not be sent.');
      }

      // Log the config received by the executor
      logger.debug({
        configReceived: true,
        hasLlmMapping: Boolean(config.llm_mapping),
        mappingKeys: config.llm_mapping ? Object.keys(config.llm_mapping) : []
      }, 'generateRules executor received config');

      // Access properties via params, asserting types as they've been validated by executeTool
      const productDescription = params.productDescription as string;
      const userStories = params.userStories as string | undefined;
      const ruleCategories = params.ruleCategories as string[] | undefined;

      // ---> Step 2.5(Rules).3: Create Job & Return Job ID <---
      const jobId = jobManager.createJob('generate-rules', params);
      logger.info({ jobId, tool: 'generateRules', sessionId }, 'Starting background job.');

      // Use the shared service to format the initial response
      const initialResponse = formatBackgroundJobInitiationResponse(
        jobId,
        'generate-rules', // Internal tool name
        'Rules Generator' // User-friendly display name
      );

      // ---> Step 2.5(Rules).4: Wrap Logic in Async Block <---
      setImmediate(async () => {
        const logs: string[] = []; // Keep logs specific to this job execution
        let filePath: string = ''; // Define filePath in outer scope for catch block

        // ---> Step 2.5(Rules).7: Update Final Result/Error Handling (Try Block Start) <---
        try {
          // ---> Step 2.5(Rules).6: Add Progress Updates (Initial) <---
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Starting rules generation process...');
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Starting rules generation process...');
          logs.push(`[${new Date().toISOString()}] Starting rules generation for: ${productDescription.substring(0, 50)}...`);

          // Ensure directories are initialized before writing
          await initDirectories(context);

          // Generate a filename for storing the rules
          const rulesDir = path.join(getBaseOutputDir(context), 'rules-generator');
          const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
          const sanitizedName = productDescription.substring(0, 30).toLowerCase().replace(/[^a-z0-9]+/g, '-');
          const filename = `${timestamp}-${sanitizedName}-rules.md`;
          filePath = path.join(rulesDir, filename); // Assign to outer scope variable

          // ---> Step 2.5(Rules).6: Add Progress Updates (Research Start) <---
          logger.info({
            jobId,
            inputs: {
              productDescription: productDescription.substring(0, 50),
              userStories: userStories?.substring(0, 50),
              ruleCategories
            }
          }, "Rules Generator: Starting pre-generation research...");
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Performing pre-generation research...');
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Performing pre-generation research...');
          logs.push(`[${new Date().toISOString()}] Starting pre-generation research.`);

          let researchContext = '';
          try {
            // Define relevant research queries
            const query1 = `Best development practices and coding standards for building: ${productDescription}`;
            const query2 = ruleCategories && ruleCategories.length > 0
              ? `Specific rules and guidelines for these categories in software development: ${ruleCategories.join(', ')}`
              : `Common software development rule categories for: ${productDescription}`;

            // Extract product type for the third query
            const productTypeLowercase = productDescription.toLowerCase();
            let productType = "software application";
            if (productTypeLowercase.includes("web") || productTypeLowercase.includes("website")) {
              productType = "web application";
            } else if (productTypeLowercase.includes("mobile") || productTypeLowercase.includes("app")) {
              productType = "mobile application";
            } else if (productTypeLowercase.includes("api")) {
              productType = "API service";
            } else if (productTypeLowercase.includes("game")) {
              productType = "game";
            }
            const query3 = `Modern architecture patterns and file organization for ${productType} development`;

            // Execute research queries in parallel using Perplexity
            const researchResults = await Promise.allSettled([
              performResearchQuery(query1, config), // Uses config.perplexityModel (perplexity/sonar-deep-research)
              performResearchQuery(query2, config),
              performResearchQuery(query3, config)
            ]);

            // Process research results
            researchContext = "## Pre-Generation Research Context (From Perplexity Sonar Deep Research):\n\n";

            // Add results that were fulfilled
            researchResults.forEach((result, index) => {
              const queryLabels = ["Best Practices", "Rule Categories", "Architecture Patterns"];
              if (result.status === "fulfilled") {
                researchContext += `### ${queryLabels[index]}:\n${result.value.trim()}\n\n`;
              } else {
                logger.warn({ error: result.reason }, `Research query ${index + 1} failed`);
                researchContext += `### ${queryLabels[index]}:\n*Research on this topic failed.*\n\n`;
              }
            });

            // ---> Step 2.5(Rules).6: Add Progress Updates (Research End) <---
            logger.info({ jobId }, "Rules Generator: Pre-generation research completed.");
            jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Research complete. Starting main rules generation...');
            sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Research complete. Starting main rules generation...');
            logs.push(`[${new Date().toISOString()}] Pre-generation research completed.`);
          } catch (researchError) {
            logger.error({ jobId, err: researchError }, "Rules Generator: Error during research aggregation");
            logs.push(`[${new Date().toISOString()}] Error during research aggregation: ${researchError instanceof Error ? researchError.message : String(researchError)}`);
            // Include error in context but continue
            researchContext = "## Pre-Generation Research Context:\n*Error occurred during research phase.*\n\n";
            sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Warning: Error during research phase. Continuing generation...');
          }

          // Create the main generation prompt with combined research and inputs
          let mainGenerationPrompt = `Create a comprehensive set of development rules for the following product:\n\n${productDescription}`;
          if (userStories) {
            mainGenerationPrompt += `\n\nBased on these user stories:\n\n${userStories}`;
          }
          if (ruleCategories && ruleCategories.length > 0) {
            // Add explicit type 'string' for c
            mainGenerationPrompt += `\n\nFocus on these rule categories:\n${ruleCategories.map((c: string) => `- ${c}`).join('\n')}`;
          }
          // Add research context to the prompt
          mainGenerationPrompt += `\n\n${researchContext}`;

          // ---> Step 2.5(Rules).6: Add Progress Updates (LLM Call Start) <---
          logger.info({ jobId }, "Rules Generator: Starting main generation using direct LLM call...");
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Generating rules content via LLM...');
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Generating rules content via LLM...');
          logs.push(`[${new Date().toISOString()}] Calling LLM for main rules generation.`);

          const rulesMarkdown = await performFormatAwareLlmCallWithCentralizedConfig(
            mainGenerationPrompt,
            RULES_SYSTEM_PROMPT, // Pass the system prompt
            'rules_generation',  // Logical task name
            'markdown',          // Explicitly specify markdown format
            undefined,           // No schema for markdown
            0.2                  // Low temperature for structured rules
          );

          // ---> Step 2.5(Rules).6: Add Progress Updates (LLM Call End) <---
          logger.info({ jobId }, "Rules Generator: Main generation completed.");
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Processing LLM response...');
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Processing LLM response...');
          logs.push(`[${new Date().toISOString()}] Received response from LLM.`);

          // Basic validation: Check if the output looks like Markdown and contains expected elements
          if (!rulesMarkdown || typeof rulesMarkdown !== 'string' || !rulesMarkdown.trim().startsWith('# Development Rules:')) {
            logger.warn({ jobId, markdown: rulesMarkdown?.substring(0, 100) }, 'Rules generation returned empty or potentially invalid Markdown format.');
            logs.push(`[${new Date().toISOString()}] Validation Error: LLM output invalid format.`);
            throw new ToolExecutionError('Rules generation returned empty or invalid Markdown content.');
          }

          // Format the rules (already should be formatted by LLM, just add timestamp)
          const formattedResult = `${rulesMarkdown}\n\n_Generated: ${new Date().toLocaleString()}_`;

          // ---> Step 2.5(Rules).6: Add Progress Updates (Saving File) <---
          logger.info({ jobId }, `Saving rules to ${filePath}...`);
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, `Saving rules to file...`);
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, `Saving rules to file...`);
          logs.push(`[${new Date().toISOString()}] Saving rules to ${filePath}.`);

          // Save the result
          await fs.writeFile(filePath, formattedResult, 'utf8');
          logger.info({ jobId }, `Rules generated and saved to ${filePath}`);
          logs.push(`[${new Date().toISOString()}] Rules saved successfully.`);
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, `Rules saved successfully.`);

          // ---> Step 2.5(Rules).7: Update Final Result/Error Handling (Set Success Result) <---
          const finalResult: CallToolResult = {
            // Include file path in success message
            content: [{ type: "text", text: `Development rules generated successfully and saved to: ${filePath}\n\n${formattedResult}` }],
            isError: false
          };
          jobManager.setJobResult(jobId, finalResult);
          // Optional explicit SSE:
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.COMPLETED, 'Rules generation completed successfully.');

        // ---> Step 2.5(Rules).7: Update Final Result/Error Handling (Catch Block) <---
        } catch (error) {
          const errorMsg = error instanceof Error ? error.message : String(error);
          logger.error({ err: error, jobId, tool: 'generate-rules', params }, `Rules Generator Error: ${errorMsg}`);
          logs.push(`[${new Date().toISOString()}] Error: ${errorMsg}`);

          // Handle specific errors from direct call or research
          let appError: AppError;
          const cause = error instanceof Error ? error : undefined;
          if (error instanceof AppError) {
            appError = error;
          } else {
            appError = new ToolExecutionError(`Failed to generate development rules: ${errorMsg}`, { params, filePath }, cause);
          }

          const mcpError = new McpError(ErrorCode.InternalError, appError.message, appError.context);
          const errorResult: CallToolResult = {
            content: [{ type: 'text', text: `Error during background job ${jobId}: ${mcpError.message}\n\nLogs:\n${logs.join('\n')}` }],
            isError: true,
            errorDetails: mcpError
          };

          // Store error result in Job Manager
          jobManager.setJobResult(jobId, errorResult);
          // Send final failed status via SSE (optional if jobManager handles it)
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.FAILED, `Job failed: ${mcpError.message}`);
        }
      }); // ---> END OF setImmediate WRAPPER <---

      return initialResponse; // Return the initial response with Job ID
    };
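
    The output path built inside the handler follows a timestamp-plus-slug naming scheme. The standalone sketch below reproduces that logic from the listing above; the example description and base directory are illustrative, and the real handler derives the base directory from getBaseOutputDir(context).

    import path from 'path';

    // Reproduces the filename scheme used above: ISO timestamp with ':' and '.' replaced,
    // plus a slug built from the first 30 characters of the product description.
    const productDescription = "A web dashboard for tracking solar panel output"; // illustrative
    const baseOutputDir = '/tmp/vibe-coder-output'; // illustrative; the handler uses getBaseOutputDir(context)

    const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
    const sanitizedName = productDescription.substring(0, 30).toLowerCase().replace(/[^a-z0-9]+/g, '-');
    const filePath = path.join(baseOutputDir, 'rules-generator', `${timestamp}-${sanitizedName}-rules.md`);
    // e.g. /tmp/vibe-coder-output/rules-generator/2024-05-01T12-00-00-000Z-a-web-dashboard-for-tracking-s-rules.md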
  • Zod schema definition for the input parameters of the generate-rules tool.
    const rulesInputSchemaShape = {
      productDescription: z.string().min(10, { message: "Product description must be at least 10 characters." }).describe("Description of the product being developed"),
      userStories: z.string().optional().describe("Optional user stories to inform the rules"),
      ruleCategories: z.array(z.string()).optional().describe("Optional categories of rules to generate (e.g., 'Code Style', 'Security')")
    };
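
    Because the registry stores the raw shape rather than a z.object, arguments can be validated standalone (for example in a test) by wrapping the shape yourself. A minimal sketch, assuming rulesInputSchemaShape above is importable; the example input values are illustrative.

    import { z } from 'zod';

    // Wrap the raw shape so it can be used for standalone validation.
    const rulesInputSchema = z.object(rulesInputSchemaShape);

    const parsed = rulesInputSchema.safeParse({
      productDescription: "An API service for scheduling appointments", // satisfies min(10)
      ruleCategories: ["Security"]
    });

    if (!parsed.success) {
      console.error(parsed.error.issues); // e.g. the min(10) message for too-short descriptions
    }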
  • Registration of the rules-generator tool (internally referred to as 'generate-rules') with the central tool registry, from which it is later dynamically registered with the MCP server.
    const rulesToolDefinition: ToolDefinition = {
      name: "rules-generator",
      description: "Creates project-specific development rules based on product description, user stories, and research.",
      inputSchema: rulesInputSchemaShape, // Use the raw shape
      executor: generateRules // Reference the adapted function
    };

    // Register the tool with the central registry
    registerTool(rulesToolDefinition);
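
    registerTool above is the project's real function; the Map-backed store and the getAllToolDefinitions helper below are assumptions, sketched only to illustrate how a registered definition likely reaches the server's dynamic registration loop shown in the next item.

    // Hypothetical registry sketch; only registerTool appears in the listings on this page.
    const toolRegistry = new Map<string, ToolDefinition>();

    function registerTool(definition: ToolDefinition): void {
      toolRegistry.set(definition.name, definition);
    }

    function getAllToolDefinitions(): ToolDefinition[] {
      return Array.from(toolRegistry.values());
    }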
  • src/server.ts:293-396 (registration)
    Dynamic registration loop in the MCP server that registers all tools from the registry, including rules-generator (generate-rules).
    for (const definition of allToolDefinitions) {
      logger.debug(`Registering tool "${definition.name}" with MCP server.`);
      server.tool(
        definition.name,
        definition.description,
        // Pass the raw shape directly, as expected by server.tool
        definition.inputSchema,
        // The handler now integrates state management
        async (params: Record<string, unknown>, extra?: unknown): Promise<CallToolResult> => {
          // Log the config object available within this closure
          logger.debug({ configInHandler: loadedConfigParam }, 'Tool handler closure using config object.'); // Use loadedConfigParam

          // --- Context Creation START ---
          // Extract session ID from extra or generate a unique one
          let sessionId = 'placeholder-session-id';
          let transportType = 'unknown';

          // Check if extra contains transport information
          if (extra && typeof extra === 'object') {
            // Try to get session ID from extra
            if ('sessionId' in extra && typeof extra.sessionId === 'string') {
              sessionId = extra.sessionId;
            } else if ('req' in extra && extra.req && typeof extra.req === 'object') {
              // Try to get session ID from request
              const req = extra.req as { query?: { sessionId?: string }, body?: { session_id?: string }, headers?: { 'x-session-id'?: string } };
              if (req.query && req.query.sessionId) {
                sessionId = req.query.sessionId as string;
              } else if (req.body && req.body.session_id) {
                sessionId = req.body.session_id as string;
              } else if (req.headers && req.headers['x-session-id']) {
                sessionId = req.headers['x-session-id'] as string;
              }
            }
            // Try to get transport type from extra
            if ('transportType' in extra && typeof extra.transportType === 'string') {
              transportType = extra.transportType;
            }
          }

          // If we still have the placeholder, generate a unique ID for stdio transport
          if (sessionId === 'placeholder-session-id') {
            // For stdio transport, use a fixed session ID with a prefix
            sessionId = 'stdio-session';
            transportType = 'stdio';
            logger.warn({ toolName: definition.name }, "Using stdio session ID. SSE notifications will be limited to polling.");
          }

          const context: ToolExecutionContext = { sessionId, transportType };
          logger.debug({ toolName: definition.name, sessionId: context.sessionId, transportType: context.transportType }, "Server handler executing tool with context");
          // --- Context Creation END ---

          // Create a fresh deep copy specifically for this execution to prevent closure/reference issues
          let executionConfig: OpenRouterConfig;
          try {
            // Ensure loadedConfigParam and its llm_mapping are handled correctly during copy
            const configToCopy = {
              ...loadedConfigParam,
              llm_mapping: loadedConfigParam.llm_mapping || {} // Ensure mapping exists before stringify
            };
            executionConfig = JSON.parse(JSON.stringify(configToCopy));
            logger.debug({ configForExecution: executionConfig }, 'Deep copied config for executeTool call.');
          } catch (copyError) {
            logger.error({ err: copyError }, 'Failed to deep copy config in handler. Using original reference (may cause issues).');
            executionConfig = loadedConfigParam; // Fallback, but log error
          }

          // Execute the tool, passing the created context and the *freshly copied* config
          const result = await executeTool(definition.name, params, executionConfig, context);

          // --- State Management Integration START (Keep this part for now) ---
          // Store the current interaction (tool call + response)
          // Ensure 'result' has a timestamp - add it if executeTool doesn't
          const responseWithTimestamp = {
            ...result,
            timestamp: Date.now(),
          };
          addInteraction(sessionId, {
            toolCall: {
              name: definition.name,
              params: params,
              // Using current time as message timestamp isn't available on 'extra'
              timestamp: Date.now()
            },
            response: responseWithTimestamp,
          });
          // --- State Management Integration END ---

          return result; // Return the result from the tool execution
        }
      );
    }
    logger.info(`Registered ${allToolDefinitions.length} tools dynamically with MCP server.`);
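
    The session-ID resolution inside that handler can be read as a small precedence chain: an explicit sessionId on 'extra', then the request query string, body, and x-session-id header, then a fixed stdio fallback. The sketch below condenses the same logic into one helper; the helper name and the ResolvedSession type are illustrative, not part of the project.

    // Illustrative helper; the server inlines this logic in the handler above.
    interface ResolvedSession { sessionId: string; transportType: string; }

    function resolveSession(extra?: unknown): ResolvedSession {
      let sessionId = 'placeholder-session-id';
      let transportType = 'unknown';
      if (extra && typeof extra === 'object') {
        const e = extra as { sessionId?: unknown; transportType?: unknown; req?: { query?: { sessionId?: string }, body?: { session_id?: string }, headers?: { 'x-session-id'?: string } } };
        if (typeof e.sessionId === 'string') {
          sessionId = e.sessionId;                // 1. explicit session ID on 'extra'
        } else if (e.req) {
          sessionId = e.req.query?.sessionId      // 2. query string
            ?? e.req.body?.session_id             // 3. request body
            ?? e.req.headers?.['x-session-id']    // 4. header
            ?? sessionId;
        }
        if (typeof e.transportType === 'string') transportType = e.transportType;
      }
      if (sessionId === 'placeholder-session-id') {
        sessionId = 'stdio-session';              // 5. stdio fallback; progress limited to polling
        transportType = 'stdio';
      }
      return { sessionId, transportType };
    }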

