
generate-fullstack-starter-kit

Generate full-stack project starter kits tailored to specific use cases, with customizable tech stacks, optional features, and research-informed recommendations for efficient development.

Instructions

Generates full-stack project starter kits with custom tech stacks, research-informed recommendations, and setup scripts.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| include_optional_features | No | Optional features to include (e.g., ['Docker', 'CI/CD']) | — |
| request_recommendation | No | Whether to request recommendations for tech stack components based on research | — |
| tech_stack_preferences | No | Optional tech stack preferences (e.g., { frontend: 'Vue', backend: 'Python' }) | — |
| use_case | Yes | The specific use case for the starter kit (e.g., 'E-commerce site', 'Blog platform') | — |
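Putting the schema together, a call to this tool might pass a parameter object like the following sketch. The values are illustrative only; the guard mirrors the handler's own `use_case` check shown in the implementation reference below.

```typescript
// Illustrative parameter object for generate-fullstack-starter-kit.
// Field names come from the schema table; values are made up.
const params = {
  use_case: "E-commerce site",                                    // required, min 5 chars
  tech_stack_preferences: { frontend: "Vue", backend: "Python" }, // optional
  request_recommendation: true,                                   // optional
  include_optional_features: ["Docker", "CI/CD"],                 // optional
};

// Mirror the handler's guard: use_case must be a string of at least 5 chars.
const valid =
  typeof params.use_case === "string" && params.use_case.length >= 5;
console.log(valid); // true
```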

Implementation Reference

  • Main tool handler function that orchestrates the entire fullstack starter kit generation process, using the name 'generate-fullstack-starter-kit' for internal job tracking.
    export const generateFullstackStarterKit: ToolExecutor = async (
      params: Record<string, unknown>,
      config: OpenRouterConfig,
      context?: ToolExecutionContext
    ): Promise<CallToolResult> => {
      const sessionId = context?.sessionId || 'unknown-session';
      if (sessionId === 'unknown-session') {
          logger.warn({ tool: 'generateFullstackStarterKit' }, 'Executing tool without a valid sessionId. SSE progress updates will not be sent.');
      }
    
      logger.debug({
        configReceived: true,
        hasLlmMapping: Boolean(config.llm_mapping),
        mappingKeys: config.llm_mapping ? Object.keys(config.llm_mapping) : []
      }, 'generateFullstackStarterKit executor received config');
    
      if (!params.use_case || typeof params.use_case !== 'string') {
        return {
          content: [{ type: 'text', text: 'Error: Missing or invalid required parameter "use_case"' }],
          isError: true
        };
      }
    
      const input = params as unknown as FullstackStarterKitInput;
    
      const jobId = jobManager.createJob('generate-fullstack-starter-kit', params);
      logger.info({ jobId, tool: 'generateFullstackStarterKit', sessionId }, 'Starting background job.');
    
      const initialResponse = formatBackgroundJobInitiationResponse(
        jobId,
        'generate-fullstack-starter-kit',
        'Fullstack Starter Kit Generator'
      );
    
      setImmediate(async () => {
        const logs: string[] = [];
        let validatedDefinition: StarterKitDefinition | undefined;
        const yamlComposer = new YAMLComposer(config);
    
        try {
          logger.info({ jobId }, `Starting Fullstack Starter Kit Generator background job for use case: ${input.use_case}`);
          logs.push(`[${new Date().toISOString()}] Starting Fullstack Starter Kit Generator for ${input.use_case}`);
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Initializing starter kit generation...');
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Initializing...');
    
          let researchContext = '';
          if (input.request_recommendation) {
            logger.info({ jobId }, "Performing comprehensive pre-generation research...");
            sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Performing comprehensive research...');
            jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Performing comprehensive research...');
    
            // Enhanced research with 3 comprehensive queries (aligns with research manager's maxConcurrentRequests: 3)
            const researchQueries = [
              `Current technology stack recommendations, best practices, and architecture patterns for ${input.use_case}. Include latest versions, performance considerations, scalability factors, and industry adoption trends.`,
              `Essential features, user experience patterns, security requirements, and integration capabilities needed for ${input.use_case}. Focus on must-have vs nice-to-have features, accessibility standards, and compliance requirements.`,
              `Development workflow, deployment strategies, testing approaches, and DevOps practices for ${input.use_case}. Include CI/CD recommendations, monitoring solutions, and production readiness considerations.`
            ];
    
            logger.debug({ jobId, queryCount: researchQueries.length }, "Executing enhanced research queries in parallel");
    
            const researchResults = await Promise.all(
              researchQueries.map((query, index) =>
                performResearchQuery(query, config).then(result => ({
                  index,
                  query: query.substring(0, 100) + '...',
                  result: result.trim()
                }))
              )
            );
    
            researchContext = "## Comprehensive Pre-Generation Research Context:\n\n" +
              researchResults.map((r, i) =>
                `### Research Area ${i + 1}: ${['Technology & Architecture', 'Features & Requirements', 'Development & Deployment'][i]}\n${r.result}`
              ).join("\n\n");
    
            logger.info({ jobId, researchResultsCount: researchResults.length, totalLength: researchContext.length }, "Enhanced research completed successfully");
            logs.push(`[${new Date().toISOString()}] Enhanced research completed with ${researchResults.length} comprehensive queries.`);
            sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Comprehensive research complete.');
          }
    
          await initDirectories(context);
    
          const moduleSelectionPrompt = `
    You are an expert Full-Stack Software Architect AI. Based on the user's request and comprehensive research context, select the appropriate YAML module templates and provide necessary parameters to compose a full-stack starter kit.
    
    User Request:
    - Use Case: ${input.use_case}
    - Tech Stack Preferences: ${JSON.stringify(input.tech_stack_preferences || {}, null, 2)}
    - Optional Features: ${JSON.stringify(input.include_optional_features || [], null, 2)}
    
    ${researchContext}
    
    ## Research-Driven Module Selection Guidelines:
    
    Based on the research context above, ensure your module selections incorporate:
    1. **Technology Choices**: Use the latest recommended versions and best practices identified in the research
    2. **Architecture Patterns**: Apply the architectural patterns and scalability considerations mentioned in the research
    3. **Feature Requirements**: Include essential features and integrations identified as must-haves in the research
    4. **Development Workflow**: Select modules that support the recommended development, testing, and deployment practices
    5. **Production Readiness**: Ensure selected modules align with the monitoring, security, and compliance requirements from research
    
    When selecting modules, prioritize those that:
    - Align with current industry trends and adoption patterns from the research
    - Support the performance and scalability requirements identified
    - Include the security and compliance features mentioned in the research
    - Enable the recommended CI/CD and DevOps practices
    
    Available YAML Module Categories (and example templates):
    - Frontend: 'frontend/react-vite', 'frontend/vue-nuxt', 'frontend/angular-cli', 'frontend/nextjs', 'frontend/svelte-kit'
    - Backend: 'backend/nodejs-express', 'backend/python-django', 'backend/java-spring', 'backend/python-fastapi', 'backend/nodejs-nestjs'
    - Database: 'database/postgres', 'database/mongodb', 'database/mysql', 'database/supabase', 'database/firebase'
    - Authentication: 'auth/jwt', 'auth/oauth2-scaffold', 'auth/firebase-auth', 'auth/supabase-auth', 'auth/auth0'
    - Deployment: 'deployment/docker-compose', 'deployment/kubernetes-scaffold', 'deployment/vercel', 'deployment/netlify'
    - Utility: 'utility/logging-winston', 'utility/payment-stripe-sdk', 'utility/email-sendgrid', 'utility/voice-recognition-web-api', 'utility/calendar-integration-google', 'utility/push-notifications-web'
    
    CRITICAL: You must respond with EXACTLY this JSON structure. No markdown, no code blocks, no explanations:
    
    {
      "globalParams": {
        "projectName": "string (kebab-case, derived from use case)",
        "projectDescription": "string (detailed description of the project)",
        "frontendPath": "string (default: 'client')",
        "backendPath": "string (default: 'server')",
        "backendPort": 3001,
        "frontendPort": 3000
      },
      "moduleSelections": [
        {
          "modulePath": "string (exact module path from categories above)",
          "moduleKey": "string (use 'frontendPath', 'backendPath', or 'root')",
          "params": {}
        }
      ]
    }
    
    REQUIREMENTS:
    1. projectName must be kebab-case (e.g., "productivity-project-app")
    2. Include at least: frontend, backend, database modules
    3. Add authentication if the use case requires user management
    4. Add utility modules based on specific features mentioned
    5. Use "root" for moduleKey when module applies to project root
    6. Use "frontendPath" for frontend modules, "backendPath" for backend modules
    
    Select a comprehensive set of modules for: ${input.use_case}
    
    RESPOND WITH ONLY THE JSON OBJECT - NO OTHER TEXT OR FORMATTING.`;
    
          logger.info({ jobId }, 'Prompting LLM for YAML module selections and parameters...');
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Determining project components...');
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Determining project components...');
    
          // Determine schema complexity based on use case and research context
          const isComplexProject = determineProjectComplexity(input.use_case, researchContext, input.tech_stack_preferences);
          const selectedSchema = isComplexProject ? enhancedModuleSelectionResponseSchema : moduleSelectionResponseSchema;
          const schemaType = isComplexProject ? 'enhanced' : 'standard';
    
          logger.info({ jobId, isComplexProject, schemaType }, 'Selected module selection schema based on project complexity');
    
          // Try new schema-aware approach first, fallback to existing method
          let llmModuleSelections: ModuleSelectionResponse;
          // const _usedSchemaAware = false; // Removed unused variable
    
          try {
            logger.debug({ jobId, schemaType }, 'Attempting schema-aware module selection...');
    
            const schemaAwareResult = await performModuleSelectionCall(
              moduleSelectionPrompt,
              '', // System prompt is part of main prompt for this call
              config,
              selectedSchema
            );
    
            llmModuleSelections = schemaAwareResult.data;
            // Successfully used schema-aware call
    
            logger.info({
              jobId,
              attempts: schemaAwareResult.attempts,
              hadRetries: schemaAwareResult.hadRetries,
              processingTimeMs: schemaAwareResult.processingTimeMs,
              responseLength: schemaAwareResult.rawResponse.length
            }, 'Schema-aware module selection successful');
    
            logs.push(`[${new Date().toISOString()}] Schema-aware LLM response received (${schemaAwareResult.attempts} attempts, ${schemaAwareResult.processingTimeMs}ms).`);
    
          } catch (schemaError) {
            logger.warn({
              jobId,
              error: schemaError instanceof Error ? schemaError.message : String(schemaError)
            }, 'Schema-aware approach failed, falling back to existing method');
    
            // Fallback to existing approach
            const llmModuleResponseRaw = await performFormatAwareLlmCallWithCentralizedConfig(
              moduleSelectionPrompt,
              '', // System prompt is part of main prompt for this call
              'fullstack_starter_kit_module_selection',
              'json', // Explicitly specify JSON format
              undefined, // Schema will be inferred from task name
              0.1
            );
            logs.push(`[${new Date().toISOString()}] Fallback LLM response for module selection received.`);
            logger.debug({ jobId, rawLlmResponse: llmModuleResponseRaw }, "Raw LLM response for module selection (fallback)");
    
            try {
              // Use intelligent parsing with validation-first approach for module selection
              const { intelligentJsonParse } = await import('../../utils/llmHelper.js');
              const parsed = intelligentJsonParse(llmModuleResponseRaw, `module-selection-${jobId}`);
    
              // Validate the parsed result using the appropriate schema
              const validation = isComplexProject
                ? validateEnhancedModuleSelectionWithErrors(parsed)
                : validateModuleSelectionWithErrors(parsed);
    
              if (!validation.success) {
                throw new ValidationError(
                  `Module selection validation failed (${schemaType} schema): ${validation.errors?.join('; ')}`,
                  undefined, // No Zod issues available from our custom validation
                  { rawResponse: llmModuleResponseRaw, validationErrors: validation.errors, schemaType }
                );
              }
    
              llmModuleSelections = validation.data!;
              logger.debug({ jobId, responseLength: llmModuleResponseRaw.length, parsedSize: JSON.stringify(llmModuleSelections).length }, "Fallback parsing and validation successful");
    
            } catch (e) {
              throw new ParsingError("Failed to parse LLM response for module selections as JSON.", { rawResponse: llmModuleResponseRaw }, e instanceof Error ? e : undefined);
            }
          }
    
          logger.info({ jobId, selections: llmModuleSelections }, 'LLM module selections parsed.');
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Project components identified. Assembling kit...');
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Assembling kit from components...');
    
          const composedDefinition = await yamlComposer.compose(
            llmModuleSelections.moduleSelections,
            llmModuleSelections.globalParams,
            researchContext
          );
          logs.push(`[${new Date().toISOString()}] YAML modules composed into a single definition.`);
          logger.info({ jobId }, 'Successfully composed starter kit definition from YAML modules.');
    
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Validating final kit definition...');
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Validating final kit definition...');
          const validationResultFinal = starterKitDefinitionSchema.safeParse(composedDefinition);
          if (!validationResultFinal.success) {
            logger.error({ jobId, errors: validationResultFinal.error.issues, composedDefinition }, "Final composed definition failed schema validation");
            throw new ValidationError('Final composed definition (from YAML) failed schema validation.', validationResultFinal.error.issues, { composedDefinition });
          }
          validatedDefinition = validationResultFinal.data;
          logs.push(`[${new Date().toISOString()}] Final definition validated successfully.`);
          logger.info({ jobId }, 'Final starter kit definition validated successfully.');
    
          const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
          const sanitizedName = (validatedDefinition.projectName || input.use_case.substring(0, 30)).toLowerCase().replace(/[^a-z0-9]+/g, '-');
          const definitionFilename = `${timestamp}-${sanitizedName}-definition.json`;
          const starterKitDir = path.join(getBaseOutputDir(context), 'fullstack-starter-kit-generator');
          const definitionFilePath = path.join(starterKitDir, definitionFilename);
    
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Saving kit definition file...');
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Saving kit definition file...');
          await fs.writeJson(definitionFilePath, validatedDefinition, { spaces: 2 });
          logs.push(`[${new Date().toISOString()}] Saved validated definition to ${definitionFilename}`);
          logger.info({ jobId }, `Saved validated definition to ${definitionFilename}`);
    
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.RUNNING, 'Generating setup scripts...');
          jobManager.updateJobStatus(jobId, JobStatus.RUNNING, 'Generating setup scripts...');
          const scripts: ScriptOutput = generateSetupScripts(validatedDefinition, definitionFilename);
          const scriptShFilename = `${timestamp}-${sanitizedName}-setup.sh`;
          const scriptBatFilename = `${timestamp}-${sanitizedName}-setup.bat`;
          const scriptShFilePath = path.join(starterKitDir, scriptShFilename);
          const scriptBatFilePath = path.join(starterKitDir, scriptBatFilename);
          await fs.writeFile(scriptShFilePath, scripts.sh, { mode: 0o755 });
          await fs.writeFile(scriptBatFilePath, scripts.bat);
          logs.push(`[${new Date().toISOString()}] Saved setup scripts: ${scriptShFilename}, ${scriptBatFilename}`);
          logger.info({ jobId }, `Saved setup scripts to ${starterKitDir}`);
    
          const responseText = `
    # Fullstack Starter Kit Generator (YAML Composed)
    
    ## Project: ${validatedDefinition.projectName}
    ${validatedDefinition.description}
    
    ## Tech Stack Overview
    ${Object.entries(validatedDefinition.techStack).map(([key, tech]) =>
      `- **${key}**: ${tech.name}${tech.version ? ` (${tech.version})` : ''} - ${tech.rationale}`
    ).join('\n')}
    
    ## Project Structure Generation
    Setup scripts and the full definition JSON have been generated:
    * **Definition JSON:** \`VibeCoderOutput/fullstack-starter-kit-generator/${definitionFilename}\`
    * **Linux/macOS Script:** \`VibeCoderOutput/fullstack-starter-kit-generator/${scriptShFilename}\`
    * **Windows Script:** \`VibeCoderOutput/fullstack-starter-kit-generator/${scriptBatFilename}\`
    
    To use these scripts:
    1. Navigate to an empty directory outside this project.
    2. Copy the chosen script (\`.sh\` or \`.bat\`) and the definition JSON (\`${definitionFilename}\`) into that directory.
    3. The scripts will expect \`${definitionFilename}\` to be in the same directory they are run from.
    4. For Linux/macOS: \`chmod +x ${scriptShFilename} && ./${scriptShFilename}\`
    5. For Windows: Double-click \`${scriptBatFilename}\` or run it from the command prompt.
    
    The scripts will unpack the JSON definition to:
    - Create the project directory structure.
    - Generate all necessary files with content or generation prompts from the YAML modules.
    - Install dependencies as specified.
    - Run setup commands.
    
    ## Next Steps
    ${validatedDefinition.nextSteps.map(step => `- ${step}`).join('\n')}
    
    Generated by Fullstack Starter Kit Generator using YAML module composition.
    If any modules were dynamically generated because their templates were missing, they have been saved to the templates directory for future use.
    `;
    
          jobManager.setJobResult(jobId, { content: [{ type: "text", text: responseText }], isError: false });
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.COMPLETED, 'Starter kit generated successfully.');
    
        } catch (error: unknown) {
          const errorMsg = error instanceof Error ? error.message : String(error);
          logger.error({ err: error, jobId, tool: 'generateFullstackStarterKit' }, 'Error during background job execution.');
          logs.push(`[${new Date().toISOString()}] Error: ${errorMsg}`);
    
          let appError: AppError;
          if (error instanceof AppError) {
            appError = error;
          } else if (error instanceof Error) {
            appError = new ToolExecutionError(`Background job ${jobId} failed: ${errorMsg}`, undefined, error);
          } else {
            appError = new ToolExecutionError(`Background job ${jobId} failed with unknown error: ${errorMsg}`);
          }
    
          const mcpError = new McpError(ErrorCode.InternalError, appError.message, appError.context);
          jobManager.setJobResult(jobId, {
            content: [{ type: 'text', text: `Error in job ${jobId}: ${mcpError.message}\n\nFull Error: ${appError.stack}\n\nLogs:\n${logs.join('\n')}` }],
            isError: true,
            errorDetails: mcpError
          });
          sseNotifier.sendProgress(sessionId, jobId, JobStatus.FAILED, `Job failed: ${mcpError.message}`);
        }
      });
    
      return initialResponse;
    };
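The handler above follows a fire-and-forget pattern: create a job, return an initiation response immediately, and finish the heavy work inside `setImmediate`, reporting success or failure through the job store. A minimal self-contained sketch of that pattern follows; `JobStore` is a hypothetical stand-in for the real `jobManager`, whose API is only assumed from the excerpt.

```typescript
// Minimal sketch of the background-job pattern used by the handler above.
type JobStatus = "running" | "completed" | "failed";

class JobStore {
  private jobs = new Map<string, { status: JobStatus; result?: string }>();
  private counter = 0;

  createJob(): string {
    const id = `job-${++this.counter}`;
    this.jobs.set(id, { status: "running" });
    return id;
  }

  setResult(id: string, result: string, failed = false): void {
    this.jobs.set(id, { status: failed ? "failed" : "completed", result });
  }

  get(id: string) {
    return this.jobs.get(id);
  }
}

const store = new JobStore();

// The handler returns the job id immediately; the work runs on a later tick.
function startGeneration(useCase: string): string {
  const jobId = store.createJob();
  setImmediate(() => {
    try {
      // ...long-running generation would happen here...
      store.setResult(jobId, `kit for ${useCase}`);
    } catch (err) {
      store.setResult(jobId, String(err), true);
    }
  });
  return jobId;
}

const id = startGeneration("blog platform");
console.log(store.get(id)?.status); // "running" right after return
```

The caller can then poll the store (or subscribe to SSE progress events, as the real handler does) until the status flips to `completed` or `failed`.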
  • Input schema shape used in the tool definition for parameter validation.
    const starterKitInputSchemaShape = {
      use_case: z.string().min(5, { message: "Use case must be at least 5 characters." }).describe("The specific use case for the starter kit (e.g., 'E-commerce site', 'Blog platform')"),
      tech_stack_preferences: z.record(z.string().optional()).optional().describe("Optional tech stack preferences (e.g., { frontend: 'Vue', backend: 'Python' })"),
      request_recommendation: z.boolean().optional().describe("Whether to request recommendations for tech stack components based on research"),
      include_optional_features: z.array(z.string()).optional().describe("Optional features to include (e.g., ['Docker', 'CI/CD'])")
    };
  • Registers the tool definition with the tool registry using registerTool.
    registerTool(starterKitToolDefinition);
  • Schema for validating the composed starter kit definition JSON.
    export const starterKitDefinitionSchema = z.object({
      projectName: z.string().min(1),
      description: z.string(),
      techStack: z.record(techStackItemSchema),
      directoryStructure: z.array(fileStructureItemSchema), // Uses the exported recursive type
      dependencies: z.object({
        npm: z.object({
          root: z.object({
            dependencies: z.record(z.string()).optional(),
            devDependencies: z.record(z.string()).optional(),
          }).optional(),
        }).catchall(z.object({ // Allows for sub-directories like 'client', 'server'
          dependencies: z.record(z.string()).optional(),
          devDependencies: z.record(z.string()).optional(),
        })).optional(),
        // Potentially add other package managers here like 'yarn', 'pnpm'
      }),
      setupCommands: z.array(z.string()),
      nextSteps: z.array(z.string()),
    });
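For illustration, here is a minimal object that `starterKitDefinitionSchema` would plausibly accept. This is a sketch only: the exact fields of `techStackItemSchema` are inferred from the `responseText` template above, which reads `name`, `version`, and `rationale`, and `directoryStructure` is left empty rather than guessing at `fileStructureItemSchema`.

```typescript
// Sketch of a minimal definition matching the schema above (fields of the
// nested item schemas are inferred, not confirmed by the excerpt).
const definition = {
  projectName: "blog-platform",
  description: "A simple blog platform starter kit.",
  techStack: {
    frontend: { name: "React", version: "18.x", rationale: "Wide ecosystem" },
    backend: { name: "Express", version: "4.x", rationale: "Minimal Node API" },
  },
  directoryStructure: [],
  dependencies: {
    npm: {
      root: { dependencies: { express: "^4.19.0" } },
      client: { dependencies: { react: "^18.3.0" } }, // catchall sub-directory
    },
  },
  setupCommands: ["npm install"],
  nextSteps: ["Run the setup script", "Start the dev servers"],
};

console.log(Object.keys(definition).length); // 7 top-level keys
```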
  • Dynamic registration loop (src/server.ts:293-395) that registers all tools from the registry, including the starter kit tool, with the MCP server.
    for (const definition of allToolDefinitions) {
      logger.debug(`Registering tool "${definition.name}" with MCP server.`);
      server.tool(
        definition.name,
        definition.description,
        // Pass the raw shape directly, as expected by server.tool
        definition.inputSchema,
        // The handler now integrates state management
        async (params: Record<string, unknown>, extra?: unknown): Promise<CallToolResult> => {
          // Log the config object available within this closure
          logger.debug({ configInHandler: loadedConfigParam }, 'Tool handler closure using config object.'); // Use loadedConfigParam
    
          // --- Context Creation START ---
          // Extract session ID from extra or generate a unique one
          let sessionId = 'placeholder-session-id';
          let transportType = 'unknown';
    
          // Check if extra contains transport information
          if (extra && typeof extra === 'object') {
            // Try to get session ID from extra
            if ('sessionId' in extra && typeof extra.sessionId === 'string') {
              sessionId = extra.sessionId;
            } else if ('req' in extra && extra.req && typeof extra.req === 'object') {
              // Try to get session ID from request
              const req = extra.req as {
                query?: { sessionId?: string },
                body?: { session_id?: string },
                headers?: { 'x-session-id'?: string }
              };
              if (req.query && req.query.sessionId) {
                sessionId = req.query.sessionId as string;
              } else if (req.body && req.body.session_id) {
                sessionId = req.body.session_id as string;
              } else if (req.headers && req.headers['x-session-id']) {
                sessionId = req.headers['x-session-id'] as string;
              }
            }
    
            // Try to get transport type from extra
            if ('transportType' in extra && typeof extra.transportType === 'string') {
              transportType = extra.transportType;
            }
          }
    
          // If we still have the placeholder, generate a unique ID for stdio transport
          if (sessionId === 'placeholder-session-id') {
            // For stdio transport, use a fixed session ID with a prefix
            sessionId = 'stdio-session';
            transportType = 'stdio';
            logger.warn({ toolName: definition.name }, "Using stdio session ID. SSE notifications will be limited to polling.");
          }
    
          const context: ToolExecutionContext = {
            sessionId,
            transportType
          };
    
          logger.debug({ toolName: definition.name, sessionId: context.sessionId, transportType: context.transportType }, "Server handler executing tool with context");
          // --- Context Creation END ---
    
          // Create a fresh deep copy specifically for this execution to prevent closure/reference issues
          let executionConfig: OpenRouterConfig;
          try {
            // Ensure loadedConfigParam and its llm_mapping are handled correctly during copy
            const configToCopy = {
              ...loadedConfigParam,
              llm_mapping: loadedConfigParam.llm_mapping || {} // Ensure mapping exists before stringify
            };
            executionConfig = JSON.parse(JSON.stringify(configToCopy));
            logger.debug({ configForExecution: executionConfig }, 'Deep copied config for executeTool call.');
          } catch (copyError) {
            logger.error({ err: copyError }, 'Failed to deep copy config in handler. Using original reference (may cause issues).');
            executionConfig = loadedConfigParam; // Fallback, but log error
          }
    
    
          // Execute the tool, passing the created context and the *freshly copied* config
          const result = await executeTool(definition.name, params, executionConfig, context);
    
          // --- State Management Integration START (Keep this part for now) ---
    
          // Store the current interaction (tool call + response)
          // Ensure 'result' has a timestamp - add it if executeTool doesn't
          const responseWithTimestamp = {
              ...result,
              timestamp: Date.now(),
          };
          addInteraction(sessionId, {
              toolCall: {
                  name: definition.name,
                  params: params,
                  // Using current time as message timestamp isn't available on 'extra'
                  timestamp: Date.now()
              },
              response: responseWithTimestamp,
          });
    
          // --- State Management Integration END ---
    
          return result; // Return the result from the tool execution
        }
      );
    }
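The deep-copy step in the registration handler above isolates each tool execution from the shared config, so a mutation during one call cannot leak into later calls. A small sketch of why the JSON round-trip works (hypothetical `Config` shape; the real `OpenRouterConfig` has more fields, and `structuredClone` would serve the same purpose on Node 17+):

```typescript
// Why the handler deep-copies config per execution: mutations on the copy
// must not leak back into the shared original.
interface Config {
  apiKey: string;
  llm_mapping: Record<string, string>;
}

const shared: Config = { apiKey: "k", llm_mapping: { default: "model-a" } };

function copyForExecution(base: Config): Config {
  // Ensure llm_mapping exists before stringifying, as the handler does.
  return JSON.parse(JSON.stringify({ ...base, llm_mapping: base.llm_mapping ?? {} }));
}

const execConfig = copyForExecution(shared);
execConfig.llm_mapping.default = "model-b"; // mutate the copy only

console.log(shared.llm_mapping.default); // still "model-a"
```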
Behavior — 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While it mentions what the tool generates, it doesn't describe important behavioral aspects like whether this creates files/directories, requires specific permissions, has rate limits, or what the output looks like. For a generation tool with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently communicates the tool's core functionality without unnecessary words. It's front-loaded with the main purpose and includes three key features in a parallel structure, making every element earn its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (generating full-stack projects with multiple parameters) and the absence of both annotations and output schema, the description provides adequate but incomplete context. It covers what the tool does but lacks details about behavioral aspects and output format that would be helpful for an agent to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all four parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, but it does provide context about what the tool generates overall. This meets the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('generates') and resources ('full-stack project starter kits'), and distinguishes it from siblings by specifying custom tech stacks, research-informed recommendations, and setup scripts. It goes beyond just restating the name to explain what the tool actually produces.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through phrases like 'with custom tech stacks' and 'research-informed recommendations', suggesting when this tool might be appropriate. However, it doesn't explicitly state when to use it versus alternatives like 'generate-code-stub' or 'generate-prd' among the sibling tools, leaving some ambiguity about tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
