generate_project_plan

Create structured project plans and tasks using an LLM by analyzing prompts and attached files. Supports multiple providers and models for tailored outputs.

Instructions

Use an LLM to generate a project plan and tasks from a prompt. The LLM will analyze the prompt and any attached files to create a structured project plan.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| attachments | No | Optional array of paths to files to attach as context. There is no need to read the files before calling this tool! | |
| model | Yes | The specific model to use (e.g., 'gpt-4-turbo' for OpenAI). | |
| prompt | Yes | The prompt text or file path to use for generating the project plan. | |
| provider | Yes | The LLM provider to use (requires corresponding API key to be set). | |
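
For example, a call to this tool might pass arguments like the following. This is a hypothetical invocation; the prompt text and attachment paths are illustrative placeholders.

    const args = {
      prompt: "Plan a migration of our billing service from REST to gRPC",
      provider: "openai",
      model: "gpt-4-turbo",
      // The server reads these files itself; there is no need to read them before calling the tool
      attachments: ["docs/requirements.md", "docs/current-architecture.md"],
    };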

Implementation Reference

  • Tool definition and input schema for 'generate_project_plan', specifying parameters like prompt, provider, model, and optional attachments.
    const generateProjectPlanTool: Tool = {
      name: "generate_project_plan",
      description:
        "Use an LLM to generate a project plan and tasks from a prompt. The LLM will analyze the prompt and any attached files to create a structured project plan.",
      inputSchema: {
        type: "object",
        properties: {
          prompt: {
            type: "string",
            description: "The prompt text or file path to use for generating the project plan.",
          },
          provider: {
            type: "string",
            enum: ["openai", "google", "deepseek"],
            description: "The LLM provider to use (requires corresponding API key to be set).",
          },
          model: {
            type: "string",
            description: "The specific model to use (e.g., 'gpt-4-turbo' for OpenAI).",
          },
          attachments: {
            type: "array",
            items: { type: "string" },
            description: "Optional array of paths to files to attach as context. There is no need to read the files before calling this tool!",
          },
        },
        required: ["prompt", "provider", "model"],
      },
    };
  • MCP tool executor for 'generate_project_plan': validates the incoming arguments and delegates execution to the TaskManager.generateProjectPlan method.
    const generateProjectPlanToolExecutor: ToolExecutor = {
      name: "generate_project_plan",
      async execute(taskManager, args) {
        // 1. Argument Validation
        const prompt = validateRequiredStringParam(args.prompt, "prompt");
        const provider = validateRequiredStringParam(args.provider, "provider");
        const model = validateRequiredStringParam(args.model, "model");

        // Validate optional attachments
        let attachments: string[] = [];
        if (args.attachments !== undefined) {
          if (!Array.isArray(args.attachments)) {
            throw new AppError(
              "Invalid attachments: must be an array of strings",
              AppErrorCode.InvalidArgument
            );
          }
          attachments = args.attachments.map((att, index) => {
            if (typeof att !== "string") {
              throw new AppError(
                `Invalid attachment at index ${index}: must be a string`,
                AppErrorCode.InvalidArgument
              );
            }
            return att;
          });
        }

        // 2. Core Logic Execution
        const resultData = await taskManager.generateProjectPlan({
          prompt,
          provider,
          model,
          attachments,
        });

        // 3. Return raw success data
        return resultData;
      },
    };
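  • The validateRequiredStringParam helper is not shown on this page. A minimal sketch of what such a helper might look like, assuming the same AppError and AppErrorCode types used above:

    function validateRequiredStringParam(value: unknown, name: string): string {
      // Reject missing, non-string, or empty values with an InvalidArgument error
      if (typeof value !== "string" || value.length === 0) {
        throw new AppError(
          `Invalid or missing required parameter: ${name}`,
          AppErrorCode.InvalidArgument
        );
      }
      return value;
    }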
  • Registers the generateProjectPlanToolExecutor in the toolExecutorMap used by executeToolAndHandleErrors to dispatch tool calls.
    toolExecutorMap.set(generateProjectPlanToolExecutor.name, generateProjectPlanToolExecutor);
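  • The internals of executeToolAndHandleErrors are not shown on this page; the dispatch it performs through the map presumably looks roughly like this sketch (error formatting for the MCP response is omitted, and everything beyond the toolExecutorMap lookup is an assumption):

    async function executeToolAndHandleErrors(
      name: string,
      args: Record<string, unknown>,
      taskManager: TaskManager
    ) {
      // Look up the executor registered under this tool name
      const executor = toolExecutorMap.get(name);
      if (!executor) {
        throw new AppError(`Unknown tool: ${name}`, AppErrorCode.InvalidArgument);
      }
      // Delegate to the executor; AppError instances thrown here would be
      // converted into MCP error responses by the omitted handling code
      return executor.execute(taskManager, args);
    }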
  • Includes generateProjectPlanTool in the ALL_TOOLS export array for MCP tool listing.
    generateProjectPlanTool,
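  • In context, the export is presumably an array of all tool definitions; a sketch, with the other entries elided:

    export const ALL_TOOLS: Tool[] = [
      // ...other task and project management tools...
      generateProjectPlanTool,
    ];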
  • Core implementation of the tool logic: reads attachments, builds the LLM prompt with the output schema, dynamically imports the appropriate AI SDK provider, generates a structured project plan and tasks via generateObject, creates the project, and handles provider-specific errors such as missing API keys or invalid models.
    public async generateProjectPlan({
      prompt,
      provider,
      model,
      attachments,
    }: {
      prompt: string;
      provider: string;
      model: string;
      attachments: string[];
    }): Promise<ProjectCreationSuccessData> {
      await this.ensureInitialized();

      // Read all attachment files
      const attachmentContents: string[] = [];
      for (const filename of attachments) {
        try {
          const content = await this.fileSystemService.readAttachmentFile(filename);
          attachmentContents.push(content);
        } catch (error) {
          throw new AppError(
            `Failed to read attachment file: ${filename}`,
            AppErrorCode.FileReadError,
            error
          );
        }
      }

      // Define the schema for the LLM's response using the jsonSchema helper
      const projectPlanSchema = jsonSchema<ProjectPlanOutput>({
        type: "object",
        properties: {
          projectPlan: { type: "string" },
          tasks: {
            type: "array",
            items: {
              type: "object",
              properties: {
                title: { type: "string" },
                description: { type: "string" },
                toolRecommendations: { type: "string" },
                ruleRecommendations: { type: "string" },
              },
              required: ["title", "description"],
            },
          },
        },
        required: ["tasks"],
      });

      // Wrap prompt and attachments in XML tags
      let llmPrompt = `<prompt>${prompt}</prompt>`;
      llmPrompt += `\n<outputFormat>Return your output as JSON formatted according to the following schema: ${JSON.stringify(projectPlanSchema, null, 2)}</outputFormat>`;
      for (const content of attachmentContents) {
        llmPrompt += `\n<attachment>${content}</attachment>`;
      }

      // Import and configure the appropriate provider
      let modelProvider;
      switch (provider) {
        case "openai": {
          const { openai } = await import("@ai-sdk/openai");
          modelProvider = openai(model);
          break;
        }
        case "google": {
          const { google } = await import("@ai-sdk/google");
          modelProvider = google(model);
          break;
        }
        case "deepseek": {
          const { deepseek } = await import("@ai-sdk/deepseek");
          modelProvider = deepseek(model);
          break;
        }
        default:
          throw new AppError(`Invalid provider: ${provider}`, AppErrorCode.InvalidProvider);
      }

      try {
        const { object } = await generateObject({
          model: modelProvider,
          schema: projectPlanSchema,
          prompt: llmPrompt,
        });
        return await this.createProject(prompt, object.tasks, object.projectPlan);
      } catch (err: any) {
        // Detect missing API key errors across providers
        if (
          err.name === "LoadAPIKeyError" ||
          err.message.includes("API key is missing") ||
          err.message.includes("You didn't provide an API key") ||
          err.message.includes("unregistered callers") ||
          (err.responseBody && err.responseBody.includes("Authentication Fails"))
        ) {
          throw new AppError(
            `Missing API key environment variable required for ${provider}`,
            AppErrorCode.ConfigurationError,
            err
          );
        }
        // Check for invalid model errors by looking at the error code, type, and message
        if (err.data?.error?.code === "model_not_found" && err.message.includes("model")) {
          throw new AppError(
            `Invalid model: ${model} is not available for ${provider}`,
            AppErrorCode.InvalidModel,
            err
          );
        }
        // For unknown errors, preserve the original error but wrap it
        throw new AppError(
          "Failed to generate project plan due to an unexpected error",
          AppErrorCode.LLMGenerationError,
          err
        );
      }
    }
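  • The ProjectPlanOutput type passed to jsonSchema<ProjectPlanOutput>(...) is not shown here. Based on the schema above, it would be shaped roughly like this (a sketch inferred from the schema, not the actual declaration):

    interface ProjectPlanOutput {
      // Optional free-text plan; not in the schema's required list
      projectPlan?: string;
      tasks: Array<{
        title: string;
        description: string;
        toolRecommendations?: string;
        ruleRecommendations?: string;
      }>;
    }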
