generate_project_plan

Create structured project plans and tasks using an LLM by analyzing prompts and attached files. Supports multiple providers and models for tailored outputs.

Instructions

Use an LLM to generate a project plan and tasks from a prompt. The LLM will analyze the prompt and any attached files to create a structured project plan.

Input Schema

| Name | Required | Description | Default |
| ----------- | -------- | ----------- | ------- |
| attachments | No | Optional array of paths to files to attach as context. There is no need to read the files before calling this tool! | — |
| model | Yes | The specific model to use (e.g., 'gpt-4-turbo' for OpenAI). | — |
| prompt | Yes | The prompt text or file path to use for generating the project plan. | — |
| provider | Yes | The LLM provider to use (requires corresponding API key to be set). | — |
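Per the schema above, a minimal tool call might pass arguments shaped like the following. The prompt text, model name, and attachment path are illustrative placeholders, not values from the source:

```typescript
// Illustrative arguments object for a generate_project_plan call.
// All values here are placeholders.
const args = {
  prompt: "Build a CLI todo app in TypeScript",
  provider: "openai", // must be one of "openai" | "google" | "deepseek"
  model: "gpt-4-turbo",
  attachments: ["docs/requirements.md"], // optional; the tool reads these itself
};
```

Note that `attachments` takes file paths, not file contents: the tool reads the files on its own, so the caller should not inline them.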

Implementation Reference

  • Tool definition and input schema for 'generate_project_plan', specifying parameters like prompt, provider, model, and optional attachments.
    const generateProjectPlanTool: Tool = {
      name: "generate_project_plan",
      description: "Use an LLM to generate a project plan and tasks from a prompt. The LLM will analyze the prompt and any attached files to create a structured project plan.",
      inputSchema: {
        type: "object",
        properties: {
          prompt: {
            type: "string",
            description: "The prompt text or file path to use for generating the project plan.",
          },
          provider: {
            type: "string",
            enum: ["openai", "google", "deepseek"],
            description: "The LLM provider to use (requires corresponding API key to be set).",
          },
          model: {
            type: "string",
            description: "The specific model to use (e.g., 'gpt-4-turbo' for OpenAI).",
          },
          attachments: {
            type: "array",
            items: {
              type: "string",
            },
            description: "Optional array of paths to files to attach as context. There is no need to read the files before calling this tool!",
          },
        },
        required: ["prompt", "provider", "model"],
      },
    };
  • MCP tool executor for 'generate_project_plan': performs input validation on arguments and delegates execution to TaskManager.generateProjectPlan method.
    const generateProjectPlanToolExecutor: ToolExecutor = {
      name: "generate_project_plan",
      async execute(taskManager, args) {
        // 1. Argument Validation
        const prompt = validateRequiredStringParam(args.prompt, "prompt");
        const provider = validateRequiredStringParam(args.provider, "provider");
        const model = validateRequiredStringParam(args.model, "model");
    
        // Validate optional attachments
        let attachments: string[] = [];
        if (args.attachments !== undefined) {
          if (!Array.isArray(args.attachments)) {
            throw new AppError(
              "Invalid attachments: must be an array of strings",
              AppErrorCode.InvalidArgument
            );
          }
          attachments = args.attachments.map((att, index) => {
            if (typeof att !== "string") {
              throw new AppError(
                `Invalid attachment at index ${index}: must be a string`,
                AppErrorCode.InvalidArgument
              );
            }
            return att;
          });
        }
    
        // 2. Core Logic Execution
        const resultData = await taskManager.generateProjectPlan({
          prompt,
          provider,
          model,
          attachments,
        });
    
        // 3. Return raw success data
        return resultData;
      },
    };
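The executor relies on a `validateRequiredStringParam` helper that is not shown in this excerpt. A plausible sketch follows; the exact error message, and the shapes of `AppError` and `AppErrorCode` as written here, are assumptions rather than the actual implementation:

```typescript
// Hypothetical sketch of the validation helper assumed by the executor above.
// AppError and AppErrorCode mirror names used in the source, but their exact
// shapes here are assumptions.
enum AppErrorCode {
  InvalidArgument = "InvalidArgument",
}

class AppError extends Error {
  constructor(
    message: string,
    public code: AppErrorCode,
    public cause?: unknown
  ) {
    super(message);
  }
}

// Narrows an unknown argument to a non-empty string, or throws an AppError.
function validateRequiredStringParam(value: unknown, name: string): string {
  if (typeof value !== "string" || value.trim().length === 0) {
    throw new AppError(
      `Missing or invalid required parameter: ${name}`,
      AppErrorCode.InvalidArgument
    );
  }
  return value;
}
```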
  • Registers the generateProjectPlanToolExecutor in the toolExecutorMap used by executeToolAndHandleErrors to dispatch tool calls.
    toolExecutorMap.set(generateProjectPlanToolExecutor.name, generateProjectPlanToolExecutor);
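For context, a minimal dispatcher over that map might look like the sketch below. The real `executeToolAndHandleErrors` presumably also converts thrown `AppError`s into MCP error responses; that part is omitted here, and the `ToolExecutor` shape and loose `taskManager` typing are assumptions:

```typescript
// Minimal sketch of map-based tool dispatch (error-to-response mapping omitted).
interface ToolExecutor {
  name: string;
  execute(taskManager: unknown, args: Record<string, unknown>): Promise<unknown>;
}

const toolExecutorMap = new Map<string, ToolExecutor>();

// Looks up the named executor and delegates to it.
async function executeToolAndHandleErrors(
  name: string,
  args: Record<string, unknown>,
  taskManager: unknown
): Promise<unknown> {
  const executor = toolExecutorMap.get(name);
  if (!executor) {
    throw new Error(`Unknown tool: ${name}`);
  }
  return executor.execute(taskManager, args);
}
```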
  • Includes generateProjectPlanTool in the ALL_TOOLS export array for MCP tool listing.
    generateProjectPlanTool,
  • Core implementation of the tool logic: reads attachments, builds LLM prompt with schema, dynamically imports AI SDK provider based on input, generates structured project plan and tasks via generateObject, creates project, and handles provider-specific errors like missing API keys or invalid models.
    public async generateProjectPlan({
      prompt,
      provider,
      model,
      attachments,
    }: {
      prompt: string;
      provider: string;
      model: string;
      attachments: string[];
    }): Promise<ProjectCreationSuccessData> {
      await this.ensureInitialized();
    
      // Read all attachment files
      const attachmentContents: string[] = [];
      for (const filename of attachments) {
        try {
          const content = await this.fileSystemService.readAttachmentFile(filename);
          attachmentContents.push(content);
        } catch (error) {
          throw new AppError(`Failed to read attachment file: ${filename}`, AppErrorCode.FileReadError, error);
        }
      }
    
      // Define the schema for the LLM's response using jsonSchema helper
      const projectPlanSchema = jsonSchema<ProjectPlanOutput>({
        type: "object",
        properties: {
          projectPlan: { type: "string" },
          tasks: {
            type: "array",
            items: {
              type: "object",
              properties: {
                title: { type: "string" },
                description: { type: "string" },
                toolRecommendations: { type: "string" },
                ruleRecommendations: { type: "string" },
              },
              required: ["title", "description"],
            },
          },
        },
        required: ["tasks"],
      });
    
      // Wrap prompt and attachments in XML tags
      let llmPrompt = `<prompt>${prompt}</prompt>`;
      llmPrompt += `\n<outputFormat>Return your output as JSON formatted according to the following schema: ${JSON.stringify(projectPlanSchema, null, 2)}</outputFormat>`;
      for (const content of attachmentContents) {
        llmPrompt += `\n<attachment>${content}</attachment>`;
      }
    
      // Import and configure the appropriate provider
      let modelProvider;
      switch (provider) {
        case "openai": {
          const { openai } = await import("@ai-sdk/openai");
          modelProvider = openai(model);
          break;
        }
        case "google": {
          const { google } = await import("@ai-sdk/google");
          modelProvider = google(model);
          break;
        }
        case "deepseek": {
          const { deepseek } = await import("@ai-sdk/deepseek");
          modelProvider = deepseek(model);
          break;
        }
        default:
          throw new AppError(`Invalid provider: ${provider}`, AppErrorCode.InvalidProvider);
      }
    
      try {
        const { object } = await generateObject({
          model: modelProvider,
          schema: projectPlanSchema,
          prompt: llmPrompt,
        });
        return await this.createProject(prompt, object.tasks, object.projectPlan);
      } catch (err: any) {
        if (err.name === 'LoadAPIKeyError' || 
            err.message.includes('API key is missing') || 
            err.message.includes('You didn\'t provide an API key') ||
            err.message.includes('unregistered callers') ||
            (err.responseBody && err.responseBody.includes('Authentication Fails'))) {
          throw new AppError(
            `Missing API key environment variable required for ${provider}`,
            AppErrorCode.ConfigurationError,
            err
          );
        }
        // Check for invalid model errors by looking at the error code, type, and message
        if ((err.data?.error?.code === 'model_not_found') && 
            err.message.includes('model')) {
          throw new AppError(
            `Invalid model: ${model} is not available for ${provider}`,
            AppErrorCode.InvalidModel,
            err
          );
        }
        // For unknown errors, preserve the original error but wrap it
        throw new AppError(
          "Failed to generate project plan due to an unexpected error",
          AppErrorCode.LLMGenerationError,
          err
        );
      }
    }
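The prompt-assembly step above can be isolated as a pure function, which makes the XML wrapping straightforward to test. The function name `buildLlmPrompt` is illustrative and is not an export of the actual module:

```typescript
// Pure restatement of the prompt-assembly logic shown above.
// buildLlmPrompt is an illustrative name, not part of the real module.
function buildLlmPrompt(
  prompt: string,
  schemaJson: string,
  attachmentContents: string[]
): string {
  let llmPrompt = `<prompt>${prompt}</prompt>`;
  llmPrompt += `\n<outputFormat>Return your output as JSON formatted according to the following schema: ${schemaJson}</outputFormat>`;
  for (const content of attachmentContents) {
    llmPrompt += `\n<attachment>${content}</attachment>`;
  }
  return llmPrompt;
}
```

One caveat worth noting: attachment contents are interpolated verbatim, so a file that itself contains a `</attachment>` sequence could confuse the model about where the attachment ends; any escaping would be the caller's responsibility.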
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions using an LLM and analyzing files, but fails to describe critical traits: it doesn't specify if this is a read-only or mutating operation, what the output format looks like, potential rate limits, error conditions, or costs. For a tool that likely involves external API calls and file processing, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences that directly address the tool's function. It's front-loaded with the core purpose and avoids unnecessary details. However, it could be slightly more structured by explicitly separating the tool's action from its inputs or constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of an LLM-based generation tool with file attachments and no output schema, the description is incomplete. It lacks information on the output format (e.g., structured plan vs. raw text), error handling, dependencies like API keys, and how it integrates with sibling tools. This makes it inadequate for an agent to use effectively without guesswork.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema—it implies that 'attachments' are used as context and that the LLM analyzes the 'prompt', but doesn't provide additional syntax or format details. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Use an LLM to generate a project plan and tasks from a prompt.' It specifies the verb ('generate'), resource ('project plan and tasks'), and mechanism ('LLM'). However, it doesn't explicitly differentiate from siblings like 'create_project' or 'add_tasks_to_project' beyond the LLM aspect, which is why it doesn't reach a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions analyzing 'prompt and any attached files' but doesn't clarify scenarios where this is preferred over manual creation with 'create_project' or 'add_tasks_to_project', nor does it mention prerequisites like API keys. This leaves the agent with insufficient context for decision-making.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/chriscarrollsmith/taskqueue-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server