
generateContextPrompt

Creates detailed prompts summarizing plan status, including goals and actionable tasks, to provide context for AI decision-making in task management systems.

Instructions

Generates a detailed text prompt summarizing the plan's current state. The prompt can be supplied as context to an AI model to help it decide its next action. It covers the overall goal, the current task, the list of executable tasks, and more.

Input Schema


No arguments

Implementation Reference

  • MCP tool handler and registration for 'generateContextPrompt'. Instantiates DependencyPromptGenerator with the global plan_manager and delegates to its generate_context_prompt() method to produce the prompt string.
    @mcp.tool()
    def generateContextPrompt() -> str:
        """
        生成一个详细的文本提示,总结计划的当前状态。
        这个提示可以作为上下文提供给AI模型,以帮助其决定下一步行动。
        内容包括:总体目标、当前任务、可执行任务列表等。
        """
        from .dependency_tools import DependencyPromptGenerator
        generator = DependencyPromptGenerator(plan_manager)
        prompt = generator.generate_context_prompt()
        return prompt
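Because the tool takes no arguments, invoking it amounts to calling the registered function, which delegates all prompt assembly to the generator. The following self-contained sketch mimics that pattern with a stubbed plan manager; `StubPlanManager` and its response shapes are illustrative assumptions modeled on the handler above, not the real MCPlanManager API:

```python
# Illustrative sketch only: StubPlanManager and its response shapes are
# assumptions modeled on the handler above, not the real MCPlanManager API.

class StubPlanManager:
    def getPlanStatus(self):
        return {"success": True}

    def getCurrentTask(self):
        return {"success": True,
                "data": {"id": "T1", "name": "Write parser",
                         "status": "in_progress",
                         "reasoning": "No unmet dependencies"}}


def generate_context_prompt(pm) -> str:
    """Assemble a markdown context prompt from plan-manager responses."""
    parts = ["# Task Execution Context", "## Current Status"]
    current = pm.getCurrentTask()
    if current["success"]:
        task = current["data"]
        parts.append(f"- Current task: [{task['id']}] {task['name']}")
        parts.append(f"- Task status: {task['status']}")
    else:
        parts.append("- No active task")
    return "\n".join(parts)


prompt = generate_context_prompt(StubPlanManager())
print(prompt)
```

The same shape generalizes to the real generator: each section is a list of markdown lines, joined once at the end.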
  • Core implementation of the context prompt generation within DependencyPromptGenerator class. Gathers plan status, current task, executable tasks, dependencies, and suggestions to build a comprehensive markdown-formatted prompt string.
    def generate_context_prompt(self) -> str:
        """生成上下文感知的提示词"""
        plan_status = self.pm.getPlanStatus()
        if not plan_status["success"]:
            return "Error: Could not get plan status"
        
        dump_result = self.pm.dumpPlan()
        if not dump_result.get("success"):
            return "Error: Could not dump plan data"
        plan_data = dump_result["data"]
        
        goal = plan_data["meta"]["goal"]
        tasks = plan_data["tasks"]
        state = plan_data["state"]
        
        prompt_parts = [
            "# Task Execution Context",
            f"## Overall Goal\n{goal}",
            "",
            "## Current Status"
        ]
        
        # Current task info
        current_task_response = self.pm.getCurrentTask()
        if current_task_response["success"]:
            task = current_task_response["data"]
            prompt_parts.extend([
                f"- Current task: [{task['id']}] {task['name']}",
                f"- Task status: {task['status']}",
                f"- Reasoning: {task['reasoning']}"
            ])
        else:
            prompt_parts.append("- No active task")
        
        # Executable tasks
        executable = self.pm.getExecutableTaskList()
        if executable["success"] and len(executable["data"]) > 0:
            prompt_parts.append("\n## Executable Tasks")
            for task in executable["data"]:
                prompt_parts.append(f"- [{task['id']}] {task['name']}")
        
        # Task dependencies
        prompt_parts.extend([
            "",
            "## Task Dependencies",
            self._generate_dependency_text(tasks)
        ])
        
        # Execution suggestions
        prompt_parts.extend([
            "",
            "## Execution Suggestions",
            self._generate_execution_suggestions(tasks, state)
        ])
        
        return "\n".join(prompt_parts)
