
Aurai Advisor (上级顾问 MCP)

by LZMW

report_progress

Report execution results and receive next-step guidance after implementing AI recommendations. Submit actions taken, outcomes, and feedback to continue problem-solving workflows.

Instructions

Report execution progress and request next-step guidance.

After carrying out the supervisor AI's suggestions, call this tool to report the results and obtain guidance for the next step.

Usage scenario: after executing the supervisor AI's suggestions, report the outcome and receive follow-up guidance. Parameters: actions_taken (actions performed), result (success/failed/partial), new_error (new error message), feedback (execution feedback).

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| actions_taken | Yes | Actions performed | |
| result | Yes | Execution result: success, failed, partial | |
| new_error | No | New error message | |
| feedback | No | Execution feedback | |
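A minimal sketch of a client-side argument payload for this schema. The field names, types, and the result enum come from the schema above; the values and the `validate_args` helper are illustrative, not part of the server.

```python
# Illustrative report_progress argument payload matching the schema above.
# validate_args is a hypothetical client-side check, not part of the server.
ALLOWED_RESULTS = {"success", "failed", "partial"}

def validate_args(args: dict) -> dict:
    """Check required fields and the result enum before sending the tool call."""
    for field in ("actions_taken", "result"):
        if field not in args:
            raise ValueError(f"missing required field: {field}")
    if args["result"] not in ALLOWED_RESULTS:
        raise ValueError(f"result must be one of {sorted(ALLOWED_RESULTS)}")
    # Optional fields default to None, mirroring the schema defaults.
    return {"new_error": None, "feedback": None, **args}

call = validate_args({
    "actions_taken": "Applied the suggested null check in parser.py",
    "result": "partial",
    "feedback": "Crash is gone, but output is still truncated",
})
```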

Implementation Reference

  • The main handler function for the 'report_progress' tool. It reports execution progress after following the supervisor AI's suggestions, checks the iteration limit, builds a prompt using the build_progress_prompt helper, calls the supervisor AI client, and returns guidance for the next steps. Registered via the @mcp.tool() decorator.
    @mcp.tool()
    async def report_progress(
        actions_taken: str = Field(description="Actions performed"),
        result: str = Field(description="Execution result: success, failed, partial"),
        new_error: str | None = Field(default=None, description="New error message"),
        feedback: str | None = Field(default=None, description="Execution feedback"),
    ) -> dict[str, Any]:
        """
        Report execution progress and request next-step guidance.

        After carrying out the supervisor AI's suggestions, call this tool to
        report the results and obtain guidance for the next step.

        ---
        **Usage scenario**: after executing the supervisor AI's suggestions, report the outcome and receive follow-up guidance
        **Parameters**: actions_taken (actions performed), result (success/failed/partial), new_error (new error message), feedback (execution feedback)
        """
        config = get_aurai_config()

        # Check the iteration count
        iteration = len(_conversation_history)
        if iteration >= config.max_iterations:
            logger.warning(f"Reached the maximum iteration count ({config.max_iterations}); requesting human intervention")
            return {
                "analysis": f"Maximum iteration count reached ({config.max_iterations})",
                "guidance": "Human intervention is recommended to inspect the problem",
                "action_items": ["Please have a human review the current state"],
                "needs_another_iteration": False,
                "resolved": False,
                "requires_human_intervention": True,
            }

        logger.info(f"Received report_progress request, result: {result}")

        # Build the prompt
        prompt = build_progress_prompt(
            iteration=iteration,
            actions_taken=actions_taken,
            result=result,
            new_error=new_error,
            feedback=feedback,
            conversation_history=_get_history(),
        )

        # Call the supervisor AI, passing the conversation history
        client = get_aurai_client()
        response = await client.chat(
            user_message=prompt,
            conversation_history=_get_history()
        )

        # Record this turn in the history
        _add_to_history({
            "type": "progress",
            "actions_taken": actions_taken,
            "result": result,
            "new_error": new_error,
            "feedback": feedback,
            "response": response,
        })

        # If the problem is resolved, clear the conversation history
        if response.get("resolved", False):
            history_count = len(_conversation_history)
            _conversation_history.clear()
            logger.info(f"[Done] Problem resolved; conversation history cleared ({history_count} entries removed)")

        logger.info(f"report_progress finished, resolved: {response.get('resolved', False)}")
        return response
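The handler caps the loop at `config.max_iterations` and wipes the conversation history once the supervisor AI marks the problem resolved. A standalone sketch of that control flow, where `MAX_ITERATIONS` and `history` are hypothetical stand-ins for `get_aurai_config()` and `_conversation_history`:

```python
# Standalone sketch of the iteration guard and history-clearing logic.
# MAX_ITERATIONS and history are stand-ins for the server's real config/state.
MAX_ITERATIONS = 3
history: list[dict] = []

def guard_and_record(response: dict) -> dict:
    # Refuse further iterations once the cap is reached.
    if len(history) >= MAX_ITERATIONS:
        return {
            "guidance": "Human intervention is recommended",
            "resolved": False,
            "requires_human_intervention": True,
        }
    history.append(response)
    if response.get("resolved", False):
        history.clear()  # start fresh once the problem is solved
    return response

guard_and_record({"resolved": False})
guard_and_record({"resolved": True})  # clears both recorded entries
```

The key design choice this mirrors: the iteration counter is simply the length of the history, so clearing the history on resolution also resets the budget for the next problem.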
  • Input schema definition using Pydantic Field descriptors: actions_taken (required string), result (required string - success/failed/partial), new_error (optional string), feedback (optional string). These define the tool's input parameters and their types.
        actions_taken: str = Field(description="Actions performed"),
        result: str = Field(description="Execution result: success, failed, partial"),
        new_error: str | None = Field(default=None, description="New error message"),
        feedback: str | None = Field(default=None, description="Execution feedback"),
    ) -> dict[str, Any]:
  • Helper function build_progress_prompt() that constructs the prompt for reporting progress. Takes iteration number, actions_taken, result, new_error, feedback, and conversation_history as parameters, and returns a formatted prompt string with execution details and guidance request.
    def build_progress_prompt(
        iteration: int,
        actions_taken: str,
        result: str,
        new_error: str | None = None,
        feedback: str | None = None,
        conversation_history: list[dict[str, str]] | None = None,
    ) -> str:
        """
        Build the prompt for reporting progress.

        Args:
            iteration: Iteration count
            actions_taken: Actions performed
            result: Execution result (success | failed | partial)
            new_error: New error message
            feedback: Execution feedback
            conversation_history: Conversation history
        """
        # Build the conversation-history section
        history_desc = ""
        if conversation_history:
            history_desc = "\n## Conversation History\n\n"
            for i, turn in enumerate(conversation_history[-5:], 1):
                history_desc += f"### Round {i}\n"
                if "action" in turn:
                    history_desc += f"**Action**: {turn['action']}\n"
                if "result" in turn:
                    history_desc += f"**Result**: {turn['result']}\n"
                history_desc += "\n"

        prompt = f"""# Progress Report

    ## Execution Status
    - **Iteration round**: round {iteration + 1}
    - **Actions performed**: {actions_taken}
    - **Result**: {result}
    {f'- **New error**: {new_error}' if new_error else ''}
    {f'- **Feedback**: {feedback}' if feedback else ''}
    {history_desc}

    ## Please Assess

    1. Has the problem been resolved?
    2. Should another approach be tried?
    3. Is human intervention needed?

    Please respond in the same JSON format as before and provide next-step guidance.
    """
        return prompt
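One subtlety in the template above: the optional lines use inline conditionals, so an omitted field renders as an empty string, but the template's own newline survives, leaving a blank line in the generated prompt. A minimal demonstration of that pattern (sample values are illustrative):

```python
# Demonstrates the inline-conditional pattern used in build_progress_prompt:
# a None field yields an empty string, but its line break remains, so the
# rendered prompt keeps a blank line where the field would have been.
new_error = None
feedback = "tests now pass"

section = f"""- **Result**: partial
{f'- **New error**: {new_error}' if new_error else ''}
{f'- **Feedback**: {feedback}' if feedback else ''}"""

lines = section.split("\n")
```

The blank line is harmless for an LLM consumer, which is presumably why the implementation accepts it rather than filtering empty lines.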

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/LZMW/mcp-aurai-server'
