
Aurai Advisor (Superior Advisor MCP)

by LZMW

report_progress

Report execution results and receive next-step guidance after implementing AI recommendations. Submit actions taken, outcomes, and feedback to continue problem-solving workflows.

Instructions

Report execution progress and request next-step guidance.

After carrying out the superior AI's suggestions, call this tool to report the outcome and receive guidance on the next step.


Usage scenario: after executing the superior AI's suggestions, report the execution outcome and obtain follow-up guidance. Parameters: actions_taken (actions performed), result (success/failed/partial), new_error (new error message), feedback (execution feedback).

Input Schema

  • actions_taken (required): Actions performed
  • result (required): Execution result; one of success, failed, partial
  • new_error (optional): New error message
  • feedback (optional): Execution feedback
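As an illustration (field names from the schema above, values invented), a valid arguments payload for this tool might look like:

```python
# Illustrative arguments payload for report_progress; the field names come
# from the input schema above, the values are invented for the example.
args = {
    "actions_taken": "Added a null check in the config parser and re-ran the tests",
    "result": "partial",  # must be one of: success, failed, partial
    "new_error": "2 of 14 tests still fail with KeyError: 'timeout'",
    "feedback": "The null check fixed the crash but not the missing key",
}

# Only actions_taken and result are required; the other fields may be omitted.
required = {"actions_taken", "result"}
assert required <= args.keys()
assert args["result"] in {"success", "failed", "partial"}
```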

Output Schema

No fields defined. The handler returns a guidance dict; see the Implementation Reference below for its shape.
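Although the output schema is empty, the handler shown under Implementation Reference always returns a guidance dict with a fixed set of keys. As one concrete case, its iteration-limit early return produces the following (a max_iterations of 10 is assumed here for illustration):

```python
# Shape of the guidance dict returned when the iteration limit is reached,
# mirroring the handler's early-return branch; max_iterations is illustrative.
max_iterations = 10
escalation = {
    "analysis": f"Reached the maximum number of iterations ({max_iterations})",
    "guidance": "Recommend human intervention to inspect the problem",
    "action_items": ["Please have a human review the current state"],
    "needs_another_iteration": False,
    "resolved": False,
    "requires_human_intervention": True,
}

assert escalation["requires_human_intervention"] is True
assert escalation["resolved"] is False
```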

Implementation Reference

  • The main handler function for the 'report_progress' tool. It reports execution progress after following the superior AI's suggestions, checks the iteration limit, builds a prompt with the build_progress_prompt helper, calls the superior AI client, and returns guidance for the next steps. Registered via the @mcp.tool() decorator.
    @mcp.tool()
    async def report_progress(
        actions_taken: str = Field(description="Actions performed"),
        result: str = Field(description="Execution result: success, failed, partial"),
        new_error: str | None = Field(default=None, description="New error message"),
        feedback: str | None = Field(default=None, description="Execution feedback"),
    ) -> dict[str, Any]:
        """
        Report execution progress and request next-step guidance.

        After carrying out the superior AI's suggestions, call this tool to
        report the outcome and receive guidance on the next step.

        ---
        **Usage scenario**: after executing the superior AI's suggestions, report the outcome and obtain follow-up guidance
        **Parameters**: actions_taken (actions performed), result (success/failed/partial), new_error (new error message), feedback (execution feedback)
        """
        config = get_aurai_config()

        # Check the iteration count.
        iteration = len(_conversation_history)
        if iteration >= config.max_iterations:
            logger.warning(f"Reached the maximum number of iterations ({config.max_iterations}); requesting human intervention")
            return {
                "analysis": f"Reached the maximum number of iterations ({config.max_iterations})",
                "guidance": "Recommend human intervention to inspect the problem",
                "action_items": ["Please have a human review the current state"],
                "needs_another_iteration": False,
                "resolved": False,
                "requires_human_intervention": True,
            }

        logger.info(f"Received report_progress request, result: {result}")

        # Build the prompt.
        prompt = build_progress_prompt(
            iteration=iteration,
            actions_taken=actions_taken,
            result=result,
            new_error=new_error,
            feedback=feedback,
            conversation_history=_get_history(),
        )

        # Call the superior AI, passing along the conversation history.
        client = get_aurai_client()
        response = await client.chat(
            user_message=prompt,
            conversation_history=_get_history()
        )

        # Record this turn in the history.
        _add_to_history({
            "type": "progress",
            "actions_taken": actions_taken,
            "result": result,
            "new_error": new_error,
            "feedback": feedback,
            "response": response,
        })

        # If the problem has been resolved, clear the conversation history.
        if response.get("resolved", False):
            history_count = len(_conversation_history)
            _conversation_history.clear()
            logger.info(f"[done] Problem resolved; cleared conversation history ({history_count} entries removed)")

        logger.info(f"report_progress finished, resolved: {response.get('resolved', False)}")
        return response
  • Input schema definition using Pydantic Field descriptors: actions_taken (required string), result (required string - success/failed/partial), new_error (optional string), feedback (optional string). These define the tool's input parameters and their types.
        actions_taken: str = Field(description="Actions performed"),
        result: str = Field(description="Execution result: success, failed, partial"),
        new_error: str | None = Field(default=None, description="New error message"),
        feedback: str | None = Field(default=None, description="Execution feedback"),
    ) -> dict[str, Any]:
  • Helper function build_progress_prompt() that constructs the prompt for reporting progress. Takes iteration number, actions_taken, result, new_error, feedback, and conversation_history as parameters, and returns a formatted prompt string with execution details and guidance request.
    def build_progress_prompt(
        iteration: int,
        actions_taken: str,
        result: str,
        new_error: str | None = None,
        feedback: str | None = None,
        conversation_history: list[dict[str, str]] | None = None,
    ) -> str:
        """
        Build the prompt for reporting progress.

        Args:
            iteration: Iteration count
            actions_taken: Actions performed
            result: Execution result (success | failed | partial)
            new_error: New error message
            feedback: Execution feedback
            conversation_history: Conversation history
        """
        # Build the conversation-history section.
        history_desc = ""
        if conversation_history:
            history_desc = "\n## Conversation History\n\n"
            for i, turn in enumerate(conversation_history[-5:], 1):
                history_desc += f"### Round {i}\n"
                if "action" in turn:
                    history_desc += f"**Action taken**: {turn['action']}\n"
                if "result" in turn:
                    history_desc += f"**Result**: {turn['result']}\n"
                history_desc += "\n"

        prompt = f"""# Progress Report

    ## Execution Details
    - **Iteration**: round {iteration + 1}
    - **Action taken**: {actions_taken}
    - **Result**: {result}
    {f'- **New error**: {new_error}' if new_error else ''}
    {f'- **Feedback**: {feedback}' if feedback else ''}
    {history_desc}

    ## Please Assess

    1. Has the problem been resolved?
    2. Should other approaches still be tried?
    3. Is human intervention needed?

    Please respond in the same JSON format as before, giving guidance for the next step.
    """
        return prompt
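Note that the helper renders at most the last five turns via `conversation_history[-5:]`. A standalone sketch of that windowing behavior (separate from the helper itself, with invented turn data):

```python
# Standalone illustration of the five-turn window used when rendering history.
history = [{"action": f"step {i}", "result": "partial"} for i in range(1, 9)]

lines = [
    f"Round {i}: {turn['action']} -> {turn['result']}"
    for i, turn in enumerate(history[-5:], 1)
]

# With 8 recorded turns, the rendered window starts at turn 4.
assert lines[0] == "Round 1: step 4 -> partial"
assert len(lines) == 5
```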
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's purpose and workflow (reporting progress and requesting guidance), though it doesn't specify technical details like response format, rate limits, or authentication requirements. However, it clearly communicates the tool's interactive nature and expected usage pattern.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It opens with a clear purpose statement, provides usage guidelines, and includes a formatted section with usage scenarios and parameters. Every sentence serves a purpose with no redundancy or wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, interactive workflow) and the presence of an output schema (which handles return values), the description is largely complete. It covers purpose, usage context, and parameters adequately. The main gap is lack of behavioral details like error handling or response structure, but the output schema mitigates this.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description lists parameters in a section but doesn't add meaningful semantic context beyond what's in the schema (e.g., explaining how 'result' influences guidance or what constitutes good 'feedback'). This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('report execution progress', 'request next-step guidance') and distinguishes it from siblings such as consult_aurai (consultation), get_status (status retrieval), and sync_context (context synchronization). It explicitly defines the tool's role in reporting results after executing the superior AI's suggestions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: after executing the superior AI's suggestions, call this tool to report results and obtain next-step guidance. It clearly defines when to use this tool (post-execution reporting) versus alternatives such as consult_aurai (consultation before execution) or get_status (status checks without guidance).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

