
Aurai Advisor (Senior Advisor MCP)

by LZMW

consult_aurai

Get expert AI guidance for programming problems through multi-turn dialogue. Submit error details and code context to receive analysis and step-by-step solutions.

Instructions

Request guidance from the senior AI (supports the interactive alignment mechanism and multi-turn dialogue)

This is the core tool: when the local AI runs into a programming problem, call it to get guidance from the senior AI.


🔗 Related Tools

  • sync_context: use when you need to upload documents or code

    • 📄 Upload articles and documentation (.md/.txt)

    • 💻 Upload code files (avoids content truncation) ⭐ Important

    • Copy code files such as .py/.js/.json to .txt before uploading

  • report_progress: after carrying out the senior AI's suggestions, use this tool to report progress and get the next instructions

  • get_status: check the current conversation state, iteration count, and configuration

💡 Important: avoid truncated content

If code_snippet or context is too long, upload the file with sync_context instead:

# Step 1: copy the code file to .txt
shutil.copy('script.py', 'script.txt')

# Step 2: upload the file
sync_context(operation='incremental', files=['script.txt'])

# Step 3: tell the senior advisor the file has been uploaded
consult_aurai(
    error_message='Please review the uploaded script.txt file'
)

Benefits

  • ✅ Avoids code being truncated in the context or answers_to_questions fields

  • ✅ Uses the file-reading mechanism to pass content intact

  • ✅ Supports code files of any size


[Important] When should a new conversation start?

The system detects this automatically, but you can also control it manually:

  • Automatic clearing: when the previous conversation returned resolved=true, the system clears the history automatically

  • Manual clearing: if you are about to discuss a completely different problem, set is_new_question=true

When to set is_new_question=true

  • [OK] Switching to a completely unrelated project or file

  • [OK] The previous problem is solved and you now face an entirely new one

  • [OK] The context has become muddled and you want a fresh start

  • [X] Never during multi-turn dialogue about the same problem

Interaction Protocol

1. Multi-round alignment

  • Don't expect success on the first try: the senior advisor may decide information is insufficient and return follow-up questions

  • Read every question in questions_to_answer carefully

  • Gather information proactively (read files, check logs, run commands)

  • Call this tool again with the answers in the answers_to_questions parameter

2. First call

Must provide:

  • problem_type: the problem category (runtime_error/syntax_error/design_issue/other)

  • error_message: a clear description of the problem or error

  • context: relevant context (code snippets, environment details, attempted fixes)

  • code_snippet: the relevant code (if any)

3. Follow-up calls (when status="need_info" is returned)

Must provide:

  • answers_to_questions: detailed answers to the senior advisor's follow-up questions

  • Keep the other parameters unchanged (unless there is new information)
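The first-call / follow-up protocol amounts to a simple loop. Below is a minimal, hypothetical sketch: `consult` is a stand-in stub for the `consult_aurai` tool call, its responses are invented for illustration, and the gathered answers are placeholders for real evidence collection.

```python
# Hypothetical sketch of the multi-round alignment loop; `consult` is a
# stand-in stub for the consult_aurai tool call, with invented responses.

def make_stub_advisor():
    calls = {"n": 0}

    def consult(**params):
        calls["n"] += 1
        # First call without answers: the advisor asks for more information.
        if calls["n"] == 1 and not params.get("answers_to_questions"):
            return {"status": "need_info",
                    "questions_to_answer": ["Which Python version is used?"]}
        # Follow-up call with answers: the advisor provides guidance.
        return {"status": "success", "guidance": "Pin the dependency.",
                "resolved": True}

    return consult

def run_consultation(consult, problem_type, error_message, max_rounds=5):
    params = {"problem_type": problem_type, "error_message": error_message}
    resp = {}
    for _ in range(max_rounds):
        resp = consult(**params)
        if resp["status"] != "need_info":
            break
        # In a real session: read files, check logs, run commands, then
        # answer every question in questions_to_answer honestly.
        answers = "; ".join(f"{q} -> (evidence gathered here)"
                            for q in resp["questions_to_answer"])
        params["answers_to_questions"] = answers
    return resp

result = run_consultation(make_stub_advisor(),
                          "runtime_error",
                          "ImportError: No module named 'x'")
```

With this stub, the first round returns need_info and the second returns a resolved success; note that the other parameters stay unchanged between rounds, as the protocol requires.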

4. Honesty principle

  • No fabrication: if you don't know an answer, say honestly that no relevant information was found

  • No guessing: never assume a solution without evidence

  • Provide concrete evidence (file paths, log contents, error stack traces)

Response Formats

When information is insufficient (status="need_info")

{
  "status": "need_info",
  "questions_to_answer": ["Question 1", "Question 2"],
  "instruction": "Please gather the information and call again"
}

When guidance is provided (status="success")

{
  "status": "success",
  "analysis": "Problem analysis",
  "guidance": "Suggested fix",
  "action_items": ["Step 1", "Step 2"],
  "resolved": false  // whether the problem is fully resolved
}

After the problem is resolved

When resolved=true, the conversation history is cleared automatically and the next query starts a new conversation.

[Automatic] New-conversation detection

The system detects new problems automatically:

  • If the previous conversation ended with resolved=true, the next consult_aurai call clears the history automatically

  • This guarantees each independent problem a clean context, free of interference

[Important] Explicitly marking a new problem (optional parameter)

To force a new conversation, set is_new_question=true

  • Effect: immediately clears all previous conversation history

  • Consequence: the senior AI will no longer see any earlier context

  • When to use

    • The previous conversation is completely unrelated

    • You want to start over with an entirely new problem

    • The context has become muddled and you want a reset

Example

# First consultation (problem A)
consult_aurai(problem_type="runtime_error", error_message="...")

# Continue discussing problem A...
consult_aurai(answers_to_questions="...")

# Switch to problem B (marked as new, clears history)
consult_aurai(
    problem_type="design_issue",
    error_message="...",
    is_new_question=True  # [Note] clears all prior dialogue about problem A
)

Input Schema

Name                 | Required | Description
problem_type         | Yes      | Problem type: runtime_error, syntax_error, design_issue, other
error_message        | Yes      | Error description
code_snippet         | No       | Relevant code snippet
context              | No       | Context information (accepts a JSON string or a dict; parsed automatically)
attempts_made        | No       | Solutions already attempted
answers_to_questions | No       | Answers to the senior advisor's follow-up questions (multi-turn dialogue only)
is_new_question      | No       | [Important] Whether this is a new problem (a new problem clears all previous conversation history to ensure a clean context)

Output Schema

No arguments

Implementation Reference

  • Main handler function for consult_aurai tool - async function that processes consultation requests, manages conversation history, builds prompts, calls the AI client, and returns guidance or follow-up questions based on the AI's response
    async def consult_aurai(
        problem_type: str = Field(
            description="Problem type: runtime_error, syntax_error, design_issue, other"
        ),
        error_message: str = Field(description="Error description"),
        code_snippet: str | None = Field(default=None, description="Relevant code snippet"),
        context: Any = Field(default=None, description="Context information (accepts a JSON string or a dict; parsed automatically)"),
        attempts_made: str | None = Field(default=None, description="Solutions already attempted"),
        answers_to_questions: str | None = Field(
            default=None,
            description="Answers to the senior advisor's follow-up questions (multi-turn dialogue only)"
        ),
        is_new_question: bool = Field(
            default=False,
            description="[Important] Whether this is a new problem (a new problem clears all previous conversation history to ensure a clean context)"
        ),
    ) -> dict[str, Any]:
        """
        请求上级AI的指导(支持交互对齐机制与多轮对话)
    
        这是核心工具,当本地AI遇到编程问题时调用此工具获取上级AI的指导建议。
    
        ---
    
        **🔗 相关工具**
    
        - **sync_context**:需要上传文档或代码时使用
          - 📄 上传文章、说明文档(.md/.txt)
          - 💻 **上传代码文件(避免内容被截断)** ⭐ 重要
          - 将 `.py/.js/.json` 等代码文件复制为 `.txt` 后上传
    
        - **report_progress**:执行上级 AI 建议后,使用此工具报告进度并获取下一步指导
    
        - **get_status**:查看当前对话状态、迭代次数、配置信息
    
        **💡 重要提示:避免内容被截断**
    
        如果 `code_snippet` 或 `context` 内容过长,**请使用 `sync_context` 上传文件**:
    
        ```python
        # 步骤 1:将代码文件复制为 .txt
        shutil.copy('script.py', 'script.txt')
    
        # 步骤 2:上传文件
        sync_context(operation='incremental', files=['script.txt'])
    
        # 步骤 3:告诉上级顾问文件已上传
        consult_aurai(
            error_message='请审查已上传的 script.txt 文件'
        )
        ```
    
        **优势**:
        - ✅ 避免代码在 `context` 或 `answers_to_questions` 字段中被截断
        - ✅ 利用文件读取机制,完整传递内容
        - ✅ 支持任意大小的代码文件
    
        ---
    
        ## [重要] 何时开始新对话?
    
        **系统会自动检测**,但你也可以手动控制:
    
        - **自动清空**:当上一次对话返回 `resolved=true` 时,系统会自动清空历史
        - **手动清空**:如果你要讨论一个完全不同的新问题,设置 `is_new_question=true`
    
        **何时设置 `is_new_question=true`?**
        - [OK] 切换到完全不相关的项目/文件
        - [OK] 之前的问题已解决,现在遇到全新的问题
        - [OK] 发现上下文混乱,想重新开始
        - [X] 不要在同一个问题的多轮对话中使用
    
        ## 交互协议
    
        ### 1. 多轮对齐机制
        - **不要期待一次成功**:上级顾问可能会认为信息不足,返回反问问题
        - 仔细阅读 `questions_to_answer` 中的每个问题
        - 主动搜集信息(读取文件、检查日志、运行命令)
        - **再次调用** 此工具,将答案填入 `answers_to_questions` 参数
    
        ### 2. 首次调用
        必须提供:
        - `problem_type`:问题类型(runtime_error/syntax_error/design_issue/other)
        - `error_message`:清晰描述问题或错误
        - `context`:相关上下文(代码片段、环境信息、已尝试的方案)
        - `code_snippet`:相关代码(如果有)
    
        ### 3. 后续调用(当返回 status="need_info" 时)
        必须提供:
        - `answers_to_questions`:对上级顾问反问的详细回答
        - 保持其他参数不变(除非有新信息)
    
        ### 4. 诚实原则
        - **禁止瞎编**:如果不知道答案,诚实说明"未找到相关信息"
        - **禁止臆测**:不要在没有证据的情况下假设解决方案
        - 提供具体证据(文件路径、日志内容、错误堆栈)
    
        ## 响应格式
    
        ### 信息不足时 (status="need_info")
        ```json
        {
          "status": "need_info",
          "questions_to_answer": ["问题1", "问题2"],
          "instruction": "请搜集信息并再次调用"
        }
        ```
    
        ### 提供指导时 (status="success")
        ```json
        {
          "status": "success",
          "analysis": "问题分析",
          "guidance": "解决建议",
          "action_items": ["步骤1", "步骤2"],
          "resolved": false  // 是否已完全解决
        }
        ```
    
        ### 问题解决后
        当 `resolved=true` 时,对话历史会自动清空,下次查询将开始新对话。
    
        ### [自动] 新对话检测
        系统会自动检测新问题:
        - 如果上一次对话的 `resolved=true`,下次调用 `consult_aurai` 时会自动清空历史
        - 保证每个独立问题都有干净的上下文,避免干扰
    
        ### [重要] 明确标注新问题(可选参数)
        如果你想强制开始一个新对话,可以设置 `is_new_question=true`:
        - **效果**:立即清空所有之前的对话历史
        - **后果**:上级AI将无法看到之前的任何上下文
        - **使用场景**:
          - 之前的对话已完全无关
          - 想重新开始讨论一个全新的问题
          - 发现上下文混乱,想重置
    
        **示例**:
        ```python
        # 第一次咨询(问题A)
        consult_aurai(problem_type="runtime_error", error_message="...")
    
        # 继续讨论问题A...
        consult_aurai(answers_to_questions="...")
    
        # 切换到问题B(标注为新问题,清空历史)
        consult_aurai(
            problem_type="design_issue",
            error_message="...",
            is_new_question=True  # [注意] 会清空之前关于问题A的所有对话
        )
        ```
        """
        config = get_aurai_config()

        logger.info(f"Received consult_aurai request, problem type: {problem_type}, new question: {is_new_question}")

        # [New question] Two triggers clear the history:
        # 1. explicitly flagged with is_new_question=true
        # 2. auto-detected (the previous conversation was resolved)
        should_clear_history = False
        clear_reason = ""

        if is_new_question:
            # Explicitly flagged as a new question
            should_clear_history = True
            clear_reason = "explicitly flagged as a new question by the local AI"
        elif _conversation_history:
            # Auto-detect: check whether the previous conversation was resolved
            last_entry = _conversation_history[-1]
            last_response = last_entry.get("response", {})

            if last_response.get("resolved", False):
                should_clear_history = True
                clear_reason = "previous conversation resolved (auto-detected)"

        # Clear the history if needed
        if should_clear_history:
            history_count = len(_conversation_history)
            _conversation_history.clear()
            logger.info(f"[New question] Cleared conversation history ({history_count} entries removed)")
            logger.info(f"   Reason: {clear_reason}")
            logger.info(f"   New question: {problem_type} - {error_message[:100]}...")

        # Parse the context parameter (accepts a JSON string or a dict)
        parsed_context: dict[str, Any] = {}
        if context:
            if isinstance(context, str):
                try:
                    parsed_context = json.loads(context)
                    logger.debug("Parsed JSON-formatted context")
                except json.JSONDecodeError as e:
                    logger.warning(f"Failed to parse context JSON: {e}, using an empty dict")
                    parsed_context = {}
            elif isinstance(context, dict):
                parsed_context = context

        # Build the prompt (include answers to follow-up questions, if any)
        current_context = parsed_context or {}
        if answers_to_questions:
            current_context["answers_to_questions"] = answers_to_questions

        prompt = build_consult_prompt(
            problem_type=problem_type,
            error_message=error_message,
            code_snippet=code_snippet,
            context=current_context,
            attempts_made=attempts_made,
            iteration=len(_conversation_history),
            conversation_history=_get_history(),
        )

        # Call the senior AI, passing the conversation history
        client = get_aurai_client()
        response = await client.chat(
            user_message=prompt,
            conversation_history=_get_history()
        )

        # Record the turn in the history
        _add_to_history({
            "type": "consult",
            "problem_type": problem_type,
            "error_message": error_message,
            "response": response,
            "had_answers": answers_to_questions is not None,
        })

        # Return a different shape depending on the senior advisor's response status
        if response.get("status") == "aligning":
            # Mode A: information is insufficient, more is needed
            logger.info(f"Senior advisor requested more information, question count: {len(response.get('questions', []))}")
            return {
                "status": "need_info",
                "message": "[Note] The senior advisor needs more information. Please answer the following questions:",
                "questions_to_answer": response.get("questions", []),
                "instruction": "Gather the information, call consult_aurai again, and put the answers in the 'answers_to_questions' field.",
                # ⭐ Related-tool hint
                "related_tools_hint": {
                    "sync_context": {
                        "description": "Use when you need to upload documents (.md/.txt) to supply more context",
                        "example": "sync_context(operation='full_sync', files=['path/to/doc.md'])"
                    }
                }
            }
        else:
            # Mode B: information is sufficient, provide guidance
            logger.info(f"Senior advisor provided guidance, resolved: {response.get('resolved', False)}")

            # If the problem is resolved, clear the conversation history
            if response.get("resolved", False):
                history_count = len(_conversation_history)
                _conversation_history.clear()
                logger.info(f"[Done] Problem resolved, conversation history cleared ({history_count} entries removed)")

            return {
                "status": "success",
                "analysis": response.get("analysis"),
                "guidance": response.get("guidance"),
                "action_items": response.get("action_items", []),
                "code_changes": response.get("code_changes", []),
                "verification": response.get("verification"),
                "needs_another_iteration": response.get("needs_another_iteration", False),
                "resolved": response.get("resolved", False),
                "requires_human_intervention": response.get("requires_human_intervention", False),
                "hint": "[Note] To consult about a new problem, set is_new_question=true on the next call. This clears all previous conversation history (including earlier problems and the senior AI's guidance), while the new question itself is processed normally and kept in the fresh conversation",
            }
  • Tool registration using @mcp.tool() decorator from FastMCP framework - registers consult_aurai as an MCP tool
    @mcp.tool()
  • Input schema definition using Field validators - defines parameters (problem_type, error_message, code_snippet, context, attempts_made, answers_to_questions, is_new_question) with descriptions and validation
        problem_type: str = Field(
            description="Problem type: runtime_error, syntax_error, design_issue, other"
        ),
        error_message: str = Field(description="Error description"),
        code_snippet: str | None = Field(default=None, description="Relevant code snippet"),
        context: Any = Field(default=None, description="Context information (accepts a JSON string or a dict; parsed automatically)"),
        attempts_made: str | None = Field(default=None, description="Solutions already attempted"),
        answers_to_questions: str | None = Field(
            default=None,
            description="Answers to the senior advisor's follow-up questions (multi-turn dialogue only)"
        ),
        is_new_question: bool = Field(
            default=False,
            description="[Important] Whether this is a new problem (a new problem clears all previous conversation history to ensure a clean context)"
        ),
    ) -> dict[str, Any]:
  • build_consult_prompt helper function - constructs the prompt sent to the upper-level AI advisor, including problem information, context, code snippets, conversation history, and response format instructions
    def build_consult_prompt(
        problem_type: str,
        error_message: str,
        code_snippet: str | None = None,
        context: dict[str, Any] | None = None,
        attempts_made: str | None = None,
        iteration: int = 0,
        conversation_history: list[dict[str, str]] | None = None,
    ) -> str:
        """
        Build the prompt that requests guidance from the senior AI

        Args:
            problem_type: problem type
            error_message: error description
            code_snippet: relevant code snippet
            context: context information
            attempts_made: solutions already attempted
            iteration: current iteration count
            conversation_history: conversation history
        """
        context = context or {}

        # Build the context description
        context_desc = []
        if file_path := context.get("file_path"):
            context_desc.append(f"- File path: {file_path}")
        if line_number := context.get("line_number"):
            context_desc.append(f"- Line number: {line_number}")
        if terminal_output := context.get("terminal_output"):
            context_desc.append(f"- Terminal output:\n```\n{terminal_output}\n```")

        # Build the conversation history
        history_desc = ""
        if conversation_history:
            history_desc = "\n## Conversation history\n\n"
            for i, turn in enumerate(conversation_history[-5:], 1):  # keep only the last 5 turns
                history_desc += f"### Turn {i}\n"
                if "action" in turn:
                    history_desc += f"**Action taken**: {turn['action']}\n"
                if "result" in turn:
                    history_desc += f"**Result**: {turn['result']}\n"
                history_desc += "\n"

        prompt = f"""# You are the senior AI advisor

    ## Role
    You are an experienced technical advisor guiding a "local AI assistant" through a programming problem.

    ## Task
    Analyze the problem below and provide clear, actionable guidance.

    ## Problem information
    - **Problem type**: {problem_type}
    - **Error description**: {error_message}
    - **Current iteration**: round {iteration + 1}

    ## Context
    {chr(10).join(context_desc) if context_desc else "None"}

    ## Code snippet
    {f"```{context.get('language', 'python')}\n{code_snippet}\n```" if code_snippet else "None"}

    ## Attempted solutions
    {attempts_made if attempts_made else "None"}
    {history_desc}

    ## Your response format

    **Important**: first assess how complete the information is, then respond in the matching mode.

    **When information is insufficient (missing concrete errors, code, or context)**:
    ```json
    {{
      "status": "aligning",
      "questions": ["Question to answer 1", "Question to answer 2"],
      "analysis": null,
      "guidance": null
    }}
    ```

    **When information is sufficient**:
    ```json
    {{
      "status": "guiding",
      "questions": [],
      "analysis": "Problem analysis - a concise analysis of the root cause",
      "guidance": "Guidance - a written description of the concrete suggestions",
      "action_items": ["Step 1", "Step 2", "..."],
      "code_changes": [
        {{
          "file": "file path",
          "line": line number,
          "old": "original code",
          "new": "new code"
        }}
      ],
      "verification": "how to verify",
      "needs_another_iteration": false,
      "resolved": false,
      "requires_human_intervention": false
    }}
    ```

    ## Field reference

    - **status**: must be "aligning" (insufficient information) or "guiding" (sufficient information)
    - **questions**: list of follow-up questions (aligning mode only)
    - **analysis**: analysis of the root cause
    - **guidance**: concrete, actionable guidance
    - **action_items**: array of concrete execution steps
    - **code_changes**: array (optional); if code must change, list the exact edits
    - **verification**: how to verify the fix (new field)
    - **needs_another_iteration**: boolean, whether another iteration is needed
    - **resolved**: boolean, whether the problem is resolved
    - **requires_human_intervention**: boolean, whether a human must step in

    ## Key principles

    1. **Be concrete and actionable** - avoid vague advice
    2. **Minimal change** - prefer the smallest change that solves the problem
    3. **Step-by-step guidance** - break complex problems into steps
    4. **Admit limits** - if a problem cannot be solved, set requires_human_intervention=true promptly
    5. **Avoid infinite loops** - if you have suggested the same direction several times in a row, consider human intervention

    Now, analyze the problem above and give your guidance.
    """
        return prompt
  • AuraiClient.chat async method - handles communication with the OpenAI-compatible API, sends messages with conversation history, parses JSON responses, and handles errors
    async def chat(
        self,
        user_message: str,
        system_prompt: str | None = None,
        response_format: Literal["text", "json_object"] = "json_object",
        conversation_history: list[dict] | None = None,
    ) -> dict:
        """
        Send a chat request

        Args:
            user_message: user message
            system_prompt: system prompt
            response_format: response format
            conversation_history: conversation history (for multi-turn dialogue)

        Returns:
            the parsed JSON response
        """
        from .prompts import SYSTEM_PROMPT

        system_prompt = system_prompt or SYSTEM_PROMPT

        messages = []
        if system_prompt:
            messages.append({"role": "system", "content": system_prompt})

        # Append the conversation history
        history_messages = self._build_messages_from_history(conversation_history)
        messages.extend(history_messages)

        # Append the current user message
        messages.append({"role": "user", "content": user_message})

        logger.info(f"Sending request to {self.config.base_url}, message count: {len(messages)}")

        try:
            response = self._client.chat.completions.create(
                model=self.config.model,
                messages=messages,
                temperature=self.config.temperature,
                max_tokens=self.config.max_tokens,
            )

            content = response.choices[0].message.content
            logger.info(f"Received response, length: {len(content)}")

            # Try to parse the JSON
            try:
                # Strip any surrounding markdown code-fence markers
                content_clean = content.strip()
                if content_clean.startswith("```json"):
                    content_clean = content_clean[7:]
                if content_clean.startswith("```"):
                    content_clean = content_clean[3:]
                if content_clean.endswith("```"):
                    content_clean = content_clean[:-3]
                content_clean = content_clean.strip()

                result = json.loads(content_clean)
                logger.info("Successfully parsed JSON response")
                return result
            except json.JSONDecodeError as e:
                logger.warning(f"JSON parsing failed: {e}, returning raw text")
                return {
                    "analysis": "Parsing failed",
                    "guidance": content,
                    "action_items": [],
                    "needs_another_iteration": False,
                    "resolved": False,
                    "requires_human_intervention": True,
                }

        except Exception as e:
            logger.error(f"API request failed: {e}")
            return {
                "analysis": f"Request failed: {str(e)}",
                "guidance": "Check the API key, base URL, and network connection",
                "action_items": [],
                "needs_another_iteration": False,
                "resolved": False,
                "requires_human_intervention": True,
            }
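The fence-stripping fallback in chat can be exercised on its own. The sketch below re-implements that cleanup as a standalone function for illustration; the function name is an assumption, not part of the server's API.

```python
import json

def strip_markdown_fences(content: str) -> str:
    """Remove a surrounding ```json ... ``` (or bare ```) fence, mirroring
    the cleanup AuraiClient.chat performs before json.loads."""
    cleaned = content.strip()
    if cleaned.startswith("```json"):
        cleaned = cleaned[7:]
    if cleaned.startswith("```"):
        cleaned = cleaned[3:]
    if cleaned.endswith("```"):
        cleaned = cleaned[:-3]
    return cleaned.strip()

# A model response wrapped in a markdown fence, as LLMs often emit:
raw = '```json\n{"status": "guiding", "resolved": false}\n```'
parsed = json.loads(strip_markdown_fences(raw))
```

This makes the model's fenced JSON output parseable even when the API was asked for a plain JSON object.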
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure and excels. It describes the multi-round interaction protocol (status='need_info' triggers follow-up calls), honest principle requirements (no fabrication), automatic history clearing when resolved=true, consequences of is_new_question (clears all prior context), and response formats. It adds rich context beyond what the input schema provides.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is comprehensive but overly long and not front-loaded. While it contains valuable information, it includes extensive formatting (markdown, code blocks, emojis) and repetitive sections (e.g., multiple warnings about truncation, redundant explanations of is_new_question). Some content could be condensed without losing clarity, making it less efficient than ideal.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multi-round interaction, 7 parameters), no annotations, and the presence of an output schema, the description is exceptionally complete. It covers purpose, usage, behavioral protocols, parameter guidance, sibling tool relationships, and response handling. The output schema existence means return values needn't be explained, and the description fully compensates for the lack of annotations with detailed operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the baseline is 3. The description adds significant value by explaining parameter usage in context: it specifies which parameters are required for first calls (problem_type, error_message, context, code_snippet) vs. follow-up calls (answers_to_questions), provides examples for code_snippet/context handling with sync_context, and clarifies the impact of is_new_question. However, it doesn't add deep semantic nuance beyond the schema's descriptions for all parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's purpose: '请求上级AI的指导' (request guidance from a higher-level AI) and '当本地AI遇到编程问题时调用此工具获取上级AI的指导建议' (call this tool when the local AI encounters programming problems to get guidance from a higher-level AI). It clearly distinguishes from siblings by explaining this is the '核心工具' (core tool) for obtaining AI guidance, while sibling tools handle context synchronization, progress reporting, and status checking.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides extensive usage guidelines, including when to use this tool ('当本地AI遇到编程问题时'), when to use sibling tools instead (e.g., use sync_context for uploading files to avoid truncation, report_progress after executing suggestions), and explicit alternatives. It also details when to set parameters like is_new_question and provides scenarios for manual vs. automatic context clearing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
