
Aurai Advisor (上级顾问 MCP)

by LZMW

get_status

Retrieve current conversation status including iteration count, configuration details, and AI provider information for programming problem-solving sessions.

Instructions

Get current status

Returns the current conversation state, iteration count, configuration details, and more.


Returns: conversation_history_count (number of conversation history entries), max_iterations (maximum iteration count), max_history (maximum stored history entries), provider (AI provider), model (model name)
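The output schema below is empty, so the five fields listed above are the only documentation of the payload. A sample response illustrates the shape (the values are made-up examples, not output from a live server):

```python
# Illustrative get_status response; values are examples only.
sample_status = {
    "conversation_history_count": 3,   # entries currently held in memory
    "max_iterations": 10,              # AuraiConfig.max_iterations default
    "max_history": 50,                 # ServerConfig.max_history default
    "provider": "custom",              # fixed OpenAI-compatible provider
    "model": "gpt-4o",                 # AURAI_MODEL default
}

# A caller can sanity-check the payload shape like this:
expected_keys = {
    "conversation_history_count", "max_iterations",
    "max_history", "provider", "model",
}
assert set(sample_status) == expected_keys
```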

Input Schema


No arguments

Output Schema


No arguments

Implementation Reference

  • The get_status tool handler function - an async function decorated with @mcp.tool() that returns current conversation status including history count, max iterations, max history, provider, and model information.
    @mcp.tool()
    async def get_status() -> dict[str, Any]:
        """
        Get current status.

        Returns the current conversation state, iteration count, configuration details, and more.

        ---
        **Returns**: conversation_history_count (number of conversation history entries), max_iterations (maximum iteration count), max_history (maximum stored history entries), provider (AI provider), model (model name)
        """
        return {
            "conversation_history_count": len(_conversation_history),
            "max_iterations": get_aurai_config().max_iterations,
            "max_history": server_config.max_history,
            "provider": get_aurai_config().provider,
            "model": get_aurai_config().model,
        }
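The handler above depends on module-level state (`_conversation_history`, `get_aurai_config()`, `server_config`). A minimal, self-contained sketch of the same pattern, with stub dataclasses standing in for the server's real config objects, shows how the payload is assembled:

```python
import asyncio
from dataclasses import dataclass
from typing import Any

# Stubs standing in for the server's real configuration objects.
@dataclass
class AuraiConfigStub:
    max_iterations: int = 10
    provider: str = "custom"
    model: str = "gpt-4o"

@dataclass
class ServerConfigStub:
    max_history: int = 50

_conversation_history: list[dict[str, str]] = []
_aurai_config = AuraiConfigStub()
_server_config = ServerConfigStub()

async def get_status() -> dict[str, Any]:
    """Mirror the real handler's return shape using the stub config."""
    return {
        "conversation_history_count": len(_conversation_history),
        "max_iterations": _aurai_config.max_iterations,
        "max_history": _server_config.max_history,
        "provider": _aurai_config.provider,
        "model": _aurai_config.model,
    }

status = asyncio.run(get_status())
print(status["conversation_history_count"])  # 0: no history recorded yet
```

Because the tool takes no arguments and only reads state, it is effectively a read-only status probe; the sketch preserves that property.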
  • The @mcp.tool() decorator that registers the get_status function as an MCP tool with the FastMCP framework.
    @mcp.tool()
  • AuraiConfig Pydantic model that defines the schema for AI provider configuration (max_iterations, provider, model) returned by get_status.
    class AuraiConfig(BaseModel):
        """Senior-advisor AI configuration"""

        # API provider (fixed to "custom"; uses an OpenAI-compatible API)
        provider: Literal["custom"] = Field(
            default="custom",
            description="AI service provider (fixed to a custom OpenAI-compatible API)"
        )

        # API key
        api_key: str = Field(
            default_factory=lambda: os.getenv("AURAI_API_KEY", ""),
            description="API key"
        )

        # Base API URL (optional; for proxies or custom endpoints)
        base_url: str | None = Field(
            default_factory=lambda: os.getenv("AURAI_BASE_URL"),
            description="Base API URL"
        )

        # Model name
        model: str = Field(
            default_factory=lambda: os.getenv("AURAI_MODEL", "gpt-4o"),
            description="Model name"
        )

        # Context window size (tokens); default based on GLM-4.7, overridable via env var
        context_window: int = Field(
            default_factory=lambda: int(os.getenv("AURAI_CONTEXT_WINDOW", str(DEFAULT_CONTEXT_WINDOW))),
            ge=1,
            description="Model context window size (default: 200,000, based on GLM-4.7)"
        )

        # Maximum tokens per message; default based on GLM-4.7, overridable via env var
        max_message_tokens: int = Field(
            default_factory=lambda: int(os.getenv("AURAI_MAX_MESSAGE_TOKENS", str(DEFAULT_MAX_MESSAGE_TOKENS))),
            ge=1,
            description="Maximum tokens per message (default: 150,000, tuned for GLM-4.7)"
        )

        # Maximum number of iterations
        max_iterations: int = Field(
            default=10,
            description="Maximum number of iterations"
        )

        # Sampling temperature
        temperature: float = Field(
            default=0.7,
            ge=0.0,
            le=2.0,
            description="Sampling temperature"
        )

        # Maximum generated tokens; default based on GLM-4.7, overridable via env var
        max_tokens: int = Field(
            default_factory=lambda: int(os.getenv("AURAI_MAX_TOKENS", str(DEFAULT_MAX_TOKENS))),
            ge=1,
            description="Maximum generated tokens (default: 32,000, tuned for GLM-4.7)"
        )

        @field_validator('api_key')
        @classmethod
        def validate_api_key(cls, v: str) -> str:
            """Validate the API key format."""
            if not v or not v.strip():
                raise ValueError("API key must not be empty")

            v = v.strip()

            # Basic length check (most API keys are at least 20 characters)
            if len(v) < 10:
                raise ValueError("API key must be at least 10 characters long")

            # Basic format check (no spaces or control characters)
            if re.search(r'[\s\n\r\t]', v):
                raise ValueError("API key must not contain spaces or control characters")

            return v
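The validator's rules can be exercised in isolation. The function below restates the same checks outside Pydantic as a plain-Python sketch; it is illustrative, not the server's actual code:

```python
import re

def validate_api_key(v: str) -> str:
    """Restate AuraiConfig's api_key checks for illustration:
    non-empty, stripped, at least 10 chars, no whitespace/control chars."""
    if not v or not v.strip():
        raise ValueError("API key must not be empty")
    v = v.strip()
    # Most real API keys are at least 20 characters; 10 is the hard floor.
    if len(v) < 10:
        raise ValueError("API key must be at least 10 characters long")
    # Reject embedded whitespace or control characters.
    if re.search(r'[\s\n\r\t]', v):
        raise ValueError("API key must not contain spaces or control characters")
    return v

print(validate_api_key("  sk-example-key-1234  "))  # stripped: sk-example-key-1234
try:
    validate_api_key("short")
except ValueError as e:
    print(e)  # API key must be at least 10 characters long
```

Note that stripping happens before the whitespace check, so leading/trailing whitespace is tolerated while embedded whitespace is rejected, matching the order of operations in the Pydantic validator.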
  • ServerConfig Pydantic model that defines the schema for server configuration (max_history) returned by get_status.
    class ServerConfig(BaseModel):
        """Server configuration"""

        # Server name
        name: str = "Aurai Advisor"

        # Log level
        log_level: Literal["DEBUG", "INFO", "WARNING", "ERROR"] = "INFO"

        # Maximum number of conversation history entries to keep
        max_history: int = Field(
            default_factory=lambda: int(os.getenv("AURAI_MAX_HISTORY", "50")),
            ge=1,
            le=200,
            description="Maximum number of conversation history entries to keep"
        )

        # Enable conversation history persistence
        enable_persistence: bool = Field(
            default_factory=lambda: os.getenv("AURAI_ENABLE_PERSISTENCE", "true").lower() == "true",
            description="Whether to persist conversation history to a file"
        )

        # Conversation history file path (fixed under the user's home directory)
        history_path: str = Field(
            default_factory=lambda: os.getenv(
                "AURAI_HISTORY_PATH",
                str(Path.home() / ".mcp-aurai" / "history.json")
            ),
            description="Conversation history file path"
        )
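Both models share one configuration pattern: a `default_factory` reads an environment variable, falls back to a literal default, and Pydantic enforces bounds via `ge`/`le`. A stdlib-only sketch of that pattern for `max_history` (the helper name is illustrative, not part of the server):

```python
import os

def max_history_from_env(default: int = 50, lo: int = 1, hi: int = 200) -> int:
    """Read AURAI_MAX_HISTORY the way ServerConfig's default_factory does,
    then apply the same 1..200 bounds that Pydantic enforces via ge/le."""
    value = int(os.getenv("AURAI_MAX_HISTORY", str(default)))
    if not (lo <= value <= hi):
        raise ValueError(f"AURAI_MAX_HISTORY must be between {lo} and {hi}, got {value}")
    return value

os.environ["AURAI_MAX_HISTORY"] = "120"
print(max_history_from_env())  # 120

os.environ.pop("AURAI_MAX_HISTORY")
print(max_history_from_env())  # falls back to the default of 50
```

One caveat of this pattern: a non-numeric environment value raises `ValueError` from `int()` before any bounds check runs, both here and in the real `default_factory`.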
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes what the tool returns (conversation state, iteration count, configuration) and lists specific return fields, which adds useful context about the tool's behavior. However, it doesn't mention whether this is a read-only operation, if it requires authentication, or any rate limits—important details for a status-checking tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first line states the purpose clearly, followed by details on return content. The use of a separator (---) and bullet points for return fields improves readability. However, the inclusion of both Chinese and English text slightly reduces efficiency, and some redundancy exists (e.g., stating return content in two ways).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no annotations, but has an output schema), the description is reasonably complete. It explains what the tool does and details the return values, which compensates for the lack of annotations. Since an output schema exists, the description doesn't need to fully explain return values, but it still provides a helpful overview. For a status-retrieval tool, this is adequate though not exhaustive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage (empty schema), so the baseline is 4 as per the rules for zero parameters. The description appropriately doesn't discuss parameters since none exist, and it focuses on the return values instead, which is correct given the context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: '获取当前状态' (get current status) and specifies it returns conversation state, iteration count, and configuration information. This is a specific verb+resource combination that distinguishes it from sibling tools like consult_aurai, report_progress, and sync_context, which appear to perform different functions. However, it doesn't explicitly contrast with siblings beyond implying different functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, appropriate contexts, or comparisons with sibling tools like consult_aurai or report_progress. The agent must infer usage from the purpose alone, which is insufficient for optimal tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
