ai_chat
Engage in multi-turn conversations with AI using custom prompts and message history to maintain context.
Instructions
Engage in multi-turn conversations with an AI. Supports passing a message history to maintain context, and a custom system prompt.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| message | Yes | The user message | |
| system | No | System prompt (sets the AI's role and behavior) | |
| history | No | List of prior messages, in the form [{"role":"user","content":"..."},{"role":"assistant","content":"..."}] | |
| temperature | No | Sampling temperature, 0.0~2.0 (higher is more creative) | 0.7 |
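The parameters above map directly onto an OpenAI-style messages list. A minimal sketch of that mapping, mirroring the logic in the handle_ai handler (the function name build_messages and the sample payload are illustrative, not part of the project):

```python
def build_messages(arguments: dict) -> list[dict]:
    """Assemble a messages list the way handle_ai does:
    optional system prompt first, then history, then the new user message."""
    messages = []
    if arguments.get("system"):
        messages.append({"role": "system", "content": arguments["system"]})
    messages.extend(arguments.get("history", []))
    messages.append({"role": "user", "content": arguments["message"]})
    return messages

# Hypothetical ai_chat call arguments
args = {
    "message": "And in Celsius?",
    "system": "You are a concise weather assistant.",
    "history": [
        {"role": "user", "content": "What is 72 F in Kelvin?"},
        {"role": "assistant", "content": "About 295.4 K."},
    ],
    "temperature": 0.3,
}

msgs = build_messages(args)
# msgs holds 4 entries: system prompt, two history turns, new user message
```

Because history is spliced in between the system prompt and the new message, earlier turns stay visible to the model, which is how the tool maintains context across calls.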
Implementation Reference
- src/onion_mcp_server/tools/ai.py:206-216 (handler): The handler function for ai_chat. It receives the arguments (message, system, history, temperature), builds a messages list, and delegates to llm_chat for the LLM response.
```python
async def handle_ai(name: str, arguments: dict) -> list[types.TextContent]:
    a = arguments
    if name == "ai_chat":
        messages = []
        if a.get("system"):
            messages.append({"role": "system", "content": a["system"]})
        messages.extend(a.get("history", []))
        messages.append({"role": "user", "content": a["message"]})
        reply = await llm_chat(messages, temperature=float(a.get("temperature", 0.7)))
        return [types.TextContent(type="text", text=reply)]
```
- The Tool definition (schema) for ai_chat, defining the name, description, and inputSchema with properties: message (required), system, history (array of {role, content}), and temperature.
```python
AI_TOOLS: list[types.Tool] = [
    types.Tool(
        name="ai_chat",
        description=(
            "与 AI 进行多轮对话。支持传入历史消息以保持上下文,"
            "支持自定义 system prompt。"
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "message": {
                    "type": "string",
                    "description": "用户消息",
                },
                "system": {
                    "type": "string",
                    "description": "系统提示词(设定 AI 角色和行为)",
                    "default": "",
                },
                "history": {
                    "type": "array",
                    "description": "历史消息列表,格式: [{\"role\":\"user\",\"content\":\"...\"},{\"role\":\"assistant\",\"content\":\"...\"}]",
                    "items": {
                        "type": "object",
                        "properties": {
                            "role": {"type": "string", "enum": ["user", "assistant"]},
                            "content": {"type": "string"},
                        },
                    },
                    "default": [],
                },
                "temperature": {
                    "type": "number",
                    "description": "温度 0.0~2.0(默认 0.7,越高越有创意)",
                    "default": 0.7,
                },
            },
            "required": ["message"],
        },
    ),
```
- src/onion_mcp_server/server.py:49-51 (registration): Registration of ai_chat in the handler routing table: iterates AI_TOOLS and maps each tool name (including "ai_chat") to handle_ai.
```python
_HANDLERS: dict = {}
for _t in AI_TOOLS:
    _HANDLERS[_t.name] = handle_ai
```
- The llm_chat helper that performs the actual multi-turn LLM call. It loads the configuration, validates the API key, creates the OpenAI client, and handles errors.
```python
async def llm_chat(
    messages: list,
    temperature: float = 0.7,
) -> str:
    """多轮调用"""
    cfg = _get_config()
    if not cfg["api_key"]:
        return _no_key_message()
    try:
        from openai import AsyncOpenAI
    except ImportError:
        return (
            "❌ 需要安装 openai 依赖:\n\n"
            "```bash\n"
            "pip install openai\n"
            "# 或\n"
            "uvx onion-mcp-server  # 自动安装\n"
            "```"
        )
    client = AsyncOpenAI(
        api_key=cfg["api_key"],
        base_url=cfg["base_url"],
    )
    try:
        resp = await client.chat.completions.create(
            model=cfg["model"],
            messages=messages,
            temperature=temperature,
            max_tokens=cfg["max_tokens"],
        )
        return resp.choices[0].message.content or ""
    except Exception as e:
        err = str(e)
        # 友好错误提示
        if "401" in err or "authentication" in err.lower():
            return f"❌ API Key 无效或已过期\n\n当前配置:\n  base_url: {cfg['base_url']}\n  model: {cfg['model']}\n\n错误: {e}"
        if "404" in err or "model" in err.lower():
            return f"❌ 模型不存在: {cfg['model']}\n\n请设置 ONION_MCP_MODEL 为正确的模型名\n\n错误: {e}"
        if "429" in err:
            return f"❌ API 请求频率超限,请稍后重试\n\n错误: {e}"
        return f"❌ LLM 调用失败\n\n错误: {e}"
```
- src/onion_mcp_server/tools/__init__.py:1-1 (registration): Re-exports AI_TOOLS and handle_ai from the ai module so they can be imported by server.py.
```python
from onion_mcp_server.tools.ai import AI_TOOLS, handle_ai
```
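The error handling in llm_chat keys off substrings of the exception text to pick a friendly message. A simplified, standalone sketch of that classification logic (the helper name classify_llm_error is ours, not the project's):

```python
def classify_llm_error(err: str) -> str:
    """Map an exception message to an error category, mirroring the
    substring checks in llm_chat (helper name is illustrative)."""
    if "401" in err or "authentication" in err.lower():
        return "auth"        # invalid or expired API key
    if "404" in err or "model" in err.lower():
        return "model"       # unknown model name
    if "429" in err:
        return "rate_limit"  # too many requests
    return "unknown"
```

Note that the checks are ordered: an authentication error is matched before the "model" substring check, so a message like "authentication failed for model X" is still reported as a key problem rather than a missing model.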