# ai_classify
Classifies text by sentiment, topic, intent, or custom labels, with optional reasoning for the classification.
## Instructions
Classifies text, supporting sentiment analysis, topic classification, intent recognition, or custom classification labels.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | The text to classify | |
| task | No | Classification task: sentiment / topic / intent / custom | sentiment |
| labels | No | Custom classification labels (required when task=custom), e.g. ["complaint","inquiry","suggestion"] | |
| explain | No | Whether to output the reasoning for the classification | true |
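For illustration, argument objects satisfying this schema might look like the following. This is a hypothetical client-side sketch; `validate_args` is not part of the tool, it merely mirrors the schema's `required` field and the task=custom rule enforced by the handler.

```python
# Hypothetical argument objects for ai_classify (illustrative only).
sentiment_args = {"text": "The package arrived on time and works great."}

custom_args = {
    "text": "My order arrived damaged and I want a refund.",
    "task": "custom",
    "labels": ["complaint", "inquiry", "suggestion"],
    "explain": False,
}

def validate_args(a: dict):
    """Mirror the schema rules: text is required; task=custom needs labels."""
    if "text" not in a:
        return "text is required"
    task = a.get("task", "sentiment")
    if task not in {"sentiment", "topic", "intent", "custom"}:
        return "unknown task"
    if task == "custom" and not a.get("labels"):
        return "labels required when task=custom"
    return None

print(validate_args(sentiment_args))  # None: task defaults to sentiment
print(validate_args(custom_args))     # None: labels provided
print(validate_args({"text": "hi", "task": "custom"}))  # labels required when task=custom
```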
## Implementation Reference
- `src/onion_mcp_server/tools/ai.py:301-338` (handler): Handler function for the `ai_classify` tool. Builds a prompt based on the task (sentiment/topic/intent/custom) and calls `llm_call` to classify the input text. For custom tasks it requires `labels` and returns an error if they are missing.
```python
elif name == "ai_classify":
    task = a.get("task", "sentiment")
    explain = bool(a.get("explain", True))
    explain_str = "并简要说明理由" if explain else ",只输出分类结果,不需要解释"
    if task == "sentiment":
        prompt = (
            f"请对以下文本进行情感分析,判断情感倾向(正面/负面/中性){explain_str}。\n\n"
            f"{a['text']}"
        )
    elif task == "topic":
        prompt = (
            f"请判断以下文本的主题类别(如:科技、政治、经济、体育、娱乐、教育、健康等){explain_str}。\n\n"
            f"{a['text']}"
        )
    elif task == "intent":
        prompt = (
            f"请识别以下文本的用户意图(如:查询、投诉、购买、咨询、反馈等){explain_str}。\n\n"
            f"{a['text']}"
        )
    elif task == "custom":
        labels = a.get("labels", [])
        if not labels:
            return [types.TextContent(type="text", text="❌ task=custom 时必须提供 labels 参数")]
        labels_str = "、".join(f'"{line}"' for line in labels)
        prompt = (
            f"请将以下文本分类到这些类别之一:{labels_str}。\n"
            f"输出格式:分类结果{explain_str}。\n\n"
            f"{a['text']}"
        )
    else:
        raise ValueError(f"未知分类任务: {task}")
    reply = await llm_call(prompt)
    return [types.TextContent(type="text", text=reply)]

raise ValueError(f"未知 ai 工具: {name}")
```
- Schema definition for the `ai_classify` tool. Defines the input parameters: `text` (required), `task` (sentiment/topic/intent/custom), `labels` (for the custom task), and `explain` (boolean).
```python
types.Tool(
    name="ai_classify",
    description="对文本进行分类,支持情感分析、主题分类、意图识别,或自定义分类标签。",
    inputSchema={
        "type": "object",
        "properties": {
            "text": {
                "type": "string",
                "description": "要分类的文本",
            },
            "task": {
                "type": "string",
                "description": "分类任务: sentiment(情感)/ topic(主题)/ intent(意图)/ custom(自定义)",
                "enum": ["sentiment", "topic", "intent", "custom"],
                "default": "sentiment",
            },
            "labels": {
                "type": "array",
                "items": {"type": "string"},
                "description": "自定义分类标签(task=custom 时必填),如 [\"投诉\",\"咨询\",\"建议\"]",
                "default": [],
            },
            "explain": {
                "type": "boolean",
                "description": "是否输出分类理由(默认 true)",
                "default": True,
            },
        },
        "required": ["text"],
    },
),
```
- `src/onion_mcp_server/server.py:50-61` (registration): `ai_classify` is registered in the `_HANDLERS` routing table by iterating over `AI_TOOLS` and mapping each tool name to the `handle_ai` handler function.
```python
for _t in AI_TOOLS:
    _HANDLERS[_t.name] = handle_ai
for _t in CODE_TOOLS:
    _HANDLERS[_t.name] = handle_code
for _t in TEXT_TOOLS:
    _HANDLERS[_t.name] = handle_text
for _t in DATA_TOOLS:
    _HANDLERS[_t.name] = handle_data
for _t in WEB_TOOLS:
    _HANDLERS[_t.name] = handle_web
for _t in SYSTEM_TOOLS:
    _HANDLERS[_t.name] = handle_system
```
- Helper: the `llm_call` function, used by the `ai_classify` handler to send the classification prompt to the LLM and return the response.
```python
async def llm_call(
    prompt: str,
    system: Optional[str] = None,
    temperature: float = 0.7,
) -> str:
    """单轮调用"""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return await llm_chat(messages, temperature=temperature)
```
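The message assembly performed by `llm_call` can be exercised as a self-contained sketch. Here `llm_chat` is a stub that only reports what it received; the real function performs the model request.

```python
import asyncio
from typing import Optional

async def llm_chat(messages, temperature: float = 0.7) -> str:
    # Stub standing in for the real model call; echoes what it received.
    return f"{len(messages)} message(s), temperature={temperature}"

async def llm_call(
    prompt: str,
    system: Optional[str] = None,
    temperature: float = 0.7,
) -> str:
    """Single-turn call: optional system message, then the user prompt."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return await llm_chat(messages, temperature=temperature)

print(asyncio.run(llm_call("Classify: great product!")))
# 1 message(s), temperature=0.7
print(asyncio.run(llm_call("Classify: great product!", system="You are a classifier")))
# 2 message(s), temperature=0.7
```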