
onion-mcp-server

Official
by onion-ai

ai_classify

Classifies text by sentiment, topic, intent, or custom labels, with optional reasoning for the classification.

Instructions

Classifies text, with support for sentiment analysis, topic classification, intent recognition, or custom classification labels.

Input Schema

| Name    | Required | Description                                                                  | Default   |
|---------|----------|------------------------------------------------------------------------------|-----------|
| text    | Yes      | The text to classify                                                         | —         |
| task    | No       | Classification task: sentiment / topic / intent / custom                     | sentiment |
| labels  | No       | Custom classification labels (required when task=custom), e.g. ["投诉","咨询","建议"] | []        |
| explain | No       | Whether to include the reasoning for the classification                      | true      |
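Based on the schema above, a hypothetical set of call arguments for a custom-label classification might look like this (the text and labels are illustrative; the surrounding tool-call envelope depends on the MCP client):

```python
# Example arguments for an ai_classify call with task=custom.
# Only "text" is required; "labels" matters only when task=custom.
arguments = {
    "text": "快递三天没更新物流了,什么情况?",
    "task": "custom",
    "labels": ["投诉", "咨询", "建议"],
    "explain": True,
}

# Sanity checks mirroring the schema's constraints
assert "text" in arguments
assert arguments["task"] != "custom" or arguments["labels"]
```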

Implementation Reference

  • Handler function for ai_classify tool. Builds a prompt based on the task (sentiment/topic/intent/custom) and calls llm_call to classify the input text. For custom tasks, it requires labels and returns an error if missing.
    elif name == "ai_classify":
        task    = a.get("task", "sentiment")
        explain = bool(a.get("explain", True))
        # Prompt suffix: ask for brief reasoning, or for the bare label only
        explain_str = "并简要说明理由" if explain else ",只输出分类结果,不需要解释"
    
        if task == "sentiment":
            prompt = (
                f"请对以下文本进行情感分析,判断情感倾向(正面/负面/中性){explain_str}。\n\n"
                f"{a['text']}"
            )
        elif task == "topic":
            prompt = (
                f"请判断以下文本的主题类别(如:科技、政治、经济、体育、娱乐、教育、健康等){explain_str}。\n\n"
                f"{a['text']}"
            )
        elif task == "intent":
            prompt = (
                f"请识别以下文本的用户意图(如:查询、投诉、购买、咨询、反馈等){explain_str}。\n\n"
                f"{a['text']}"
            )
        elif task == "custom":
            labels = a.get("labels", [])
            if not labels:
                return [types.TextContent(type="text",
                    text="❌ task=custom 时必须提供 labels 参数")]
            labels_str = "、".join(f'"{label}"' for label in labels)
            prompt = (
                f"请将以下文本分类到这些类别之一:{labels_str}。\n"
                f"输出格式:分类结果{explain_str}。\n\n"
                f"{a['text']}"
            )
        else:
            raise ValueError(f"未知分类任务: {task}")
    
        reply = await llm_call(prompt)
        return [types.TextContent(type="text", text=reply)]
    
    raise ValueError(f"未知 ai 工具: {name}")
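The custom-task branch above can be exercised in isolation. This sketch (the standalone helper name is hypothetical; the string-building logic mirrors the handler) shows the prompt it produces and the empty-labels error path:

```python
def build_custom_prompt(text: str, labels: list[str], explain: bool = True):
    """Mirror of the handler's task == "custom" branch (illustrative helper)."""
    if not labels:
        return None  # the real handler returns an error TextContent here
    explain_str = "并简要说明理由" if explain else ",只输出分类结果,不需要解释"
    labels_str = "、".join(f'"{label}"' for label in labels)
    return (
        f"请将以下文本分类到这些类别之一:{labels_str}。\n"
        f"输出格式:分类结果{explain_str}。\n\n"
        f"{text}"
    )

prompt = build_custom_prompt("物流一直不更新", ["投诉", "咨询"])
```

The input text is always appended last, after a blank line, so the classification instruction is fully stated before the content to classify.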
  • Schema definition for ai_classify tool. Defines input parameters: text (required), task (sentiment/topic/intent/custom), labels (for custom task), and explain (boolean).
    types.Tool(
        name="ai_classify",
        description="对文本进行分类,支持情感分析、主题分类、意图识别,或自定义分类标签。",
        inputSchema={
            "type": "object",
            "properties": {
                "text": {
                    "type":        "string",
                    "description": "要分类的文本",
                },
                "task": {
                    "type":        "string",
                    "description": "分类任务: sentiment(情感)/ topic(主题)/ intent(意图)/ custom(自定义)",
                    "enum":        ["sentiment", "topic", "intent", "custom"],
                    "default":     "sentiment",
                },
                "labels": {
                    "type":        "array",
                    "items":       {"type": "string"},
                    "description": "自定义分类标签(task=custom 时必填),如 [\"投诉\",\"咨询\",\"建议\"]",
                    "default":     [],
                },
                "explain": {
                    "type":        "boolean",
                    "description": "是否输出分类理由(默认 true)",
                    "default":     True,
                },
            },
            "required": ["text"],
        },
    ),
  • Registration: ai_classify is registered in the _HANDLERS routing table via iterating over AI_TOOLS and mapping each tool name to the handle_ai handler function.
    for _t in AI_TOOLS:     
        _HANDLERS[_t.name] = handle_ai
    for _t in CODE_TOOLS:   
        _HANDLERS[_t.name] = handle_code
    for _t in TEXT_TOOLS:   
        _HANDLERS[_t.name] = handle_text
    for _t in DATA_TOOLS:   
        _HANDLERS[_t.name] = handle_data
    for _t in WEB_TOOLS:    
        _HANDLERS[_t.name] = handle_web
    for _t in SYSTEM_TOOLS: 
        _HANDLERS[_t.name] = handle_system
  • Helper llm_call function used by ai_classify handler to send the classification prompt to the LLM and return the response.
    async def llm_call(
        prompt: str,
        system: Optional[str] = None,
        temperature: float = 0.7,
    ) -> str:
        """单轮调用"""
        messages = []
        if system:
            messages.append({"role": "system", "content": system})
        messages.append({"role": "user", "content": prompt})
        return await llm_chat(messages, temperature=temperature)
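Message assembly in `llm_call` can be checked without a real model by stubbing `llm_chat` (the stub below is hypothetical; only the message-building logic is taken from the helper above):

```python
import asyncio
from typing import Optional

async def llm_chat(messages, temperature: float = 0.7) -> str:
    # Stub standing in for the real chat-completion call
    return f"{len(messages)} message(s), temp={temperature}"

async def llm_call(prompt: str, system: Optional[str] = None,
                   temperature: float = 0.7) -> str:
    """Single-turn call (mirrors the helper above)."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return await llm_chat(messages, temperature=temperature)

# With a system message, two messages reach llm_chat; without one, only the user turn
reply = asyncio.run(llm_call("分类这段文本", system="You are a classifier."))
```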
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, but the description does not disclose any behavioral traits beyond what the schema already shows (e.g., no mention of model details, latency, or error handling).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no redundancy. Efficiently conveys core purpose, though it could be slightly expanded with usage context without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple classification tool with good schema coverage, but lacks guidance on custom label behavior and the 'explain' parameter, and no output schema details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed parameter descriptions. The description adds no extra meaning beyond listing task types, which are already in the enum.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states it classifies text and lists supported types (sentiment, topic, intent, custom), clearly distinguishing from sibling tools like ai_chat or ai_extract.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance. While the purpose implies classification, there is no comparison to alternatives or mention of prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
