# ai_translate
Translate text into a target language with automatic source detection. Supports over 40 languages including Chinese, English, Japanese, and more.
## Instructions
Translates text into a target language. Automatically detects the source language; supports 40+ languages, including Chinese, English, Japanese, Korean, French, German, and Spanish.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Text to translate | |
| target_language | No | Target language, e.g. Chinese, English, Japanese, French, German, Spanish, Korean, Russian | 中文 |
| style | No | Translation style: formal / casual (colloquial) / literal (word-for-word) | formal |
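As a sketch, a typical `ai_translate` call passes an arguments object shaped like the schema above (the surrounding MCP client call is omitted, since it depends on the client library in use):

```python
# Example argument payload for ai_translate, matching the input schema.
# Only "text" is required; target_language defaults to "中文" and
# style defaults to "formal" when omitted.
arguments = {
    "text": "The quick brown fox jumps over the lazy dog.",
    "target_language": "中文",  # optional
    "style": "casual",          # optional; one of: formal / casual / literal
}

# Minimal sanity checks mirroring the schema's constraints:
assert "text" in arguments                                   # "required": ["text"]
assert arguments["style"] in {"formal", "casual", "literal"} # "enum" on style
```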
## Implementation Reference
- src/onion_mcp_server/tools/ai.py:218-232 (handler): The handler function for the ai_translate tool. It reads the arguments (text, target_language, style), builds a prompt from the target language and translation style, and calls llm_call to obtain the translation.
```python
elif name == "ai_translate":
    style_map = {
        "formal": "正式、专业",
        "casual": "口语化、自然",
        "literal": "直译、忠实原文",
    }
    style_str = style_map.get(a.get("style", "formal"), "正式、专业")
    prompt = (
        f"请将以下文本翻译为{a.get('target_language', '中文')},"
        f"翻译风格:{style_str}。\n"
        f"只输出译文,不要解释,不要保留原文。\n\n"
        f"{a['text']}"
    )
    reply = await llm_call(prompt)
    return [types.TextContent(type="text", text=reply)]
```

- Input schema for the ai_translate tool definition. Declares text (required), target_language (default: 中文), and style (formal / casual / literal, default: formal) as inputs.
```python
types.Tool(
    name="ai_translate",
    description=(
        "将文本翻译为目标语言。自动检测源语言,"
        "支持中文、英文、日文、韩文、法文、德文、西班牙文等 40+ 语言。"
    ),
    inputSchema={
        "type": "object",
        "properties": {
            "text": {
                "type": "string",
                "description": "要翻译的文本",
            },
            "target_language": {
                "type": "string",
                "description": "目标语言,如:中文、英文、日文、法文、德文、西班牙文、韩文、俄文等",
                "default": "中文",
            },
            "style": {
                "type": "string",
                "description": "翻译风格: formal(正式)/ casual(口语)/ literal(直译)",
                "enum": ["formal", "casual", "literal"],
                "default": "formal",
            },
        },
        "required": ["text"],
    },
),
```

- src/onion_mcp_server/server.py:48-51 (registration): Registration of ai_translate (via AI_TOOLS) in the server's handler routing table, mapping each AI tool name to handle_ai.
```python
# ── 路由表 ────────────────────────────────────────────────────
_HANDLERS: dict = {}
for _t in AI_TOOLS:
    _HANDLERS[_t.name] = handle_ai
```

- src/onion_mcp_server/tools/ai.py:17-203 (registration): The AI_TOOLS list, containing the Tool definition for ai_translate (lines 57-84) along with the other AI tools.
```python
AI_TOOLS: list[types.Tool] = [
    types.Tool(
        name="ai_chat",
        description=(
            "与 AI 进行多轮对话。支持传入历史消息以保持上下文,"
            "支持自定义 system prompt。"
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "message": {
                    "type": "string",
                    "description": "用户消息",
                },
                "system": {
                    "type": "string",
                    "description": "系统提示词(设定 AI 角色和行为)",
                    "default": "",
                },
                "history": {
                    "type": "array",
                    "description": "历史消息列表,格式: [{\"role\":\"user\",\"content\":\"...\"},{\"role\":\"assistant\",\"content\":\"...\"}]",
                    "items": {
                        "type": "object",
                        "properties": {
                            "role": {"type": "string", "enum": ["user", "assistant"]},
                            "content": {"type": "string"},
                        },
                    },
                    "default": [],
                },
                "temperature": {
                    "type": "number",
                    "description": "温度 0.0~2.0(默认 0.7,越高越有创意)",
                    "default": 0.7,
                },
            },
            "required": ["message"],
        },
    ),
    types.Tool(
        name="ai_translate",
        description=(
            "将文本翻译为目标语言。自动检测源语言,"
            "支持中文、英文、日文、韩文、法文、德文、西班牙文等 40+ 语言。"
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "text": {
                    "type": "string",
                    "description": "要翻译的文本",
                },
                "target_language": {
                    "type": "string",
                    "description": "目标语言,如:中文、英文、日文、法文、德文、西班牙文、韩文、俄文等",
                    "default": "中文",
                },
                "style": {
                    "type": "string",
                    "description": "翻译风格: formal(正式)/ casual(口语)/ literal(直译)",
                    "enum": ["formal", "casual", "literal"],
                    "default": "formal",
                },
            },
            "required": ["text"],
        },
    ),
    types.Tool(
        name="ai_summarize",
        description="对长文本进行摘要,支持要点列表、段落摘要、一句话摘要三种模式。",
        inputSchema={
            "type": "object",
            "properties": {
                "text": {
                    "type": "string",
                    "description": "要摘要的文本",
                },
                "mode": {
                    "type": "string",
                    "description": "摘要模式: bullets(要点列表)/ paragraph(段落)/ one_line(一句话)",
                    "enum": ["bullets", "paragraph", "one_line"],
                    "default": "bullets",
                },
                "max_points": {
                    "type": "integer",
                    "description": "要点数量(bullets 模式有效,默认 5)",
                    "default": 5,
                },
                "language": {
                    "type": "string",
                    "description": "输出语言(默认与原文相同)",
                    "default": "",
                },
            },
            "required": ["text"],
        },
    ),
    types.Tool(
        name="ai_rewrite",
        description="改写文本,支持正式化、口语化、简洁化、扩写四种模式。",
        inputSchema={
            "type": "object",
            "properties": {
                "text": {
                    "type": "string",
                    "description": "要改写的文本",
                },
                "mode": {
                    "type": "string",
                    "description": "改写模式: formal(正式)/ casual(口语)/ concise(简洁)/ expand(扩写)",
                    "enum": ["formal", "casual", "concise", "expand"],
                    "default": "formal",
                },
                "instruction": {
                    "type": "string",
                    "description": "额外改写要求(可选),如:'保持技术术语不变'",
                    "default": "",
                },
            },
            "required": ["text"],
        },
    ),
    types.Tool(
        name="ai_extract",
        description=(
            "从文本中提取结构化信息,支持:人名、地名、时间、"
            "关键词、数字、邮箱、URL、自定义字段。"
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "text": {
                    "type": "string",
                    "description": "要提取信息的文本",
                },
                "fields": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": (
                        "要提取的字段列表,如: [\"人名\",\"地名\",\"时间\",\"关键词\",\"数字\",\"邮箱\",\"URL\"] "
                        "或自定义字段如 [\"产品名\",\"价格\",\"联系方式\"]"
                    ),
                    "default": ["关键词", "人名", "地名", "时间"],
                },
                "output_format": {
                    "type": "string",
                    "description": "输出格式: json / markdown(默认 markdown)",
                    "enum": ["json", "markdown"],
                    "default": "markdown",
                },
            },
            "required": ["text"],
        },
    ),
    types.Tool(
        name="ai_classify",
        description="对文本进行分类,支持情感分析、主题分类、意图识别,或自定义分类标签。",
        inputSchema={
            "type": "object",
            "properties": {
                "text": {
                    "type": "string",
                    "description": "要分类的文本",
                },
                "task": {
                    "type": "string",
                    "description": "分类任务: sentiment(情感)/ topic(主题)/ intent(意图)/ custom(自定义)",
                    "enum": ["sentiment", "topic", "intent", "custom"],
                    "default": "sentiment",
                },
                "labels": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "自定义分类标签(task=custom 时必填),如 [\"投诉\",\"咨询\",\"建议\"]",
                    "default": [],
                },
                "explain": {
                    "type": "boolean",
                    "description": "是否输出分类理由(默认 true)",
                    "default": True,
                },
            },
            "required": ["text"],
        },
    ),
]
```

- The llm_call helper function used by the ai_translate handler to make the single-turn LLM API call with the constructed prompt.
```python
async def llm_call(
    prompt: str,
    system: Optional[str] = None,
    temperature: float = 0.7,
) -> str:
    """单轮调用"""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return await llm_chat(messages, temperature=temperature)
```