code_explain
Gain clear explanations of code functionality and logic across all major programming languages, with adjustable detail levels from brief to detailed.
Instructions
Explains the functionality and logic of code; supports all major programming languages.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | The code to explain | |
| language | No | Programming language (optional; auto-detected if omitted) | |
| detail | No | Detail level: brief, normal, or detailed | normal |
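For illustration, a client might invoke code_explain with arguments like the following. This is a hypothetical payload, not taken from the project; only the code field is required by the schema:

```python
# Hypothetical arguments for a code_explain tool call.
# "code" is the only required field; "language" falls back to
# auto-detection and "detail" defaults to "normal".
arguments = {
    "code": "def add(a, b):\n    return a + b",
    "language": "Python",
    "detail": "brief",
}

# Minimal checks mirroring the schema's constraints:
assert "code" in arguments
assert arguments.get("detail", "normal") in {"brief", "normal", "detailed"}
```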
Implementation Reference
- The handler function handle_code dispatches tool requests by name. For 'code_explain' (lines 109-117), it builds a prompt using detail level (brief/normal/detailed), language, and the user's code, then calls llm_call to get an explanation, and returns the result as TextContent.
````python
async def handle_code(name: str, arguments: dict) -> list[types.TextContent]:
    a = arguments
    if name == "code_explain":
        detail_map = {"brief": "用一两句话简短", "detailed": "非常详细地", "normal": ""}
        detail_str = detail_map.get(a.get("detail", "normal"), "")
        lang = a.get("language", "") or "(自动检测)"
        prompt = (
            f"请{detail_str}解释以下 {lang} 代码的功能和逻辑,"
            f"包括:整体作用、关键步骤、重要变量/函数的含义。\n\n"
            f"```\n{a['code']}\n```"
        )
    elif name == "code_review":
        focus = a.get("focus", ["all"])
        focus_str = "所有方面(Bug、安全、性能、代码风格)" if "all" in focus else "、".join(focus)
        lang = a.get("language", "") or ""
        prompt = (
            f"请对以下 {lang} 代码进行代码审查,重点关注:{focus_str}。\n"
            f"请按以下格式输出:\n"
            f"1. 问题列表(严重程度 + 描述 + 修复建议)\n"
            f"2. 整体评分(1-10)\n"
            f"3. 改进后的代码(如有必要)\n\n"
            f"```\n{a['code']}\n```"
        )
    elif name == "code_generate":
        style_map = {
            "simple": "简洁易读",
            "production": "生产级别(含完整错误处理、日志、类型注解)",
        }
        style_str = style_map.get(a.get("style", "simple"), "简洁易读")
        lang = a.get("language", "Python")
        prompt = (
            f"请用 {lang} 编写以下功能的代码,风格要求:{style_str}。\n"
            f"只输出代码和必要的注释,不需要额外解释。\n\n"
            f"需求:{a['description']}"
        )
    elif name == "code_convert":
        prompt = (
            f"请将以下 {a['from_language']} 代码转换为 {a['to_language']}。\n"
            f"要求:保持逻辑完全一致,使用目标语言的惯用写法,只输出转换后的代码。\n\n"
            f"```{a['from_language'].lower()}\n{a['code']}\n```"
        )
    elif name == "code_fix":
        err_part = f"\n\n错误信息:\n```\n{a['error_message']}\n```" if a.get("error_message") else ""
        lang = a.get("language", "") or ""
        prompt = (
            f"请修复以下 {lang} 代码中的错误。\n"
            f"要求:输出修复后的完整代码,并在代码前简要说明修复了什么问题。\n\n"
            f"```\n{a['code']}\n```{err_part}"
        )
    elif name == "code_docstring":
        style = a.get("style", "google")
        language = a.get("language", "Python")
        prompt = (
            f"请为以下 {language} 代码生成 {style} 风格的文档注释(docstring)。\n"
            f"要求:为每个函数/类/方法添加完整的参数说明、返回值说明、异常说明(如有)。\n"
            f"只输出添加了文档注释的完整代码。\n\n"
            f"```\n{a['code']}\n```"
        )
    else:
        raise ValueError(f"未知 code 工具: {name}")
    reply = await llm_call(prompt)
    return [types.TextContent(type="text", text=reply)]
````

- The tool definition (input schema) for 'code_explain': declares name='code_explain', description='解释代码的功能和逻辑,支持所有主流编程语言。', and inputSchema with properties: code (required string), language (optional string), detail (optional enum: brief/normal/detailed).
```python
CODE_TOOLS: list[types.Tool] = [
    types.Tool(
        name="code_explain",
        description="解释代码的功能和逻辑,支持所有主流编程语言。",
        inputSchema={
            "type": "object",
            "properties": {
                "code": {"type": "string", "description": "要解释的代码"},
                "language": {
                    "type": "string",
                    "description": "编程语言(可选,自动检测)",
                    "default": "",
                },
                "detail": {
                    "type": "string",
                    "enum": ["brief", "normal", "detailed"],
                    "default": "normal",
                },
            },
            "required": ["code"],
        },
    ),
    # ... definitions for the other code_* tools follow in the source.
]
```

- src/onion_mcp_server/server.py:52-53 (registration): the _HANDLERS dict maps the tool's name to the handle_code handler function, so when a call_tool request arrives with name='code_explain', it dispatches to handle_code.
```python
for _t in CODE_TOOLS:
    _HANDLERS[_t.name] = handle_code
```

- src/onion_mcp_server/server.py:39-46 (registration): ALL_TOOLS aggregates all tool definitions (including CODE_TOOLS) and is returned by the list_tools handler, making 'code_explain' discoverable by clients.
```python
ALL_TOOLS: list[types.Tool] = [
    *AI_TOOLS,
    *CODE_TOOLS,
    *TEXT_TOOLS,
    *DATA_TOOLS,
    *WEB_TOOLS,
    *SYSTEM_TOOLS,
]
```

- The llm_call helper function is called by handle_code to send the constructed prompt to an LLM (via an OpenAI-compatible API) and return the generated explanation text.
```python
async def llm_call(
    prompt: str,
    system: Optional[str] = None,
    temperature: float = 0.7,
) -> str:
    """单轮调用"""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return await llm_chat(messages, temperature=temperature)
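The message list llm_call builds follows the OpenAI chat format. A minimal sketch of the same construction (without the network call; build_messages is a hypothetical name, not part of the server) shows that a system message is only prepended when one is provided:

```python
from typing import Optional

def build_messages(prompt: str, system: Optional[str] = None) -> list[dict]:
    # Same message construction as llm_call above, minus the llm_chat call.
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return messages

# Without a system prompt there is a single user message:
assert build_messages("explain this") == [
    {"role": "user", "content": "explain this"}
]
# With one, the system message comes first:
assert build_messages("explain this", system="你是代码专家")[0]["role"] == "system"
```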