delegate_to_deepseek
Run a sub-agent with its own tool loop to handle batch, repetitive, or mechanical tasks end-to-end, saving main conversation tokens.
Instructions
Delegate a focused task to DeepSeek as a real sub-agent.
DeepSeek runs its own agent loop with Read/Write/Edit/Bash/Glob/Grep/NotebookEdit tools inside the configured workspace. Use this for batch / repetitive / mechanical tasks where you want to save main-conversation tokens and let DeepSeek do the heavy lifting end-to-end.
Good fits:
- Extract i18n keys from N files into JSON
- Translate large chunks of text
- Scan logs for patterns
- Bulk refactors with a clear pattern
- One-off ETL scripts

Bad fits (do it yourself instead):
- Architectural design / cross-file judgment
- Bug root-cause analysis
- Tasks requiring project-specific idioms from CLAUDE.md or other repo conventions
Args:
- `task`: Clear description of what DeepSeek should accomplish, including success criteria and file paths involved.
- `context`: Optional additional context, such as project conventions, related files DeepSeek should consider, and output format requirements. Include this when project-specific knowledge matters.
Returns: A summary of what DeepSeek did, including files affected, turns used, tokens consumed, and any issues. Always verify the result by reading a sample of the affected files before declaring success to the user.
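For concreteness, a delegation call might pass arguments like these (illustrative values only; the file paths and key conventions are hypothetical, not from the repository):

```python
# Hypothetical example arguments for delegate_to_deepseek.
task = (
    "Extract every i18n key used in src/components/*.vue into locales/en.json. "
    "Success criteria: each $t('...') key appears in the JSON exactly once."
)
context = (
    "Keys use dot notation (e.g. 'nav.home'). "
    "Preserve existing entries in locales/en.json; output 2-space-indented JSON."
)
```

Note that the success criteria live inside `task` itself: the sub-agent only sees the strings you pass, so anything it needs to verify must be stated there.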
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| task | Yes | Clear description of what DeepSeek should accomplish, including success criteria and file paths involved. | |
| context | No | Optional additional context: project conventions, related files, output format requirements. | `""` |
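Since FastMCP derives the JSON schema from the Python signature, the generated input schema should be roughly equivalent to the following sketch (the exact titles and metadata depend on the FastMCP version):

```python
# Approximate JSON Schema FastMCP would derive from
# delegate_to_deepseek(task: str, context: str = "") -> str.
input_schema = {
    "type": "object",
    "properties": {
        "task": {"type": "string"},
        "context": {"type": "string", "default": ""},
    },
    "required": ["task"],  # context has a default, so it is optional
}
```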
Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| result | Yes | Summary string: DeepSeek's final message plus a usage footer (turns, tool calls, tokens, duration). | |
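On success, `result` is the sub-agent's final message followed by a one-line usage footer, in the format built by the handler's return statement. A sketch with illustrative numbers:

```python
# Illustrative values only; the real numbers come from the agent loop result.
final_message = "Done. Wrote 42 keys to locales/en.json."
result = (
    f"{final_message}\n\n"
    "---\n"
    "[deepseek-mcp] 5 turns, 9 tool calls, 18432 tokens, 41.3s"
)
```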
Implementation Reference
- `src/deepseek_mcp/server.py:97-99` (registration): the tool is registered as an MCP tool via the `@mcp.tool()` decorator on the `delegate_to_deepseek` function. Registration occurs at line 97.

```python
@mcp.tool()
def delegate_to_deepseek(task: str, context: str = "") -> str:
    """Delegate a focused task to DeepSeek as a real sub-agent."""
```

- `src/deepseek_mcp/server.py:98-196` (handler): the handler function `delegate_to_deepseek(task, context)` that implements the tool logic. It checks `DEEPSEEK_MODE`, loads `Config`, prepends the context to the task, calls `run_agent()`, logs usage, and returns a summary:
```python
def delegate_to_deepseek(task: str, context: str = "") -> str:
    """Delegate a focused task to DeepSeek as a real sub-agent.

    (Full docstring reproduced in the Instructions section above.)
    """
    mode = os.getenv("DEEPSEEK_MODE", "auto")
    if mode == "off":
        return (
            "DeepSeek delegation is disabled (DEEPSEEK_MODE=off). "
            "Continue the task yourself in the main conversation."
        )

    try:
        config = Config.load()
    except Exception as e:
        return f"ERROR: deepseek-mcp not configured: {e}"

    full_task = task
    if context:
        full_task = f"{task}\n\n# Additional context\n{context}"

    logger.info(
        "delegate_to_deepseek invoked. Task length=%d, context length=%d",
        len(task),
        len(context),
    )

    try:
        result = run_agent(full_task, config)
    except AgentLoopError as e:
        logger.exception("Agent loop failed")
        return f"ERROR: DeepSeek agent loop failed: {e}"
    except Exception as e:
        logger.exception("Unexpected error during delegation")
        return f"ERROR: unexpected failure: {e}"

    logger.info(
        "delegate_to_deepseek done. turns=%d tool_calls=%d tokens=%d duration=%.2fs",
        result["turns_used"],
        result["tool_calls"],
        result["tokens"]["total"],
        result["duration_seconds"],
    )

    # Usage accounting (human-readable, appended to usage.log).
    # Note: only the first 60 characters of task are recorded, never context
    # (context may contain project-sensitive details).
    try:
        # Simple size control: rotate once (rename to .1) when the log exceeds 10 MB.
        if _USAGE_LOG.exists() and _USAGE_LOG.stat().st_size > 10 * 1024 * 1024:
            try:
                _USAGE_LOG.replace(_USAGE_LOG.with_suffix(".log.1"))
            except OSError:
                pass
        with open(_USAGE_LOG, "a", encoding="utf-8") as f:
            f.write(
                f"{result['duration_seconds']:.1f}s "
                f"turns={result['turns_used']:>2} "
                f"tools={result['tool_calls']:>2} "
                f"tokens={result['tokens']['total']:>6} "
                f"task={task[:60]!r}\n"
            )
        try:
            os.chmod(_USAGE_LOG, 0o600)
        except OSError:
            pass
    except Exception:
        pass  # Logging failures must not break the main flow.

    return (
        f"{result['final_message']}\n\n"
        f"---\n"
        f"[deepseek-mcp] {result['turns_used']} turns, "
        f"{result['tool_calls']} tool calls, "
        f"{result['tokens']['total']} tokens, "
        f"{result['duration_seconds']}s"
    )
```

- The `run_agent()` helper invoked by the handler. It builds the system prompt with the tool list and workspace, runs the DeepSeek chat-completion loop with tool-call execution, and returns the results:
```python
def run_agent(task: str, config: Config) -> dict:
    """Run the full agent loop.

    Returns a dict:
    - final_message: str (DeepSeek's final reply)
    - turns_used: int
    - tokens: {prompt, completion, total}
    - tool_calls: int
    - duration_seconds: float
    """
    client = OpenAI(api_key=config.api_key, base_url=config.base_url)
    tools = build_tool_schemas(config.allowed_tools)
    system_prompt = SYSTEM_PROMPT_TEMPLATE.format(
        tools=", ".join(config.allowed_tools),
        workspace=config.workspace,
    )
    messages: list[dict] = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": task},
    ]
    total_prompt_tokens = 0
    total_completion_tokens = 0
    tool_call_count = 0
    started = time.time()

    for turn in range(config.max_turns):
        response = _call_with_retry(client, config, messages, tools, turn)
        usage = response.usage
        if usage:
            total_prompt_tokens += usage.prompt_tokens
            total_completion_tokens += usage.completion_tokens
        msg = response.choices[0].message

        # Keep the raw dict so every field survives, including reasoning_content
        # from DeepSeek v4-pro thinking mode -- the API requires reasoning_content
        # to be sent back on the next turn, otherwise it returns a 400 error.
        raw = response.model_dump(exclude_none=True)
        msg_dict = raw["choices"][0]["message"]
        messages.append(msg_dict)

        # No tool_calls means DeepSeek has decided to finish.
        if not msg.tool_calls:
            return {
                "final_message": msg.content or "(empty response)",
                "turns_used": turn + 1,
                "tokens": {
                    "prompt": total_prompt_tokens,
                    "completion": total_completion_tokens,
                    "total": total_prompt_tokens + total_completion_tokens,
                },
                "tool_calls": tool_call_count,
                "duration_seconds": round(time.time() - started, 2),
            }

        # Execute the tool calls in order.
        for tc in msg.tool_calls:
            tool_call_count += 1
            tool_name = tc.function.name
            try:
                args = json.loads(tc.function.arguments)
            except json.JSONDecodeError as e:
                result = f"ERROR: invalid JSON in tool arguments: {e}"
            else:
                logger.info(
                    "Turn %d tool_call: %s(%s)",
                    turn,
                    tool_name,
                    _redact_args_for_log(args),
                )
                result = execute_tool(tool_name, args, config.workspace)
            messages.append(
                {
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": result,
                }
            )

    # Hit max_turns without converging -- surface only the last assistant
    # content, without dragging along the full tool_calls blob.
    last_text = ""
    for m in reversed(messages):
        if m.get("role") == "assistant" and m.get("content"):
            last_text = str(m["content"])[:500]
            break
    raise AgentLoopError(
        f"Agent loop exceeded max_turns ({config.max_turns}). "
        f"Last assistant text: {last_text or '(none)'}"
    )
```

- `src/deepseek_mcp/server.py:98-132` (schema): the tool's input schema is defined implicitly via the function signature, `task: str` (required) and `context: str = ""` (optional). The docstring serves as the description for both the tool and its parameters (FastMCP convention). Signature excerpt:

```python
def delegate_to_deepseek(task: str, context: str = "") -> str:
    """Delegate a focused task to DeepSeek as a real sub-agent."""
```
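For reference, the usage.log line format written by the handler's `f.write` call can be reproduced in isolation (values are illustrative, not real measurements):

```python
# Reproduces the usage.log line format from the handler.
duration_seconds = 41.3
turns_used = 5
tool_calls = 9
total_tokens = 18432
task = "Extract i18n keys from src/**.vue"

line = (
    f"{duration_seconds:.1f}s "
    f"turns={turns_used:>2} "
    f"tools={tool_calls:>2} "
    f"tokens={total_tokens:>6} "
    f"task={task[:60]!r}\n"
)
# -> 41.3s turns= 5 tools= 9 tokens= 18432 task='Extract i18n keys from src/**.vue'
```

The right-aligned width specifiers (`:>2`, `:>6`) keep the columns lined up across entries, which is what makes the log grep- and sort-friendly.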