delegate_to_deepseek

Run a sub-agent with its own tool loop to handle batch, repetitive, or mechanical tasks end-to-end, saving main conversation tokens.

Instructions

Delegate a focused task to DeepSeek as a real sub-agent.

DeepSeek runs its own agent loop with Read/Write/Edit/Bash/Glob/Grep/NotebookEdit tools inside the configured workspace. Use this for batch / repetitive / mechanical tasks where you want to save main-conversation tokens and let DeepSeek do the heavy lifting end-to-end.

Good fits:

  • Extract i18n keys from N files into JSON

  • Translate large chunks of text

  • Scan logs for patterns

  • Bulk refactors with a clear pattern

  • One-off ETL scripts

Bad fits (do it yourself instead):

  • Architectural design / cross-file judgment

  • Bug root-cause analysis

  • Tasks requiring project-specific idioms from CLAUDE.md or other repo conventions

Args:

  • task: Clear description of what DeepSeek should accomplish, including success criteria and file paths involved.

  • context: Optional additional context (project conventions, related files DeepSeek should consider, output format requirements). Include this when project-specific knowledge matters.

Returns: A summary of what DeepSeek did, including files affected, turns used, tokens consumed, and any issues. Always verify the result by reading a sample of the affected files before declaring success to the user.
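For concreteness, a delegation might pass arguments shaped like the sketch below (the task wording and file paths are invented for illustration; how you send them depends on your MCP client):

```python
# Hypothetical arguments for delegate_to_deepseek; paths and wording
# are invented for illustration.
arguments = {
    "task": (
        "Extract every i18n key used in src/components into locales/en.json. "
        "Success criteria: each key appears exactly once, with an empty string value."
    ),
    "context": (
        "Keys use dot notation (e.g. 'settings.title'). "
        "Preserve entries already present in locales/en.json."
    ),
}
```

Note how the task spells out success criteria and file paths, as the Args guidance asks.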

Input Schema

Name     Required  Description                                  Default
task     Yes       What DeepSeek should accomplish              (none)
context  No        Optional additional context                  ""
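Since FastMCP derives the schema from the Python signature (`task: str`, `context: str = ""`, shown later on this page), the generated input schema is presumably close to this sketch (illustrative, not copied from the server):

```python
# Approximate input schema as FastMCP would derive it from
# delegate_to_deepseek(task: str, context: str = "") -> str
input_schema = {
    "type": "object",
    "properties": {
        "task": {"type": "string"},
        "context": {"type": "string", "default": ""},
    },
    "required": ["task"],
}
```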

Output Schema

Name    Required  Description                                   Default
result  Yes       Summary of the delegation run                 (none)
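The handler returns a plain string summary, so the output schema presumably amounts to a single required string field:

```python
# Approximate output schema: one required string summary; illustrative only.
output_schema = {
    "type": "object",
    "properties": {"result": {"type": "string"}},
    "required": ["result"],
}
```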

Implementation Reference

  • The tool is registered as an MCP tool via the @mcp.tool() decorator on the `delegate_to_deepseek` function. Registration occurs at line 97.
    @mcp.tool()
    def delegate_to_deepseek(task: str, context: str = "") -> str:
        """Delegate a focused task to DeepSeek as a real sub-agent.
  • The handler function `delegate_to_deepseek(task, context)` that implements the tool logic: checks DEEPSEEK_MODE, loads Config, prepends context to task, calls run_agent(), logs usage, and returns a summary.
    def delegate_to_deepseek(task: str, context: str = "") -> str:
        """Delegate a focused task to DeepSeek as a real sub-agent.
    
        DeepSeek runs its own agent loop with Read/Write/Edit/Bash/Glob/Grep/NotebookEdit tools
        inside the configured workspace. Use this for batch / repetitive / mechanical
        tasks where you want to save main-conversation tokens and let DeepSeek do the
        heavy lifting end-to-end.
    
        Good fits:
          - Extract i18n keys from N files into JSON
          - Translate large chunks of text
          - Scan logs for patterns
          - Bulk refactors with a clear pattern
          - One-off ETL scripts
    
        Bad fits (do it yourself instead):
          - Architectural design / cross-file judgment
          - Bug root-cause analysis
          - Tasks requiring project-specific idioms from CLAUDE.md or other repo conventions
    
        Args:
            task: Clear description of what DeepSeek should accomplish, including
                  success criteria and file paths involved.
            context: Optional additional context — project conventions, related
                     files DeepSeek should consider, output format requirements.
                     Include this when project-specific knowledge matters.
    
        Returns:
            A summary of what DeepSeek did, including files affected, turns used,
            tokens consumed, and any issues. Always verify the result by reading
            a sample of the affected files before declaring success to the user.
        """
        mode = os.getenv("DEEPSEEK_MODE", "auto")
        if mode == "off":
            return (
                "DeepSeek delegation is disabled (DEEPSEEK_MODE=off). "
                "Continue the task yourself in the main conversation."
            )
    
        try:
            config = Config.load()
        except Exception as e:
            return f"ERROR: deepseek-mcp not configured: {e}"
    
        full_task = task
        if context:
            full_task = f"{task}\n\n# Additional context\n{context}"
    
        logger.info("delegate_to_deepseek invoked. Task length=%d, context length=%d", len(task), len(context))
    
        try:
            result = run_agent(full_task, config)
        except AgentLoopError as e:
            logger.exception("Agent loop failed")
            return f"ERROR: DeepSeek agent loop failed: {e}"
        except Exception as e:
            logger.exception("Unexpected error during delegation")
            return f"ERROR: unexpected failure: {e}"
    
        logger.info(
            "delegate_to_deepseek done. turns=%d tool_calls=%d tokens=%d duration=%.2fs",
            result["turns_used"],
            result["tool_calls"],
            result["tokens"]["total"],
            result["duration_seconds"],
        )
    
        # Usage accounting (human-readable, appended to usage.log).
        # Note: record only a 60-character summary of the task, never the
        # context (which may contain sensitive project details).
        try:
            # Simple size control: rotate once when the log exceeds 10 MB (rename to .1)
            if _USAGE_LOG.exists() and _USAGE_LOG.stat().st_size > 10 * 1024 * 1024:
                try:
                    _USAGE_LOG.replace(_USAGE_LOG.with_suffix(".log.1"))
                except OSError:
                    pass
            with open(_USAGE_LOG, "a", encoding="utf-8") as f:
                f.write(
                    f"{result['duration_seconds']:.1f}s  "
                    f"turns={result['turns_used']:>2}  "
                    f"tools={result['tool_calls']:>2}  "
                    f"tokens={result['tokens']['total']:>6}  "
                    f"task={task[:60]!r}\n"
                )
            try:
                os.chmod(_USAGE_LOG, 0o600)
            except OSError:
                pass
        except Exception:
            pass  # a logging failure must not break the main flow
    
        return (
            f"{result['final_message']}\n\n"
            f"---\n"
            f"[deepseek-mcp] {result['turns_used']} turns, "
            f"{result['tool_calls']} tool calls, "
            f"{result['tokens']['total']} tokens, "
            f"{result['duration_seconds']}s"
        )
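Mentally executing the f-string in the usage-logging block above, one appended usage.log entry would look like this (the numbers and task are invented):

```python
# Reproducing the usage.log line format from the handler above,
# with made-up result values.
result = {
    "duration_seconds": 12.3,
    "turns_used": 3,
    "tool_calls": 7,
    "tokens": {"total": 15432},
}
task = "Extract i18n keys from src/components into locales/en.json"
line = (
    f"{result['duration_seconds']:.1f}s  "
    f"turns={result['turns_used']:>2}  "
    f"tools={result['tool_calls']:>2}  "
    f"tokens={result['tokens']['total']:>6}  "
    f"task={task[:60]!r}\n"
)
# line == "12.3s  turns= 3  tools= 7  tokens= 15432  task='Extract i18n keys from src/components into locales/en.json'\n"
```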
  • The `run_agent()` helper function invoked by the handler. It builds the system prompt with tools/workspace, runs the DeepSeek chat completion loop with tool call execution, and returns results.
    def run_agent(task: str, config: Config) -> dict:
        """跑完整 agent loop。
    
        返回 dict:
          - final_message: str (DeepSeek 给的最终答复)
          - turns_used: int
          - tokens: {prompt, completion, total}
          - tool_calls: int
          - duration_seconds: float
        """
        client = OpenAI(api_key=config.api_key, base_url=config.base_url)
        tools = build_tool_schemas(config.allowed_tools)
    
        system_prompt = SYSTEM_PROMPT_TEMPLATE.format(
            tools=", ".join(config.allowed_tools),
            workspace=config.workspace,
        )
    
        messages: list[dict] = [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": task},
        ]
    
        total_prompt_tokens = 0
        total_completion_tokens = 0
        tool_call_count = 0
        started = time.time()
    
        for turn in range(config.max_turns):
            response = _call_with_retry(client, config, messages, tools, turn)
    
            usage = response.usage
            if usage:
                total_prompt_tokens += usage.prompt_tokens
                total_completion_tokens += usage.completion_tokens
    
            msg = response.choices[0].message
    
            # Use the raw dict to keep every field, including reasoning_content from
            # DeepSeek v4-pro thinking mode: the API requires reasoning_content to be
            # sent back on the next turn, otherwise it returns a 400 error.
            raw = response.model_dump(exclude_none=True)
            msg_dict = raw["choices"][0]["message"]
            messages.append(msg_dict)
    
            # No tool_calls means DeepSeek has decided to finish
            if not msg.tool_calls:
                return {
                    "final_message": msg.content or "(empty response)",
                    "turns_used": turn + 1,
                    "tokens": {
                        "prompt": total_prompt_tokens,
                        "completion": total_completion_tokens,
                        "total": total_prompt_tokens + total_completion_tokens,
                    },
                    "tool_calls": tool_call_count,
                    "duration_seconds": round(time.time() - started, 2),
                }
    
            # Execute the tool calls in order
            for tc in msg.tool_calls:
                tool_call_count += 1
                tool_name = tc.function.name
                try:
                    args = json.loads(tc.function.arguments)
                except json.JSONDecodeError as e:
                    result = f"ERROR: invalid JSON in tool arguments: {e}"
                else:
                    logger.info(
                        "Turn %d tool_call: %s(%s)",
                        turn,
                        tool_name,
                        _redact_args_for_log(args),
                    )
                    result = execute_tool(tool_name, args, config.workspace)
    
                messages.append(
                    {
                        "role": "tool",
                        "tool_call_id": tc.id,
                        "content": result,
                    }
                )
    
        # Reached max_turns without converging: surface only the last assistant
        # content, without the full tool_calls blob
        last_text = ""
        for m in reversed(messages):
            if m.get("role") == "assistant" and m.get("content"):
                last_text = str(m["content"])[:500]
                break
        raise AgentLoopError(
            f"Agent loop exceeded max_turns ({config.max_turns}). "
            f"Last assistant text: {last_text or '(none)'}"
        )
  • The tool's input schema is defined implicitly via the function signature: `task: str` (required) and `context: str = ""` (optional). The docstring serves as the description for both the tool and its parameters (FastMCP convention).
    def delegate_to_deepseek(task: str, context: str = "") -> str:
        """Delegate a focused task to DeepSeek as a real sub-agent.
    
        DeepSeek runs its own agent loop with Read/Write/Edit/Bash/Glob/Grep/NotebookEdit tools
        inside the configured workspace. Use this for batch / repetitive / mechanical
        tasks where you want to save main-conversation tokens and let DeepSeek do the
        heavy lifting end-to-end.
    
        Good fits:
          - Extract i18n keys from N files into JSON
          - Translate large chunks of text
          - Scan logs for patterns
          - Bulk refactors with a clear pattern
          - One-off ETL scripts
    
        Bad fits (do it yourself instead):
          - Architectural design / cross-file judgment
          - Bug root-cause analysis
          - Tasks requiring project-specific idioms from CLAUDE.md or other repo conventions
    
        Args:
            task: Clear description of what DeepSeek should accomplish, including
                  success criteria and file paths involved.
            context: Optional additional context — project conventions, related
                     files DeepSeek should consider, output format requirements.
                     Include this when project-specific knowledge matters.
    
        Returns:
            A summary of what DeepSeek did, including files affected, turns used,
            tokens consumed, and any issues. Always verify the result by reading
            a sample of the affected files before declaring success to the user.
        """
        mode = os.getenv("DEEPSEEK_MODE", "auto")
        if mode == "off":
            return (

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It explains that DeepSeek runs its own agent loop with specific tools, saves main-conversation tokens, and returns a summary covering files, turns, tokens, and issues. It also advises verifying results. It could mention potential failures or permission requirements, but it is transparent overall.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured, with clear sections and bullet points for good and bad fits, front-loading the main purpose. Every sentence adds value; the list of tools (Read/Write/Edit/Bash/etc.) could be trimmed slightly but is still informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the tool, and the fact that an output schema exists (though not shown), the description covers parameters, use cases, and return value. The only sibling tool is 'ping', so there is no risk of confusion. It could mention edge cases or error handling, but it is adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema coverage, the description provides detailed semantics for both parameters: 'task' is a clear description with success criteria and file paths; 'context' is optional additional context for project-specific knowledge. This significantly adds value beyond the schema property names.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool delegates a task to DeepSeek as a sub-agent, listing its capabilities (Read/Write/Edit/Bash/etc.) and specifying the scope of tasks (batch/repetitive/mechanical). It distinguishes itself from the only sibling tool 'ping' which is a simple health check.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent usage guidelines: explicitly lists 'Good fits' (e.g., extract i18n keys, translate, scan logs, bulk refactors) and 'Bad fits' (architectural design, bug analysis, tasks needing project-specific idioms), advising the agent to 'do it yourself instead' for bad fits.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
