grok_code
Generate code or get programming guidance by providing a task, an optional language hint, and optional context. This tool helps developers write, debug, and understand code through AI assistance.
Instructions
Ask Grok for code or code-related guidance. You can provide a language hint and context (e.g., file snippets or requirements). Returns assistant text by default.
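For orientation, here is a hedged sketch of the arguments a grok_code call might carry. The field names mirror the Input Schema below; the task text, context, and commented-out model name are invented examples, not values defined by this tool.

```python
# Illustrative only: example arguments for a grok_code tool call.
# All values are placeholders; only the field names come from the schema below.
example_args = {
    "task": "Write a function that deduplicates a list while preserving order.",
    "language": "python",  # optional language hint
    "context": "Target Python 3.11; standard library only.",  # optional snippets/requirements
    # "model": "<grok model name>",  # optional; pass only to request a specific model
    "raw_output": False,   # default: return assistant text only
    "timeout_s": 180.0,    # default CLI timeout in seconds
}
```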
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| task | Yes | Description of what code or help you need. | |
| language | No | Optional language hint (e.g., 'python', 'typescript'). | |
| context | No | Optional context (repo constraints, file snippets, tests, etc.). | |
| model | No | Optional Grok model name, passed to the CLI with -m. | |
| raw_output | No | If true, returns a structured dict instead of plain assistant text. | false |
| timeout_s | No | Process timeout in seconds. | 180 |
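Based on the handler referenced under Implementation Reference, the tool returns either plain assistant text or, when raw_output is true, a structured dict. The sketch below outlines both shapes with placeholder values.

```python
# Sketch of the two return shapes, inferred from the grok_code handler below.
# Values are placeholders, not real output.
text_result = "<assistant text>"  # raw_output = False (default)

raw_result = {                    # raw_output = True
    "text": "<concatenated assistant text>",
    "messages": [{"role": "assistant", "content": "..."}],  # dumped GrokMessage objects
    "raw": "<raw Grok CLI stdout>",
    "model": "<requested model or None>",
}
```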
Implementation Reference
- src/grok_cli_mcp/server.py:129-136 (registration): Registration of the grok_code tool using the @server.tool decorator, defining name, title, and description.

  ```python
  @server.tool(
      name="grok_code",
      title="Grok Code Task",
      description=(
          "Ask Grok for code or code-related guidance. You can provide a language hint and context "
          "(e.g., file snippets or requirements). Returns assistant text by default."
      ),
  )
  ```
- src/grok_cli_mcp/server.py:137-190 (handler): The handler function for the grok_code tool. Builds a code-specific prompt from system instructions, the language hint, and the context, then calls _run_grok and processes the output (see the prompt-assembly sketch after this list).

  ```python
  async def grok_code(
      task: str,
      language: Optional[str] = None,
      context: Optional[str] = None,
      model: Optional[str] = None,
      raw_output: bool = False,
      timeout_s: float = 180.0,
      ctx: Optional[Context] = None,
  ) -> str | dict:
      """
      Ask Grok for code generation or guidance.

      Args:
          task: Description of what code/help you need.
          language: Optional language hint (e.g., 'python', 'typescript').
          context: Optional context (repo constraints, file snippets, tests, etc.).
          model: Optional Grok model name.
          raw_output: If true, returns structured output.
          timeout_s: Process timeout in seconds.
          ctx: FastMCP context.

      Returns:
          Assistant's text response, or dict with full details if raw_output=True.
      """
      sys_instructions = [
          "You are an expert software engineer.",
          "Respond with clear, correct, directly usable code and concise explanations.",
          "Prefer minimal dependencies and explain tradeoffs when relevant.",
      ]
      if language:
          sys_instructions.append(f"Primary language: {language}")
      if context:
          sys_instructions.append("Context:\n" + context.strip())

      prompt = "\n\n".join(
          [
              "\n".join(sys_instructions),
              "Task:",
              task.strip(),
          ]
      )

      result = await _run_grok(prompt, model=model, timeout_s=timeout_s, ctx=ctx)
      assistant_text = _collect_assistant_text(result.messages) if result.messages else (result.raw or "")

      if raw_output:
          return {
              "text": assistant_text,
              "messages": [m.model_dump() for m in result.messages],
              "raw": result.raw,
              "model": result.model,
          }
      return assistant_text
  ```
- src/grok_cli_mcp/utils.py:147-244 (helper): Key helper function _run_grok that executes the Grok CLI binary with the given prompt and model, manages the subprocess and timeout, parses the JSON output into GrokMessage objects, and handles errors (see the JSON-shape sketch after this list).

  ```python
  async def _run_grok(
      prompt: str,
      *,
      model: Optional[str],
      timeout_s: float,
      ctx: Optional[Context] = None,
  ) -> GrokParsedOutput:
      """
      Run Grok CLI in headless mode: `grok -p "<prompt>" [-m <model>]`
      Parse JSON output and return a structured response.

      Args:
          prompt: The prompt to send to Grok.
          model: Optional Grok model name (passed with -m if provided).
          timeout_s: Process timeout in seconds.
          ctx: Optional FastMCP context for logging.

      Returns:
          GrokParsedOutput with messages, model, and raw output.

      Raises:
          FileNotFoundError: If Grok CLI binary not found.
          TimeoutError: If CLI execution exceeds timeout.
          RuntimeError: If CLI exits with non-zero code.
      """
      grok_bin = _resolve_grok_path()
      if not shutil.which(grok_bin) and not os.path.exists(grok_bin):
          raise FileNotFoundError(
              f"Grok CLI not found. Checked {grok_bin} and PATH. "
              f"Set {ENV_GROK_CLI_PATH} or install grok CLI."
          )

      _require_api_key()

      args = [grok_bin, "-p", prompt]
      if model:
          # Only pass -m if caller supplied a model; if CLI rejects, the error will be caught
          args += ["-m", model]

      env = os.environ.copy()
      # Ensure GROK_API_KEY is present in the subprocess environment
      env[ENV_GROK_API_KEY] = env[ENV_GROK_API_KEY]

      if ctx:
          await ctx.info(f"Invoking Grok CLI {'with model ' + model if model else ''}...")

      proc = await asyncio.create_subprocess_exec(
          *args, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE, env=env
      )
      try:
          stdout_b, stderr_b = await asyncio.wait_for(proc.communicate(), timeout=timeout_s)
      except asyncio.TimeoutError:
          try:
              proc.kill()
          except Exception:
              pass
          raise TimeoutError(f"Grok CLI timed out after {timeout_s:.0f}s")

      stdout = (stdout_b or b"").decode("utf-8", errors="replace")
      stderr = (stderr_b or b"").decode("utf-8", errors="replace")

      if proc.returncode != 0:
          # Grok CLI error; include stderr to help debugging
          raise RuntimeError(f"Grok CLI failed (exit {proc.returncode}): {stderr.strip() or stdout.strip()}")

      # Parse JSON payload
      parsed: Any
      try:
          parsed = _extract_json_from_text(stdout)
      except Exception as e:
          # If JSON parse fails, provide raw output in a structured wrapper
          if ctx:
              await ctx.warning(f"Failed to parse Grok JSON output: {e}. Returning raw output.")
          return GrokParsedOutput(messages=[], model=model, raw=stdout)

      # Normalize to list of GrokMessage
      messages: list[GrokMessage] = []
      if isinstance(parsed, dict) and "role" in parsed and "content" in parsed:
          messages = [GrokMessage(**parsed)]
      elif isinstance(parsed, list):
          # Either a list of messages or a list with one message
          for item in parsed:
              if isinstance(item, dict) and "role" in item and "content" in item:
                  messages.append(GrokMessage(**item))
      elif isinstance(parsed, dict) and "messages" in parsed:
          for item in parsed.get("messages", []) or []:
              if isinstance(item, dict) and "role" in item and "content" in item:
                  messages.append(GrokMessage(**item))
      else:
          # Unknown shape: keep raw and empty messages
          if ctx:
              await ctx.warning("Unrecognized JSON shape from Grok CLI. Returning raw output.")
          return GrokParsedOutput(messages=[], model=model, raw=stdout)

      return GrokParsedOutput(messages=messages, model=model, raw=stdout)
  ```
- src/grok_cli_mcp/utils.py:105-144 (helper): Helper function to extract and concatenate text content from assistant-role messages in the parsed Grok output (see the content-form examples after this list).

  ```python
  def _collect_assistant_text(messages: Sequence[GrokMessage]) -> str:
      """
      Collate assistant message text from a sequence of messages.

      Handles:
      - content as a plain string
      - content as a list of blocks with 'type'=='text'
      - content as a dict with 'text' field

      Args:
          messages: Sequence of GrokMessage objects.

      Returns:
          Concatenated text from all assistant messages.
      """
      chunks: list[str] = []
      for m in messages:
          if m.role != "assistant":
              continue
          c = m.content
          if isinstance(c, str):
              chunks.append(c)
          elif isinstance(c, list):
              for block in c:
                  try:
                      if isinstance(block, dict) and block.get("type") == "text" and "text" in block:
                          chunks.append(str(block["text"]))
                      elif isinstance(block, dict) and "content" in block:
                          chunks.append(str(block["content"]))
                  except Exception:
                      continue
          elif isinstance(c, dict) and "text" in c:
              chunks.append(str(c["text"]))
          else:
              # Fallback: stringify structured content
              try:
                  chunks.append(json.dumps(c, ensure_ascii=False))
              except Exception:
                  chunks.append(str(c))
      return "\n".join([s for s in (s.strip() for s in chunks) if s])
  ```
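To make the handler's prompt construction concrete, the following standalone sketch reproduces the same assembly logic shown in the grok_code handler above. The function name build_code_prompt is ours for illustration and is not part of the package.

```python
# Standalone sketch of the prompt assembly performed by grok_code (not imported from the package).
def build_code_prompt(task: str, language: str | None = None, context: str | None = None) -> str:
    sys_instructions = [
        "You are an expert software engineer.",
        "Respond with clear, correct, directly usable code and concise explanations.",
        "Prefer minimal dependencies and explain tradeoffs when relevant.",
    ]
    if language:
        sys_instructions.append(f"Primary language: {language}")
    if context:
        sys_instructions.append("Context:\n" + context.strip())
    # System lines, then "Task:", then the task text, separated by blank lines.
    return "\n\n".join(["\n".join(sys_instructions), "Task:", task.strip()])


print(build_code_prompt("Add retry logic to the HTTP client", language="python"))
```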
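The normalization step in _run_grok accepts a few JSON shapes from the CLI. The snippet below lists them with placeholder content; these are the shapes the parser recognizes, taken from its branches above, not a specification of the CLI's actual output format.

```python
# JSON shapes that _run_grok normalizes into GrokMessage objects.
# Content values are placeholders for illustration.
single_message = {"role": "assistant", "content": "..."}
message_list = [
    {"role": "user", "content": "..."},
    {"role": "assistant", "content": "..."},
]
wrapped_messages = {"messages": message_list}
# Any other shape falls back to GrokParsedOutput(messages=[], raw=<stdout>).
```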
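For reference, these are the content forms _collect_assistant_text handles, shown here as plain dicts rather than GrokMessage instances.

```python
# Content forms handled by _collect_assistant_text, illustrated with plain dicts
# (the real function receives GrokMessage objects). Only assistant messages are collected.
sample_messages = [
    {"role": "assistant", "content": "plain string content"},
    {"role": "assistant", "content": [
        {"type": "text", "text": "text block"},
        {"content": "nested content block"},
    ]},
    {"role": "assistant", "content": {"text": "dict with a text field"}},
    {"role": "user", "content": "ignored by the collator"},
]
```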