stream_answer
Query Exa's search API to receive streaming AI-generated answers, with support for custom system prompts and structured output schemas.
Instructions
Generate a streaming answer response using Exa.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | The query to answer. | |
| text | No | Whether to include full text in the results. | |
| system_prompt | No | A system prompt to guide the LLM's behavior. | |
| model | No | The model to use for answering. | |
| output_schema | No | JSON schema for structured output. | |
| user_location | No | Two-letter ISO country code for user location. | |
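Under the hood, the handler forwards these parameters as a JSON-RPC `tools/call` request to the public Exa MCP endpoint. A minimal sketch of how that payload is assembled, mirroring the handler's "include only non-None optionals" logic (the helper name `build_stream_answer_request` is illustrative, not part of the source):

```python
from typing import Any

def build_stream_answer_request(query: str, **optional: Any) -> dict[str, Any]:
    """Sketch of the JSON-RPC payload that stream_answer ultimately sends."""
    arguments: dict[str, Any] = {"query": query}
    # Drop optional parameters the caller left unset (None), as the handler does.
    arguments.update({k: v for k, v in optional.items() if v is not None})
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "exa_stream_answer", "arguments": arguments},
    }

req = build_stream_answer_request("What is the capital of France?", text=True, model=None)
# req["params"]["arguments"] == {"query": "What is the capital of France?", "text": True}
```

Note that `model=None` is omitted from the arguments entirely rather than sent as null.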
Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| result | Yes | List of dicts containing partial answers and citations. | |
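The handler always returns a single-element list: on success it wraps the dict returned by `_call_mcp_tool`, and on failure it returns an error dict instead of raising. A sketch of both shapes, with illustrative values taken from the handler's docstring and error path:

```python
# Illustrative result shapes for stream_answer (values are examples only).
success = [{"content": "Paris", "citations": []}]   # per the handler's docstring
failure = [{"error": "Query cannot be empty"}]      # returned when the call raises
```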
Implementation Reference
- `src/mcp_exa/_server.py:341-386` (handler) — The MCP tool handler for `stream_answer`. Defined with the `@mcp.tool()` decorator, it accepts `query`, `text`, `system_prompt`, `model`, `output_schema`, and `user_location` parameters. It builds an arguments dict and delegates to `_call_mcp_tool` with tool name `exa_stream_answer`, which calls the public Exa MCP endpoint at https://mcp.exa.ai/mcp via JSON-RPC.

```python
@mcp.tool()
async def stream_answer(
    query: str,
    text: bool | None = None,
    system_prompt: str | None = None,
    model: Literal["exa"] | None = None,
    output_schema: JSONSchemaInput | None = None,
    user_location: str | None = None,
) -> list[dict[str, Any]]:
    """Generate a streaming answer response using Exa.

    Args:
        query: The query to answer.
        text: Whether to include full text in the results.
        system_prompt: A system prompt to guide the LLM's behavior.
        model: The model to use for answering.
        output_schema: JSON schema for structured output.
        user_location: Two-letter ISO country code for user location.

    Returns:
        List of dicts containing partial answers and citations.

    Example:
        >>> await stream_answer("What is the capital of France?")
        [{"content": "Paris", "citations": [...]}]
    """
    if not query:
        raise ValueError("Query cannot be empty")
    arguments: dict[str, Any] = {"query": query}
    if text is not None:
        arguments["text"] = text
    if system_prompt is not None:
        arguments["system_prompt"] = system_prompt
    if model is not None:
        arguments["model"] = model
    if output_schema is not None:
        arguments["output_schema"] = output_schema
    if user_location is not None:
        arguments["user_location"] = user_location
    try:
        result = await _call_mcp_tool("exa_stream_answer", arguments)
        return [result]
    except Exception as e:
        return [{"error": str(e)}]
```

- `src/mcp_exa/_server.py:342-349` (schema) — Input schema for the `stream_answer` tool: `query` (str, required), `text` (bool, optional), `system_prompt` (str, optional), `model` (`Literal["exa"]`, optional), `output_schema` (dict, optional), `user_location` (str, optional). Returns a list of dicts.
```python
async def stream_answer(
    query: str,
    text: bool | None = None,
    system_prompt: str | None = None,
    model: Literal["exa"] | None = None,
    output_schema: JSONSchemaInput | None = None,
    user_location: str | None = None,
) -> list[dict[str, Any]]:
```

- `src/mcp_exa/_server.py:341-341` (registration) — The `@mcp.tool()` decorator registers `stream_answer` as an MCP tool on the FastMCP server instance `mcp`.

```python
@mcp.tool()
```

- `src/mcp_exa/_server.py:30-69` (helper) — The helper `_call_mcp_tool` makes JSON-RPC calls to the public Exa MCP server (https://mcp.exa.ai/mcp). It sends a `tools/call` request with the tool name and arguments, parses the SSE-framed response, and extracts the text content from the result.
```python
async def _call_mcp_tool(tool_name: str, arguments: dict[str, Any]) -> dict[str, Any]:
    """Call a tool on the public Exa MCP server."""
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": tool_name,
            "arguments": arguments,
        },
    }
    async with httpx.AsyncClient(timeout=60.0) as client:
        response = await client.post(
            f"{BASE_URL}/mcp",
            json=request,
            headers={
                "accept": "application/json, text/event-stream",
                "content-type": "application/json",
            },
        )
        response.raise_for_status()
        # Responses arrive SSE-framed; each JSON payload sits on a "data: " line.
        for line in response.text.split("\n"):
            if not line.startswith("data: "):
                continue
            try:
                # json.loads, not eval: eval on a network response is unsafe and
                # fails on JSON literals such as true/false/null.
                # (Requires `import json` at the module top.)
                parsed = json.loads(line[len("data: "):])
            except json.JSONDecodeError:
                continue
            if "result" in parsed and parsed["result"].get("content"):
                return {"results": parsed["result"]["content"][0].get("text", "")}
    return {"results": ""}
```
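The helper's SSE-parsing step can be exercised in isolation. A small self-contained sketch of the same logic (the function name `extract_first_text` and the sample payload are illustrative, not taken from the source):

```python
import json
from typing import Any

def extract_first_text(response_text: str) -> str:
    """Pull the first content item's text out of an SSE-framed JSON-RPC reply."""
    for line in response_text.split("\n"):
        if not line.startswith("data: "):
            continue  # skip SSE framing lines such as "event: ..." or blanks
        try:
            parsed: dict[str, Any] = json.loads(line[len("data: "):])
        except json.JSONDecodeError:
            continue  # tolerate partial or non-JSON data lines
        content = parsed.get("result", {}).get("content")
        if content:
            return content[0].get("text", "")
    return ""

sample = 'data: {"jsonrpc": "2.0", "id": 1, "result": {"content": [{"type": "text", "text": "Paris"}]}}'
extract_first_text(sample)  # -> "Paris"
```

Lines that are not `data:` payloads (or that fail to parse) are skipped, matching the helper's fall-through to an empty result.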