chat_with_files
Analyze and query uploaded files through conversational AI to extract insights, answer questions, and process document content directly within chat interactions.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | User message sent to the model. | |
| session | No | Session identifier used to load and persist chat history. | |
| model | No | Model name passed to the xAI client. | grok-4-1-fast-reasoning |
| file_ids | No | IDs of previously uploaded files to attach to the prompt. | |
| system_prompt | No | System message prepended to the conversation. | |
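As an illustration of the schema above, a client might pass arguments like the following (the session name and file ID are placeholders, not values from the source):

```python
# Illustrative arguments for a chat_with_files call.
# Only "prompt" is required; the rest fall back to their defaults.
example_args = {
    "prompt": "Summarize the key findings in the attached report.",
    "session": "report-review",          # optional: enables chat history
    "model": "grok-4-1-fast-reasoning",  # optional: the documented default
    "file_ids": ["file-abc123"],         # optional: previously uploaded files
}

required = {"prompt"}
assert required <= example_args.keys()
```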
Implementation Reference
- src/server.py:593-630 (handler): The implementation of the `chat_with_files` tool, which uses an xAI client to process a prompt with file attachments and maintains chat history.
```python
@mcp.tool()
async def chat_with_files(
    prompt: str,
    session: Optional[str] = None,
    model: str = "grok-4-1-fast-reasoning",
    file_ids: Optional[List[str]] = None,
    system_prompt: Optional[str] = None,
):
    # Load prior messages if a session was given; start fresh otherwise.
    history = load_history(session) if session else []

    client = Client(api_key=XAI_API_KEY)
    chat = client.chat.create(model=model)

    if system_prompt:
        chat.append(system(system_prompt))

    # Replay the stored conversation so the model has context.
    for message in history:
        if message["role"] == "user":
            chat.append(user(message["content"]))
        elif message["role"] == "assistant":
            chat.append(assistant(message["content"]))

    # file_ids is optional, so guard against None before iterating.
    file_attachments = [file(fid) for fid in file_ids or []]
    chat.append(user(prompt, *file_attachments))

    response = chat.sample()
    client.close()

    # Persist both turns with a timestamp when a session is active.
    if session:
        timestamp = datetime.now().strftime("%d.%m.%Y %H:%M:%S")
        history.append({"role": "user", "content": prompt, "time": timestamp})
        history.append({"role": "assistant", "content": response.content, "time": timestamp})
        save_history(session, history)

    # Append any cited sources below the model's answer.
    result = [response.content]
    if response.citations:
        result.append("\n\n**Sources:**")
        for url in response.citations:
            result.append(f"- {url}")
    return "\n".join(result)
```
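The handler calls `load_history` and `save_history`, whose definitions are not shown in this excerpt. A minimal sketch of what they might look like, assuming JSON files under a local `sessions/` directory (the actual storage backend in src/server.py may differ):

```python
import json
import os
from typing import Dict, List

SESSIONS_DIR = "sessions"  # assumed location; not confirmed by the source


def load_history(session: str) -> List[Dict[str, str]]:
    """Return the saved message list for a session, or [] if none exists."""
    path = os.path.join(SESSIONS_DIR, f"{session}.json")
    if not os.path.exists(path):
        return []
    with open(path, "r", encoding="utf-8") as fh:
        return json.load(fh)


def save_history(session: str, history: List[Dict[str, str]]) -> None:
    """Persist the message list for a session as JSON."""
    os.makedirs(SESSIONS_DIR, exist_ok=True)
    path = os.path.join(SESSIONS_DIR, f"{session}.json")
    with open(path, "w", encoding="utf-8") as fh:
        json.dump(history, fh, ensure_ascii=False, indent=2)
```

With this shape, each `{"role": ..., "content": ..., "time": ...}` dict appended by the handler round-trips unchanged between calls.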