# chat
Send messages to LLMs for help, brainstorming, or second opinions. Start new conversations, continue existing ones, or switch models while maintaining context.
## Instructions
Send a message to an available LLM for help, second opinions, or brainstorming; start new conversations, continue existing ones, or switch models mid-chat. In the first message, provide as much context as possible: the model has no prior knowledge of your problem.
Example workflow:
chat(message: "hello", modelId: "gpt-5-mini") → conversationId: "abc1"
chat(message: "follow-up", conversationId: "abc1") → conversationId: "abc1" (continues)
chat(message: "same question", conversationId: "abc1", modelId: "deepseek-r1") → conversationId: "xyz9" (cloned with new model)
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| message | Yes | The question or request to send—be clear and specific. | |
| conversationId | No | ID of the conversation to continue; omit to start a new one. Use the conversationId from prior responses to keep discussing the same topic. | |
| modelId | No | ID of model to use (call list_models); omitted = default model. To switch models, pass a different modelId with your conversationId — you'll get a new conversationId with the conversation cloned to the new model. | |
| reasoning | No | Set true to have the model show its reasoning steps; useful for complex problems. | |
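
These parameters map one-to-one onto the `ChatRequest` interface shown under Implementation Reference. A short sketch of the three argument shapes from the workflow above:

```typescript
// ChatRequest is exported from src/types.ts (see Implementation Reference).
import type { ChatRequest } from "./types";

// Start a new conversation with an explicit model.
const start: ChatRequest = { message: "hello", modelId: "gpt-5-mini" };

// Continue an existing conversation on the same model.
const followUp: ChatRequest = { message: "follow-up", conversationId: "abc1" };

// A different modelId alongside an existing conversationId clones the
// conversation onto the new model and yields a new conversationId.
const switched: ChatRequest = {
  message: "same question",
  conversationId: "abc1",
  modelId: "deepseek-r1",
  reasoning: true, // also ask the model to include its reasoning steps
};
```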
## Implementation Reference
- `src/mcp-tools.ts:43-154` (handler): The core handler logic for the `chat` tool. It determines or creates the conversation, selects the model and client, appends the user message, calls the OpenAI chat API, stores the assistant response, and returns a JSON payload containing `conversationId`, `response`, `reasoning`, and `modelId`.

```typescript
async ({ message, conversationId, modelId, reasoning }) => {
  try {
    logger.debug("Chat tool called", {
      hasConversationId: !!conversationId,
      modelId,
    });

    let actualConversationId: string;
    let actualModelId: string;

    // Determine conversation ID and model ID
    if (conversationId) {
      const existing = conversationManager.getConversation(conversationId);
      if (!existing) {
        return {
          content: [
            {
              type: "text" as const,
              text: `Error: Conversation not found: ${conversationId}`,
            },
          ],
        };
      }

      if (modelId && modelId !== existing.modelId) {
        // Clone conversation with new model
        actualConversationId = conversationManager.cloneConversation(conversationId, modelId);
        actualModelId = modelId;
      } else {
        // Continue existing conversation
        actualConversationId = conversationId;
        actualModelId = existing.modelId;
      }
    } else {
      // Create new conversation
      actualModelId = modelId || config.models[0].id;
      actualConversationId = conversationManager.createConversation(actualModelId);
    }

    // Validate model exists and get its full model name
    const client = openaiClients.get(actualModelId);
    if (!client) {
      return {
        content: [
          {
            type: "text" as const,
            text: `Error: Model not configured: ${actualModelId}`,
          },
        ],
      };
    }

    // Get the full model name from config for API calls
    const modelConfig = config.models.find((m) => m.id === actualModelId);
    if (!modelConfig) {
      return {
        content: [
          {
            type: "text" as const,
            text: `Error: Model configuration not found: ${actualModelId}`,
          },
        ],
      };
    }

    // Add user message to conversation
    conversationManager.addMessage(actualConversationId, "user", message);

    // Get conversation history
    const history = conversationManager.getHistory(actualConversationId);

    // Send to OpenAI - use the full modelName for the API call
    const response = await client.chat(modelConfig.modelName, history, {
      reasoning,
      provider: modelConfig.provider,
    });

    // Add assistant response to conversation
    conversationManager.addMessage(actualConversationId, "assistant", response.content);

    logger.info("Chat completed", {
      conversationId: actualConversationId,
      modelId: actualModelId,
    });

    return {
      content: [
        {
          type: "text" as const,
          text: JSON.stringify({
            conversationId: actualConversationId,
            response: response.content,
            reasoning: response.reasoning,
            modelId: actualModelId,
          }),
        },
      ],
    };
  } catch (error) {
    const errorMessage = error instanceof Error ? error.message : String(error);
    logger.error("Chat tool error", error instanceof Error ? error : new Error(errorMessage));
    return {
      content: [
        {
          type: "text" as const,
          text: `Error: ${errorMessage}`,
        },
      ],
    };
  }
}
```
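
For orientation, the conversation-store contract the handler depends on can be inferred from its call sites. The sketch below is that inference only; the real declaration lives elsewhere in the codebase and may differ:

```typescript
// Inferred from the handler's usage of conversationManager above; not the
// actual declaration, which is not shown in this reference.
interface ConversationManagerLike {
  getConversation(id: string): { modelId: string } | undefined;
  createConversation(modelId: string): string; // returns the new id
  cloneConversation(id: string, newModelId: string): string; // returns the clone's id
  addMessage(id: string, role: "user" | "assistant", content: string): void;
  getHistory(id: string): { role: "user" | "assistant"; content: string }[];
}
```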
- `src/mcp-tools.ts:17-42` (schema): Tool schema definition, including the title, the detailed description with usage examples, and the Zod-validated input schema for the `chat` tool parameters.

```typescript
{
  title: "Chat with Another LLM Model",
  description:
    'Send a message to an available LLM for help, second opinions, or brainstorming; start new conversations, continue existing ones, or switch models mid-chat. In the first message you shall provide as much context as possible, since the model has no idea of the problem.\n\nExample workflow:\n1. chat(message: "hello", modelId: "gpt-5-mini") → conversationId: "abc1"\n2. chat(message: "follow-up", conversationId: "abc1") → conversationId: "abc1" (continues)\n3. chat(message: "same question", conversationId: "abc1", modelId: "deepseek-r1") → conversationId: "xyz9" (cloned with new model)',
  inputSchema: z.object({
    message: z.string().describe("The question or request to send—be clear and specific."),
    conversationId: z
      .string()
      .optional()
      .describe(
        "ID of the conversation to continue; omit to start a new one. Use the conversationId from prior responses to keep discussing the same topic."
      ),
    modelId: z
      .string()
      .optional()
      .describe(
        "ID of model to use (call list_models); omitted = default model. To switch models, pass a different modelId with your conversationId — you'll get a new conversationId with the conversation cloned to the new model."
      ),
    reasoning: z
      .boolean()
      .optional()
      .describe(
        "Set true to have the model show its reasoning steps, useful for complex problems."
      ),
  }),
}
```
- `src/mcp-tools.ts:15-155` (registration): Registers the `chat` tool on the MCP server via `server.registerTool`, passing the tool name, the schema object, and the handler function. The schema and handler bodies are identical to the two references above, so they are elided here:

```typescript
server.registerTool(
  "chat",
  {
    // Tool schema: identical to src/mcp-tools.ts:17-42 above.
    /* ... */
  },
  async ({ message, conversationId, modelId, reasoning }) => {
    // Handler body: identical to src/mcp-tools.ts:43-154 above.
    /* ... */
  }
);
```
- `src/types.ts:33-38` (schema): TypeScript interface defining the `ChatRequest` type, matching the Zod schema for `chat` tool inputs.

```typescript
export interface ChatRequest {
  message: string;
  conversationId?: string;
  modelId?: string;
  reasoning?: boolean;
}
```
- `src/openai-client.ts:80-142` (helper): `OpenAIClient.chat`, called by the `chat` tool handler to perform the actual LLM chat completion request.

```typescript
async chat(
  modelId: string,
  messages: ChatMessage[],
  options?: ChatOptions
): Promise<{ content: string; reasoning?: string }> {
  try {
    logger.debug("Sending chat request to OpenAI", {
      model: modelId,
      messageCount: messages.length,
      reasoning: options?.reasoning,
    });

    const openaiMessages = messages.map((msg) => ({
      role: msg.role as "user" | "assistant",
      content: msg.content,
    }));

    const client = this.getClient();

    // Build provider-specific reasoning params
    const reasoningParams = this.buildReasoningParams(
      options?.provider,
      options?.reasoning ?? false
    );

    const response = await client.chat.completions.create({
      model: modelId,
      messages: openaiMessages,
      ...reasoningParams,
    });

    const textContent = response.choices[0]?.message?.content;
    if (!textContent) {
      throw new Error("No content in response from OpenAI");
    }

    logger.debug("Received response from OpenAI", {
      model: modelId,
      tokens: response.usage?.total_tokens,
    });

    // Extract reasoning if present and requested
    let reasoning: string | undefined;
    if (options?.reasoning) {
      reasoning = this.extractReasoning(
        response.choices[0].message as unknown as Record<string, unknown>
      );
    }

    return {
      content: textContent,
      reasoning,
    };
  } catch (error) {
    const errorMessage = error instanceof Error ? error.message : String(error);
    logger.error("OpenAI API error", {
      model: modelId,
      error: errorMessage,
    });
    throw new Error(`Failed to get response from OpenAI: ${errorMessage}`);
  }
}
```
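
A minimal usage sketch of this method, assuming a configured `OpenAIClient` instance. Note that the handler passes `modelConfig.modelName` (the provider-facing model name), not the short model id; the model name and `provider` value below are illustrative, since valid values come from the model config and are not shown in this reference:

```typescript
// Assumes `client` is a configured OpenAIClient instance. The model name
// and provider id are illustrative guesses, not confirmed by this reference.
const history = [
  { role: "user" as const, content: "Compare LRU and LFU cache eviction." },
];

const { content, reasoning } = await client.chat("deepseek-reasoner", history, {
  reasoning: true, // ask for reasoning extraction, provider permitting
  provider: "deepseek",
});

console.log(content);
if (reasoning) {
  console.log("Reasoning:", reasoning);
}
```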