
Chat with Another LLM Model

chat

Send messages to LLMs for help, brainstorming, or second opinions. Start new conversations, continue existing ones, or switch models while maintaining context.

Instructions

Send a message to an available LLM for help, second opinions, or brainstorming; start new conversations, continue existing ones, or switch models mid-chat. Provide as much context as possible in the first message, since the model has no prior knowledge of your problem.

Example workflow:

  1. chat(message: "hello", modelId: "gpt-5-mini") → conversationId: "abc1"

  2. chat(message: "follow-up", conversationId: "abc1") → conversationId: "abc1" (continues)

  3. chat(message: "same question", conversationId: "abc1", modelId: "deepseek-r1") → conversationId: "xyz9" (cloned with new model)
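The cloning semantics in step 3 can be sketched with a hypothetical in-memory stub (this is not the real handler, which appears under Implementation Reference below; IDs and the `conv-` prefix are illustrative):

```typescript
// Hypothetical stub mirroring the workflow above: continuing a conversation
// keeps its ID, while switching models clones it under a new ID.
type ChatResult = { conversationId: string; response: string };

const conversations = new Map<string, string>(); // conversationId -> modelId
let nextId = 0;

function chat(args: { message: string; conversationId?: string; modelId?: string }): ChatResult {
  const { conversationId, modelId } = args;
  if (conversationId) {
    const existingModel = conversations.get(conversationId);
    if (modelId && modelId !== existingModel) {
      // Model switch: clone the conversation under a fresh ID.
      const cloneId = `conv-${nextId++}`;
      conversations.set(cloneId, modelId);
      return { conversationId: cloneId, response: "(cloned)" };
    }
    // Same model (or none given): continue under the same ID.
    return { conversationId, response: "(continued)" };
  }
  // No conversationId: start a new conversation.
  const id = `conv-${nextId++}`;
  conversations.set(id, modelId ?? "default-model");
  return { conversationId: id, response: "(new)" };
}

const first = chat({ message: "hello", modelId: "gpt-5-mini" });
const followUp = chat({ message: "follow-up", conversationId: first.conversationId });
const switched = chat({
  message: "same question",
  conversationId: first.conversationId,
  modelId: "deepseek-r1",
});
```

The key observable behavior: the second call returns the same `conversationId`, while the third returns a different one because the model changed.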

Input Schema

message (required): The question or request to send—be clear and specific.

conversationId (optional): ID of the conversation to continue; omit to start a new one. Use the conversationId from prior responses to keep discussing the same topic.

modelId (optional): ID of the model to use (see list_models); if omitted, the default model is used. To switch models, pass a different modelId with your conversationId — you'll get a new conversationId with the conversation cloned to the new model.

reasoning (optional): Set true to have the model show its reasoning steps; useful for complex problems.
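Two illustrative payloads (the IDs are placeholders, not real values): a minimal call needs only `message`; a model-switching call combines `conversationId` with a different `modelId`.

```json
[
  { "message": "What are the trade-offs of optimistic locking?" },
  { "message": "same question", "conversationId": "abc1", "modelId": "deepseek-r1" }
]
```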

Implementation Reference

  • The core handler logic for the 'chat' tool: it resolves or creates the conversation, selects the model and client, appends the user message, calls the OpenAI chat API, stores the assistant response, and returns JSON with conversationId, response, reasoning, and modelId.
    async ({ message, conversationId, modelId, reasoning }) => {
      try {
        logger.debug("Chat tool called", {
          hasConversationId: !!conversationId,
          modelId,
        });
    
        let actualConversationId: string;
        let actualModelId: string;
    
        // Determine conversation ID and model ID
        if (conversationId) {
          const existing = conversationManager.getConversation(conversationId);
          if (!existing) {
            return {
              content: [
                {
                  type: "text" as const,
                  text: `Error: Conversation not found: ${conversationId}`,
                },
              ],
            };
          }
    
          if (modelId && modelId !== existing.modelId) {
            // Clone conversation with new model
            actualConversationId = conversationManager.cloneConversation(conversationId, modelId);
            actualModelId = modelId;
          } else {
            // Continue existing conversation
            actualConversationId = conversationId;
            actualModelId = existing.modelId;
          }
        } else {
          // Create new conversation
          actualModelId = modelId || config.models[0].id;
          actualConversationId = conversationManager.createConversation(actualModelId);
        }
    
        // Validate model exists and get its full model name
        const client = openaiClients.get(actualModelId);
        if (!client) {
          return {
            content: [
              {
                type: "text" as const,
                text: `Error: Model not configured: ${actualModelId}`,
              },
            ],
          };
        }
    
        // Get the full model name from config for API calls
        const modelConfig = config.models.find((m) => m.id === actualModelId);
        if (!modelConfig) {
          return {
            content: [
              {
                type: "text" as const,
                text: `Error: Model configuration not found: ${actualModelId}`,
              },
            ],
          };
        }
    
        // Add user message to conversation
        conversationManager.addMessage(actualConversationId, "user", message);
    
        // Get conversation history
        const history = conversationManager.getHistory(actualConversationId);
    
        // Send to OpenAI - use the full modelName for the API call
        const response = await client.chat(modelConfig.modelName, history, {
          reasoning,
          provider: modelConfig.provider,
        });
    
        // Add assistant response to conversation
        conversationManager.addMessage(actualConversationId, "assistant", response.content);
    
        logger.info("Chat completed", {
          conversationId: actualConversationId,
          modelId: actualModelId,
        });
    
        return {
          content: [
            {
              type: "text" as const,
              text: JSON.stringify({
                conversationId: actualConversationId,
                response: response.content,
                reasoning: response.reasoning,
                modelId: actualModelId,
              }),
            },
          ],
        };
      } catch (error) {
        const errorMessage = error instanceof Error ? error.message : String(error);
        logger.error("Chat tool error", error instanceof Error ? error : new Error(errorMessage));
    
        return {
          content: [
            {
              type: "text" as const,
              text: `Error: ${errorMessage}`,
            },
          ],
        };
      }
    }
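The handler leans on a `conversationManager` whose implementation is not shown on this page. A minimal in-memory sketch, inferred only from the call sites above (`createConversation`, `getConversation`, `cloneConversation`, `addMessage`, `getHistory`), might look like the following; the real implementation may add persistence, eviction, or size limits:

```typescript
type Role = "user" | "assistant";
interface ChatMessage { role: Role; content: string; }
interface Conversation { modelId: string; messages: ChatMessage[]; }

// Inferred minimal sketch; interface shape comes from the handler's call
// sites, not from the actual ConversationManager source.
class InMemoryConversationManager {
  private conversations = new Map<string, Conversation>();
  private counter = 0;

  createConversation(modelId: string): string {
    const id = `conv-${++this.counter}`;
    this.conversations.set(id, { modelId, messages: [] });
    return id;
  }

  getConversation(id: string): Conversation | undefined {
    return this.conversations.get(id);
  }

  cloneConversation(id: string, newModelId: string): string {
    const source = this.conversations.get(id);
    if (!source) throw new Error(`Conversation not found: ${id}`);
    const cloneId = this.createConversation(newModelId);
    // Deep-copy the history so the new model sees the full prior context
    // and the two conversations evolve independently afterwards.
    this.conversations.get(cloneId)!.messages = source.messages.map((m) => ({ ...m }));
    return cloneId;
  }

  addMessage(id: string, role: Role, content: string): void {
    const conv = this.conversations.get(id);
    if (!conv) throw new Error(`Conversation not found: ${id}`);
    conv.messages.push({ role, content });
  }

  getHistory(id: string): ChatMessage[] {
    return this.conversations.get(id)?.messages ?? [];
  }
}
```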
  • Tool schema definition including title, detailed description with usage examples, and Zod-validated input schema for 'chat' tool parameters.
    {
      title: "Chat with Another LLM Model",
      description:
        'Send a message to an available LLM for help, second opinions, or brainstorming; start new conversations, continue existing ones, or switch models mid-chat. Provide as much context as possible in the first message, since the model has no prior knowledge of the problem.\n\nExample workflow:\n1. chat(message: "hello", modelId: "gpt-5-mini") → conversationId: "abc1"\n2. chat(message: "follow-up", conversationId: "abc1") → conversationId: "abc1" (continues)\n3. chat(message: "same question", conversationId: "abc1", modelId: "deepseek-r1") → conversationId: "xyz9" (cloned with new model)',
      inputSchema: z.object({
        message: z.string().describe("The question or request to send—be clear and specific."),
        conversationId: z
          .string()
          .optional()
          .describe(
            "ID of the conversation to continue; omit to start a new one. Use the conversationId from prior responses to keep discussing the same topic."
          ),
        modelId: z
          .string()
          .optional()
          .describe(
            "ID of model to use (call list_models); omitted = default model. To switch models, pass a different modelId with your conversationId — you'll get a new conversationId with the conversation cloned to the new model."
          ),
        reasoning: z
          .boolean()
          .optional()
          .describe(
            "Set true to have the model show its reasoning steps, useful for complex problems."
          ),
      }),
    },
  • Registers the 'chat' tool on the MCP server using server.registerTool with name, schema, and handler function.
    server.registerTool(
      "chat",
      {
        title: "Chat with Another LLM Model",
        description:
          'Send a message to an available LLM for help, second opinions, or brainstorming; start new conversations, continue existing ones, or switch models mid-chat. Provide as much context as possible in the first message, since the model has no prior knowledge of the problem.\n\nExample workflow:\n1. chat(message: "hello", modelId: "gpt-5-mini") → conversationId: "abc1"\n2. chat(message: "follow-up", conversationId: "abc1") → conversationId: "abc1" (continues)\n3. chat(message: "same question", conversationId: "abc1", modelId: "deepseek-r1") → conversationId: "xyz9" (cloned with new model)',
        inputSchema: z.object({
          message: z.string().describe("The question or request to send—be clear and specific."),
          conversationId: z
            .string()
            .optional()
            .describe(
              "ID of the conversation to continue; omit to start a new one. Use the conversationId from prior responses to keep discussing the same topic."
            ),
          modelId: z
            .string()
            .optional()
            .describe(
              "ID of model to use (call list_models); omitted = default model. To switch models, pass a different modelId with your conversationId — you'll get a new conversationId with the conversation cloned to the new model."
            ),
          reasoning: z
            .boolean()
            .optional()
            .describe(
              "Set true to have the model show its reasoning steps, useful for complex problems."
            ),
        }),
      },
      async ({ message, conversationId, modelId, reasoning }) => {
        try {
          logger.debug("Chat tool called", {
            hasConversationId: !!conversationId,
            modelId,
          });
    
          let actualConversationId: string;
          let actualModelId: string;
    
          // Determine conversation ID and model ID
          if (conversationId) {
            const existing = conversationManager.getConversation(conversationId);
            if (!existing) {
              return {
                content: [
                  {
                    type: "text" as const,
                    text: `Error: Conversation not found: ${conversationId}`,
                  },
                ],
              };
            }
    
            if (modelId && modelId !== existing.modelId) {
              // Clone conversation with new model
              actualConversationId = conversationManager.cloneConversation(conversationId, modelId);
              actualModelId = modelId;
            } else {
              // Continue existing conversation
              actualConversationId = conversationId;
              actualModelId = existing.modelId;
            }
          } else {
            // Create new conversation
            actualModelId = modelId || config.models[0].id;
            actualConversationId = conversationManager.createConversation(actualModelId);
          }
    
          // Validate model exists and get its full model name
          const client = openaiClients.get(actualModelId);
          if (!client) {
            return {
              content: [
                {
                  type: "text" as const,
                  text: `Error: Model not configured: ${actualModelId}`,
                },
              ],
            };
          }
    
          // Get the full model name from config for API calls
          const modelConfig = config.models.find((m) => m.id === actualModelId);
          if (!modelConfig) {
            return {
              content: [
                {
                  type: "text" as const,
                  text: `Error: Model configuration not found: ${actualModelId}`,
                },
              ],
            };
          }
    
          // Add user message to conversation
          conversationManager.addMessage(actualConversationId, "user", message);
    
          // Get conversation history
          const history = conversationManager.getHistory(actualConversationId);
    
          // Send to OpenAI - use the full modelName for the API call
          const response = await client.chat(modelConfig.modelName, history, {
            reasoning,
            provider: modelConfig.provider,
          });
    
          // Add assistant response to conversation
          conversationManager.addMessage(actualConversationId, "assistant", response.content);
    
          logger.info("Chat completed", {
            conversationId: actualConversationId,
            modelId: actualModelId,
          });
    
          return {
            content: [
              {
                type: "text" as const,
                text: JSON.stringify({
                  conversationId: actualConversationId,
                  response: response.content,
                  reasoning: response.reasoning,
                  modelId: actualModelId,
                }),
              },
            ],
          };
        } catch (error) {
          const errorMessage = error instanceof Error ? error.message : String(error);
          logger.error("Chat tool error", error instanceof Error ? error : new Error(errorMessage));
    
          return {
            content: [
              {
                type: "text" as const,
                text: `Error: ${errorMessage}`,
              },
            ],
          };
        }
      }
    );
  • TypeScript interface defining the ChatRequest type, matching the Zod schema for chat tool inputs.
    export interface ChatRequest {
      message: string;
      conversationId?: string;
      modelId?: string;
      reasoning?: boolean;
    }
  • OpenAIClient.chat method called by the chat tool handler to perform the actual LLM chat completion request.
    async chat(
      modelId: string,
      messages: ChatMessage[],
      options?: ChatOptions
    ): Promise<{ content: string; reasoning?: string }> {
      try {
        logger.debug("Sending chat request to OpenAI", {
          model: modelId,
          messageCount: messages.length,
          reasoning: options?.reasoning,
        });
    
        const openaiMessages = messages.map((msg) => ({
          role: msg.role as "user" | "assistant",
          content: msg.content,
        }));
    
        const client = this.getClient();
    
        // Build provider-specific reasoning params
        const reasoningParams = this.buildReasoningParams(
          options?.provider,
          options?.reasoning ?? false
        );
    
        const response = await client.chat.completions.create({
          model: modelId,
          messages: openaiMessages,
          ...reasoningParams,
        });
    
        const textContent = response.choices[0]?.message?.content;
        if (!textContent) {
          throw new Error("No content in response from OpenAI");
        }
    
        logger.debug("Received response from OpenAI", {
          model: modelId,
          tokens: response.usage?.total_tokens,
        });
    
        // Extract reasoning if present and requested
        let reasoning: string | undefined;
        if (options?.reasoning) {
          reasoning = this.extractReasoning(
            response.choices[0].message as unknown as Record<string, unknown>
          );
        }
    
        return {
          content: textContent,
          reasoning,
        };
      } catch (error) {
        const errorMessage = error instanceof Error ? error.message : String(error);
        logger.error("OpenAI API error", {
          model: modelId,
          error: errorMessage,
        });
    
        throw new Error(`Failed to get response from OpenAI: ${errorMessage}`);
      }
    }
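The `chat` method above delegates to an `extractReasoning` helper that is not shown on this page. A plausible sketch, under the assumption that it probes provider-specific, non-standard response fields (DeepSeek-style APIs commonly expose `reasoning_content`; other gateways use `reasoning` — both field names here are assumptions, not confirmed from the source):

```typescript
// Hedged sketch of a reasoning extractor: providers attach reasoning text
// under different non-standard keys on the chat completion message, so we
// probe a small list of candidate field names and return the first match.
function extractReasoning(message: Record<string, unknown>): string | undefined {
  for (const key of ["reasoning_content", "reasoning"]) {
    const value = message[key];
    if (typeof value === "string" && value.length > 0) return value;
  }
  return undefined;
}
```

This also explains the `as unknown as Record<string, unknown>` cast in the caller: the OpenAI SDK's message type does not declare these vendor extensions, so the helper must treat the message as an untyped record.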
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the need to provide context in first messages, conversation persistence via conversationId, model switching capabilities, and cloning behavior when changing models. It doesn't cover rate limits or error handling, but provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose, followed by a helpful example workflow. Every sentence adds value, though the example is somewhat lengthy. The structure effectively communicates both the 'what' and 'how' without unnecessary repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, conversation management, model switching) and no annotations or output schema, the description provides substantial context about behavior, usage patterns, and workflow. It could benefit from mentioning response format or error cases, but covers the essential operational aspects well for a chat tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds minimal parameter semantics beyond the schema—it mentions providing context in the first message and shows parameter usage in the example workflow, but doesn't significantly enhance understanding of individual parameters beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('send a message to an available LLM') and resources ('LLM model'), and distinguishes it from sibling tools by focusing on interactive chat rather than listing models or accessing history. It explicitly mentions the core functions: getting help, second opinions, brainstorming, and managing conversations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives: it mentions starting new conversations, continuing existing ones, or switching models mid-chat, and references sibling tools like 'list_models' for model selection. The example workflow demonstrates practical scenarios and transitions between different use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
