@git-fabric/chat (Official, by git-fabric)

chat_message_send

Send a message in an existing chat session to get a Claude response. Maintains full conversation history, stores both user and assistant messages, and returns the assistant reply with token usage details.

Instructions

Send a message in an existing session and get a Claude response. Reconstructs full conversation history for the API call. Stores both user message and assistant response. Returns the assistant reply with token usage.

Input Schema

Name        Required  Description                                  Default
sessionId   Yes       UUID of the session to send the message in.  —
content     Yes       Message content from the user.               —
maxTokens   No        Maximum tokens in the response.              8192
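For concreteness, here is a sketch of an arguments object that satisfies this schema; the UUID and message text are illustrative placeholders, not real values:

```typescript
// Illustrative arguments for chat_message_send; the sessionId is a
// placeholder UUID, not a real session.
const args = {
  sessionId: "123e4567-e89b-12d3-a456-426614174000",
  content: "Summarize our discussion so far.",
  maxTokens: 1024, // optional; the tool defaults to 8192 when omitted
};

// The schema marks sessionId and content as required strings.
const required = ["sessionId", "content"] as const;
const valid = required.every((k) => typeof args[k] === "string");
console.log(valid); // → true
```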

Implementation Reference

  • The sendMessage function implements the core logic for chat_message_send. It loads the session, stores the user message, embeds it in Qdrant, rebuilds the conversation history, calls the Anthropic completion API, stores the assistant response, and returns the result with token usage.
    export async function sendMessage(
      adapter: ChatAdapter,
      sessionId: string,
      content: string,
      maxTokens = 8192,
    ): Promise<SendResult> {
      // Load session with current history
      const session = await adapter.getSession(sessionId);
      if (session.state === "archived") {
        throw new Error(
          `Session ${sessionId} is archived. Resume it or create a new session.`,
        );
      }
    
      // Store the user message first
      const userMsg = await adapter.addMessage({
        sessionId,
        role: "user",
        content,
      });
    
      // Embed and store user message in Qdrant (best-effort)
      try {
        await adapter.embedAndStore(userMsg);
      } catch {
        // Non-fatal: semantic search degrades gracefully
      }
    
      // Build message history for Anthropic
      // Include all prior messages + the new user message
      const history: CompletionMessage[] = [
        ...session.messages
          .filter((m) => m.role === "user" || m.role === "assistant")
          .map((m) => ({ role: m.role as "user" | "assistant", content: m.content })),
        { role: "user", content },
      ];
    
      // Complete
      const result = await adapter.complete(history, {
        model: session.model,
        systemPrompt: session.systemPrompt,
        maxTokens,
      });
    
      // Store assistant response
      const assistantMsg = await adapter.addMessage({
        sessionId,
        role: "assistant",
        content: result.content,
        model: result.model,
        inputTokens: result.inputTokens,
        outputTokens: result.outputTokens,
      });
    
      // Embed and store assistant message (best-effort)
      try {
        await adapter.embedAndStore(assistantMsg);
      } catch {
        // Non-fatal
      }
    
      return {
        messageId: assistantMsg.id,
        role: "assistant",
        content: result.content,
        inputTokens: result.inputTokens,
        outputTokens: result.outputTokens,
        model: result.model,
      };
    }
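The history-building step above can be exercised in isolation. This sketch uses made-up messages; the "system" entry is a hypothetical example of a role that the filter drops before the completion call, exactly as the code above does:

```typescript
type Msg = { role: string; content: string };

// Prior session messages; the "system" entry is a made-up example of a
// role that the filter excludes from the completion request.
const prior: Msg[] = [
  { role: "user", content: "Hi" },
  { role: "assistant", content: "Hello! How can I help?" },
  { role: "system", content: "internal bookkeeping" },
];

// Same shape as the history array built inside sendMessage:
// all prior user/assistant turns, then the new user message.
const history = [
  ...prior
    .filter((m) => m.role === "user" || m.role === "assistant")
    .map((m) => ({ role: m.role as "user" | "assistant", content: m.content })),
  { role: "user" as const, content: "What's next?" },
];

console.log(history.length); // → 3 (two prior turns + the new user message)
```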
  • src/app.ts:176-204 (registration)
    Tool registration for chat_message_send, including name, description, input schema (sessionId, content, maxTokens), and the execute handler that delegates to layers.messages.sendMessage.
      name: "chat_message_send",
      description:
        "Send a message in an existing session and get a Claude response. Reconstructs full conversation history for the API call. Stores both user message and assistant response. Returns the assistant reply with token usage.",
      inputSchema: {
        type: "object",
        properties: {
          sessionId: {
            type: "string",
            description: "UUID of the session to send the message in.",
          },
          content: {
            type: "string",
            description: "Message content from the user.",
          },
          maxTokens: {
            type: "number",
            description: "Maximum tokens in the response. Default: 8192.",
          },
        },
        required: ["sessionId", "content"],
      },
      execute: async (args) =>
        layers.messages.sendMessage(
          adapter,
          args.sessionId as string,
          args.content as string,
          args.maxTokens as number | undefined,
        ),
    },
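Over the wire, an MCP client would reach this registration through a tools/call request. The following is a sketch of the JSON-RPC body assuming standard MCP framing; the id and sessionId are placeholders:

```typescript
// Hypothetical JSON-RPC "tools/call" request targeting the registration
// above; exact framing depends on the client and transport.
const request = {
  jsonrpc: "2.0",
  id: 7,
  method: "tools/call",
  params: {
    name: "chat_message_send",
    arguments: {
      sessionId: "123e4567-e89b-12d3-a456-426614174000", // placeholder
      content: "What did we decide about the schema?",
      // maxTokens omitted: the handler falls back to 8192
    },
  },
};
console.log(request.params.name); // → "chat_message_send"
```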
  • SendResult interface defines the output schema for chat_message_send, including messageId, role, content, inputTokens, outputTokens, and model fields.
    export interface SendResult {
      messageId: string;
      role: "assistant";
      content: string;
      inputTokens: number;
      outputTokens: number;
      model: ChatModel;
    }
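An illustrative value conforming to SendResult; the ids, token counts, and model string are placeholders rather than real API output:

```typescript
// Placeholder SendResult value for illustration only.
const result = {
  messageId: "a1b2c3d4-0000-0000-0000-000000000000", // made-up message id
  role: "assistant" as const,
  content: "Here is a short summary of the session so far.",
  inputTokens: 412,
  outputTokens: 96,
  model: "claude-placeholder-model", // stands in for a concrete ChatModel value
};

// Total tokens consumed by the turn:
console.log(result.inputTokens + result.outputTokens); // → 508
```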
  • ChatAdapter interface defines the contract that the sendMessage handler uses, including getSession, addMessage, embedAndStore, and complete methods for interacting with storage and the Anthropic API.
    export interface ChatAdapter {
      // Session CRUD
      createSession(opts: {
        systemPrompt?: string;
        project?: string;
        model?: ChatModel;
        title?: string;
      }): Promise<ChatSession>;
    
      listSessions(opts: {
        project?: string;
        limit: number;
        state: "active" | "archived" | "all";
      }): Promise<ChatSession[]>;
    
      getSession(sessionId: string): Promise<ChatSession & { messages: ChatMessage[] }>;
    
      updateSession(
        sessionId: string,
        patch: Partial<Pick<ChatSession, "state" | "title">>,
      ): Promise<ChatSession>;
    
      deleteSession(sessionId: string): Promise<void>;
    
      // Messages
      getMessages(
        sessionId: string,
        limit: number,
        offset: number,
      ): Promise<ChatMessage[]>;
    
      addMessage(msg: Omit<ChatMessage, "id" | "timestamp">): Promise<ChatMessage>;
    
      // LLM
      complete(
        messages: CompletionMessage[],
        opts: { model: ChatModel; systemPrompt?: string; maxTokens: number },
      ): Promise<CompletionResult>;
    
      // Semantic search
      embedAndStore(message: ChatMessage): Promise<void>;
      searchMessages(
        query: number[],
        opts: { project?: string; sessionId?: string; limit: number },
      ): Promise<SearchResult[]>;
      embed(text: string): Promise<number[]>;
    
      // Stats / health
      getStats(): Promise<ChatStats>;
      health(): Promise<ChatHealth>;
    }
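To exercise sendMessage without Postgres, Qdrant, or the Anthropic API, the four ChatAdapter methods it touches can be faked in memory. This is a test-double sketch, not part of the package; the stub model string and token accounting are made up:

```typescript
type Role = "user" | "assistant";
interface StoredMessage { id: string; role: Role; content: string }

const messages: StoredMessage[] = [];
let nextId = 1;

// Covers only getSession, addMessage, embedAndStore, and complete;
// the remaining ChatAdapter methods are not needed by sendMessage.
const stubAdapter = {
  async getSession(sessionId: string) {
    return { id: sessionId, state: "active", model: "stub-model", messages };
  },
  async addMessage(msg: { sessionId: string; role: Role; content: string }) {
    const stored = { id: `m${nextId++}`, role: msg.role, content: msg.content };
    messages.push(stored);
    return stored;
  },
  async embedAndStore(_msg: StoredMessage) {
    // no-op: sendMessage treats embedding failures as non-fatal anyway
  },
  async complete(history: { role: Role; content: string }[]) {
    const last = history[history.length - 1];
    return {
      content: `echo: ${last.content}`, // deterministic fake reply
      model: "stub-model",
      inputTokens: history.length, // made-up accounting
      outputTokens: 1,
    };
  },
};
```

Passing a double like this to sendMessage makes the full store → complete → store flow deterministic and cheap to assert against.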
