Glama
git-fabric

@git-fabric/chat

Official
by git-fabric

chat_message_send

Send a message in an existing chat session to get a Claude response. Maintains full conversation history, stores both user and assistant messages, and returns the assistant reply with token usage details.

Instructions

Send a message in an existing session and get a Claude response. Reconstructs full conversation history for the API call. Stores both user message and assistant response. Returns the assistant reply with token usage.

Input Schema

Name       Required  Description                                  Default
sessionId  Yes       UUID of the session to send the message in.  -
content    Yes       Message content from the user.               -
maxTokens  No        Maximum tokens in the response.              8192

Implementation Reference

  • The sendMessage function implements the core logic for chat_message_send. It loads the session, stores the user message, embeds it in Qdrant, builds the conversation history, calls the Anthropic completion endpoint, stores the assistant response, and returns the result with token usage.
    export async function sendMessage(
      adapter: ChatAdapter,
      sessionId: string,
      content: string,
      maxTokens = 8192,
    ): Promise<SendResult> {
      // Load session with current history
      const session = await adapter.getSession(sessionId);
      if (session.state === "archived") {
        throw new Error(
          `Session ${sessionId} is archived. Resume it or create a new session.`,
        );
      }
    
      // Store the user message first
      const userMsg = await adapter.addMessage({
        sessionId,
        role: "user",
        content,
      });
    
      // Embed and store user message in Qdrant (best-effort)
      try {
        await adapter.embedAndStore(userMsg);
      } catch {
        // Non-fatal: semantic search degrades gracefully
      }
    
      // Build message history for Anthropic
      // Include all prior messages + the new user message
      const history: CompletionMessage[] = [
        ...session.messages
          .filter((m) => m.role === "user" || m.role === "assistant")
          .map((m) => ({ role: m.role as "user" | "assistant", content: m.content })),
        { role: "user", content },
      ];
    
      // Complete
      const result = await adapter.complete(history, {
        model: session.model,
        systemPrompt: session.systemPrompt,
        maxTokens,
      });
    
      // Store assistant response
      const assistantMsg = await adapter.addMessage({
        sessionId,
        role: "assistant",
        content: result.content,
        model: result.model,
        inputTokens: result.inputTokens,
        outputTokens: result.outputTokens,
      });
    
      // Embed and store assistant message (best-effort)
      try {
        await adapter.embedAndStore(assistantMsg);
      } catch {
        // Non-fatal
      }
    
      return {
        messageId: assistantMsg.id,
        role: "assistant",
        content: result.content,
        inputTokens: result.inputTokens,
        outputTokens: result.outputTokens,
        model: result.model,
      };
    }
  • src/app.ts:176-204 (registration)
    Tool registration for chat_message_send, including name, description, input schema (sessionId, content, maxTokens), and the execute handler that delegates to layers.messages.sendMessage.
      name: "chat_message_send",
      description:
        "Send a message in an existing session and get a Claude response. Reconstructs full conversation history for the API call. Stores both user message and assistant response. Returns the assistant reply with token usage.",
      inputSchema: {
        type: "object",
        properties: {
          sessionId: {
            type: "string",
            description: "UUID of the session to send the message in.",
          },
          content: {
            type: "string",
            description: "Message content from the user.",
          },
          maxTokens: {
            type: "number",
            description: "Maximum tokens in the response. Default: 8192.",
          },
        },
        required: ["sessionId", "content"],
      },
      execute: async (args) =>
        layers.messages.sendMessage(
          adapter,
          args.sessionId as string,
          args.content as string,
          args.maxTokens as number | undefined,
        ),
    },
  • SendResult interface defines the output schema for chat_message_send, including messageId, role, content, inputTokens, outputTokens, and model fields.
    export interface SendResult {
      messageId: string;
      role: "assistant";
      content: string;
      inputTokens: number;
      outputTokens: number;
      model: ChatModel;
    }
  • ChatAdapter interface defines the contract that the sendMessage handler uses, including getSession, addMessage, embedAndStore, and complete methods for interacting with storage and the Anthropic API.
    export interface ChatAdapter {
      // Session CRUD
      createSession(opts: {
        systemPrompt?: string;
        project?: string;
        model?: ChatModel;
        title?: string;
      }): Promise<ChatSession>;
    
      listSessions(opts: {
        project?: string;
        limit: number;
        state: "active" | "archived" | "all";
      }): Promise<ChatSession[]>;
    
      getSession(sessionId: string): Promise<ChatSession & { messages: ChatMessage[] }>;
    
      updateSession(
        sessionId: string,
        patch: Partial<Pick<ChatSession, "state" | "title">>,
      ): Promise<ChatSession>;
    
      deleteSession(sessionId: string): Promise<void>;
    
      // Messages
      getMessages(
        sessionId: string,
        limit: number,
        offset: number,
      ): Promise<ChatMessage[]>;
    
      addMessage(msg: Omit<ChatMessage, "id" | "timestamp">): Promise<ChatMessage>;
    
      // LLM
      complete(
        messages: CompletionMessage[],
        opts: { model: ChatModel; systemPrompt?: string; maxTokens: number },
      ): Promise<CompletionResult>;
    
      // Semantic search
      embedAndStore(message: ChatMessage): Promise<void>;
      searchMessages(
        query: number[],
        opts: { project?: string; sessionId?: string; limit: number },
      ): Promise<SearchResult[]>;
      embed(text: string): Promise<number[]>;
    
      // Stats / health
      getStats(): Promise<ChatStats>;
      health(): Promise<ChatHealth>;
    }
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses key behaviors: reconstructs conversation history, stores both user and assistant messages, and returns token usage. However, it doesn't mention rate limits, authentication requirements, error conditions, or whether this is a read-only or mutating operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each earn their place. The first sentence states the core action and context, while the second explains key implementation details and return values. No wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters, 100% schema coverage, but no annotations and no output schema, the description is adequate but has gaps. It explains the core functionality well but doesn't address error handling, authentication, or provide examples of the return format beyond mentioning 'assistant reply with token usage'.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds minimal value beyond what's in the schema: it mentions 'message content', which aligns with the content parameter, but doesn't provide additional context about parameter interactions or usage patterns.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Send a message'), the resource ('in an existing session'), and the outcome ('get a Claude response'). It distinguishes from siblings like chat_session_create (creates new sessions) and chat_message_list (lists messages) by focusing on message sending within existing sessions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'in an existing session' and mentioning conversation history reconstruction, which suggests this is for ongoing conversations rather than starting new ones. However, it doesn't explicitly state when NOT to use this tool or name alternatives like chat_session_create for new conversations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

