
chat_via_lucairn

Send chat requests through Lucairn's privacy gateway. PII is detected and replaced before reaching the upstream LLM, with cross-provider routing to Anthropic or OpenAI.

Instructions

Send a chat request through the Lucairn privacy gateway with cross-provider BYOK (Anthropic + OpenAI). PII is detected and replaced with placeholders before reaching the upstream LLM. The gateway picks the upstream provider based on the model parameter: claude-* / anthropic-* use ANTHROPIC_API_KEY; gpt-* / openai-* / o1-* / o3-* / o4-* use OPENAI_API_KEY. Wire format follows the Anthropic Messages API. Developer-tier responses contain raw placeholders; Pro and Enterprise tiers can enable automatic re-linking back to the original values.
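The model-prefix routing rule described above can be sketched as a small case-insensitive prefix check. This is an illustrative sketch only; the function name `pickProvider` is hypothetical and the gateway's actual routing implementation is not shown in this document.

```typescript
// Hypothetical sketch of the routing rule described above.
// Prefix matching is case-insensitive per the tool description.
type Provider = 'anthropic' | 'openai'

const ANTHROPIC_PREFIXES = ['claude-', 'anthropic-']
const OPENAI_PREFIXES = ['gpt-', 'openai-', 'o1-', 'o3-', 'o4-']

function pickProvider(model: string): Provider | null {
  const m = model.toLowerCase()
  if (ANTHROPIC_PREFIXES.some((p) => m.startsWith(p))) return 'anthropic'
  if (OPENAI_PREFIXES.some((p) => m.startsWith(p))) return 'openai'
  return null // unknown prefix: the gateway would reject the request
}
```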

Input Schema

- model (required): Model identifier. Routing rules: `claude-*` and `anthropic-*` route to Anthropic via ANTHROPIC_API_KEY; `gpt-*`, `openai-*`, `o1-*`, `o3-*`, and `o4-*` route to OpenAI via OPENAI_API_KEY. Examples: `claude-sonnet-4-6`, `gpt-4o-mini`, `o3-mini`. Set one or both of ANTHROPIC_API_KEY and OPENAI_API_KEY in your MCP client env for BYOK; matching is case-insensitive.
- max_tokens (required): Maximum tokens to generate in the response. Required by the Anthropic Messages API.
- messages (required): Conversation messages. Each item is { role: "user" | "assistant", content: string | array }.
- system (optional): System prompt. May be a string or an array of content blocks. Sanitization policy is per-API-key on the gateway side (sanitize or passthrough_audit).
- temperature (optional): Sampling temperature (0..1).
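A minimal, schema-valid arguments object for the tool might look like the following. The model name comes from the examples above; the message content is illustrative.

```typescript
// Example tool arguments satisfying the required fields above.
const exampleArgs = {
  model: 'claude-sonnet-4-6',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Summarize this email from jane.doe@example.com.' },
  ],
  // Optional fields:
  system: 'You are a concise assistant.',
  temperature: 0.2,
}
```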

Implementation Reference

  • The CallToolRequestSchema handler that executes the chat_via_lucairn tool logic. It validates the tool name, checks the input size cap, parses arguments via parseChatToolArgs, calls GatewayClient.sendMessage, and formats the result via formatToolResult.
    server.setRequestHandler(CallToolRequestSchema, async (request) => {
      const { name, arguments: args } = request.params
      if (name !== CHAT_TOOL_NAME) {
        return {
          isError: true,
          content: [
            { type: 'text', text: `Unknown tool: ${name}` },
          ],
        }
      }
      // Soft input cap (TOB-005): a malicious or buggy MCP client could
      // hand a 100MB messages[] and we'd JSON.stringify it before any
      // network shipping. Bound local memory at 1 MiB and surface a
      // structured tool error instead of crashing the process.
      if (exceedsInputCap(args)) {
        return {
          isError: true,
          content: [
            {
              type: 'text',
              text: `Tool input exceeds max size (${MAX_INPUT_BYTES} bytes). Reduce messages[] or system prompt size.`,
            },
          ],
        }
      }
      let parsed: ChatToolInput
      try {
        parsed = parseChatToolArgs(args)
      } catch (err) {
        const msg = err instanceof Error ? err.message : String(err)
        return {
          isError: true,
          content: [{ type: 'text', text: msg }],
        }
      }
      try {
        const resp = await client.sendMessage(parsed)
        return formatToolResult(resp)
      } catch (err) {
        if (err instanceof GatewayError) {
          return gatewayErrorToToolResult(err)
        }
        const msg = err instanceof Error ? err.message : String(err)
        return {
          isError: true,
          content: [{ type: 'text', text: `Internal error: ${msg}` }],
        }
      }
    })
    
    return server
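The handler above calls `exceedsInputCap`, which is referenced but not shown. A minimal sketch consistent with the inline comment (a 1 MiB soft cap measured after JSON serialization) might look like this; the real implementation may differ.

```typescript
// Hypothetical sketch of the input-size guard referenced in the
// handler above; the actual exceedsInputCap is not shown in this doc.
const MAX_INPUT_BYTES = 1024 * 1024 // 1 MiB soft cap (TOB-005)

function exceedsInputCap(args: unknown): boolean {
  let serialized: string
  try {
    serialized = JSON.stringify(args) ?? ''
  } catch {
    // Circular or otherwise non-serializable input: treat as over cap.
    return true
  }
  // Measure UTF-8 bytes, not UTF-16 code units.
  return new TextEncoder().encode(serialized).length > MAX_INPUT_BYTES
}
```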
  • Formats the gateway's Anthropic response into the MCP CallToolResult shape (collapses content blocks, appends compliance metadata trailer).
    export function formatToolResult(resp: AnthropicResponseBody): {
      content: Array<{ type: 'text'; text: string }>
    } {
      const text = resp.content
        .filter((b) => b.type === 'text' && typeof b.text === 'string')
        .map((b) => b.text)
        .join('')
    
      const compliance = resp.metadata?.dsa_compliance
      const trailer =
        compliance && compliance.veil_summary_url
          ? `\n\n_Lucairn certificate: ${compliance.veil_summary_url}_`
          : ''
    
      return {
        content: [{ type: 'text', text: text + trailer }],
      }
    }
  • Validates and narrows raw MCP tool arguments into a ChatToolInput object. Called by the handler before forwarding to the gateway.
    export function parseChatToolArgs(raw: unknown): ChatToolInput {
      if (!raw || typeof raw !== 'object') {
        throw new Error('Tool arguments must be an object.')
      }
      const args = raw as Record<string, unknown>
      if (typeof args.model !== 'string' || args.model.length === 0) {
        throw new Error('Tool argument `model` is required and must be a string.')
      }
      if (typeof args.max_tokens !== 'number' || args.max_tokens <= 0) {
        throw new Error(
          'Tool argument `max_tokens` is required and must be a positive number.',
        )
      }
      if (!Array.isArray(args.messages) || args.messages.length === 0) {
        throw new Error(
          'Tool argument `messages` is required and must be a non-empty array.',
        )
      }
      // Pass through; deeper validation lives on the gateway side
      // (anthropicRequest unmarshal + validate at mcp_handler.go:84-99).
      return args as unknown as ChatToolInput
    }
  • ChatToolInput interface — the input type accepted by the chat_via_lucairn tool.
    /** Inputs accepted by the chat_via_lucairn MCP tool. */
    export interface ChatToolInput {
      /** Required. Anthropic model identifier (e.g. "claude-sonnet-4-6"). */
      model: string
      /** Required. Maximum tokens to generate in the response. */
      max_tokens: number
      /** Required. Conversation messages array. */
      messages: AnthropicMessage[]
      /** Optional. System prompt — may be a string or an array of content blocks. */
      system?: string | Array<{ type: string; text: string; [k: string]: unknown }>
      /** Optional. Sampling temperature (0..1). */
      temperature?: number
    }
  • Static MCP tool descriptor that defines the tool name ('chat_via_lucairn'), description, and input JSON schema. Listed in the ListToolsResponse.
    /** Static MCP tool descriptor for chat_via_lucairn. */
    export const CHAT_TOOL_DESCRIPTOR = {
      name: CHAT_TOOL_NAME,
      description:
        'Send an Anthropic Messages API request through the Lucairn ' +
        'privacy gateway. PII is detected and replaced with placeholders ' +
        'before reaching the upstream LLM. Developer-tier responses ' +
        'contain those placeholders; Pro and Enterprise tiers can enable ' +
        'automatic re-linking back to the original values.',
      inputSchema: {
        type: 'object',
        properties: {
          model: {
            type: 'string',
            description:
              'Anthropic model identifier (e.g. "claude-sonnet-4-6").',
          },
          max_tokens: {
            type: 'number',
            description:
              'Maximum tokens to generate in the response. Required by ' +
              'the Anthropic Messages API.',
          },
          messages: {
            type: 'array',
            description:
              'Conversation messages. Each item is { role: "user" | "assistant", content: string | array }.',
            items: {
              type: 'object',
              properties: {
                role: { type: 'string', enum: ['user', 'assistant'] },
                content: {},
              },
              required: ['role', 'content'],
            },
          },
          system: {
            description:
              'Optional system prompt. May be a string or an array of ' +
              'content blocks. Sanitization policy is per-API-key on the ' +
              'gateway side (sanitize or passthrough_audit).',
          },
          temperature: {
            type: 'number',
            description: 'Optional sampling temperature (0..1).',
          },
        },
        required: ['model', 'max_tokens', 'messages'],
      },
    } as const
  • ListToolsRequestSchema handler that registers the chat_via_lucairn tool in the tool catalog.
    server.setRequestHandler(ListToolsRequestSchema, async () => {
      return { tools: [CHAT_TOOL_DESCRIPTOR] }
    })
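For context, an MCP client invokes the tool registered above with a standard `tools/call` request. The JSON-RPC envelope below is a sketch; the request id and argument values are illustrative.

```typescript
// Illustrative tools/call envelope an MCP client would send to
// invoke chat_via_lucairn (id and argument values are examples).
const callToolRequest = {
  jsonrpc: '2.0' as const,
  id: 1,
  method: 'tools/call',
  params: {
    name: 'chat_via_lucairn',
    arguments: {
      model: 'gpt-4o-mini', // routes to OpenAI via OPENAI_API_KEY
      max_tokens: 512,
      messages: [{ role: 'user', content: 'Hello' }],
    },
  },
}
```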
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fully bears the burden of behavioral disclosure. It reveals PII replacement, provider routing, wire format (Anthropic Messages API), and tier-dependent placeholder handling. It could mention rate limits or error handling, but current coverage is strong.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is information-dense without fluff. Every sentence adds value, covering key aspects in a logical order. It could be slightly shorter, but it remains concise for the complexity involved.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema and sibling tools, the description covers routing, PII detection, tier behavior, and wire format comprehensively. It is complete enough for an agent to understand and invoke the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Since schema coverage is 100%, the baseline is 3. The description adds significant meaning beyond the schema by explaining model routing rules, BYOK environment variables, system prompt sanitization policy, and the requirement for max_tokens. This justifies a score above baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the tool's function: sending a chat request through the Lucairn privacy gateway with BYOK and PII detection. It distinguishes itself from any sibling tools (none exist) by detailing its unique privacy and cross-provider routing features.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use the tool (chat requests requiring privacy and BYOK) and provides detailed routing rules based on the model parameter. However, it does not explicitly state when not to use it or mention alternatives, as there are no siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Declade/lucairn-sdks'
