chat_via_lucairn
Send chat requests through Lucairn's privacy gateway. PII is detected and replaced before reaching the upstream LLM, with cross-provider routing to Anthropic or OpenAI.
Instructions
Send a chat request through the Lucairn privacy gateway with cross-provider BYOK (Anthropic + OpenAI). PII is detected and replaced with placeholders before reaching the upstream LLM. The gateway picks the upstream provider based on the `model` parameter: `claude-*` / `anthropic-*` use ANTHROPIC_API_KEY; `gpt-*` / `openai-*` / `o1-*` / `o3-*` / `o4-*` use OPENAI_API_KEY. The wire format follows the Anthropic Messages API. Developer-tier responses contain raw placeholders; Pro and Enterprise tiers can enable automatic re-linking back to the original values.
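The routing rules above amount to a case-insensitive prefix match on the model name. A minimal sketch of those documented rules (the real routing logic lives on the gateway, not in this MCP server; the function and constant names here are illustrative):

```typescript
// Illustrative sketch of the documented model-prefix routing rules.
// The actual routing happens on the Lucairn gateway side.
type Provider = 'anthropic' | 'openai'

const ANTHROPIC_PREFIXES = ['claude-', 'anthropic-']
const OPENAI_PREFIXES = ['gpt-', 'openai-', 'o1-', 'o3-', 'o4-']

function routeModel(model: string): Provider | null {
  const m = model.toLowerCase() // matching is case-insensitive
  if (ANTHROPIC_PREFIXES.some((p) => m.startsWith(p))) return 'anthropic'
  if (OPENAI_PREFIXES.some((p) => m.startsWith(p))) return 'openai'
  return null // unknown model family: no provider key applies
}
```

Only the key for the matched provider needs to be set in the MCP client env; a model that matches neither family cannot be routed.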
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| model | Yes | Model identifier. Routing rules: `claude-*` and `anthropic-*` route to Anthropic via ANTHROPIC_API_KEY; `gpt-*`, `openai-*`, `o1-*`, `o3-*`, and `o4-*` route to OpenAI via OPENAI_API_KEY. Examples: `claude-sonnet-4-6`, `gpt-4o-mini`, `o3-mini`. Set one or both of ANTHROPIC_API_KEY and OPENAI_API_KEY in your MCP client env for BYOK; matching is case-insensitive. | |
| max_tokens | Yes | Maximum tokens to generate in the response. Required by the Anthropic Messages API. | |
| messages | Yes | Conversation messages. Each item is { role: "user" | "assistant", content: string | array }. | |
| system | No | Optional system prompt. May be a string or an array of content blocks. Sanitization policy is per-API-key on the gateway side (sanitize or passthrough_audit). | |
| temperature | No | Optional sampling temperature (0..1). | |
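Put together, a request conforming to the schema above might look like the following (model name, prompt text, and values are illustrative):

```typescript
// Example arguments for a chat_via_lucairn tool call.
const exampleArgs = {
  model: 'claude-sonnet-4-6',      // routes to Anthropic via ANTHROPIC_API_KEY
  max_tokens: 1024,                // required by the Anthropic Messages API
  messages: [
    { role: 'user', content: 'Summarize the attached customer email.' },
  ],
  system: 'You are a concise assistant.', // optional
  temperature: 0.2,                        // optional, 0..1
}
```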
Implementation Reference
- mcp-server/src/server.ts:211-261 (handler): The CallToolRequestSchema handler that executes the chat_via_lucairn tool logic. It validates the tool name, checks the input size cap, parses arguments via parseChatToolArgs, calls GatewayClient.sendMessage, and formats the result via formatToolResult.
```ts
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params
  if (name !== CHAT_TOOL_NAME) {
    return {
      isError: true,
      content: [{ type: 'text', text: `Unknown tool: ${name}` }],
    }
  }
  // Soft input cap (TOB-005): a malicious or buggy MCP client could
  // hand a 100MB messages[] and we'd JSON.stringify it before any
  // network shipping. Bound local memory at 1 MiB and surface a
  // structured tool error instead of crashing the process.
  if (exceedsInputCap(args)) {
    return {
      isError: true,
      content: [
        {
          type: 'text',
          text: `Tool input exceeds max size (${MAX_INPUT_BYTES} bytes). Reduce messages[] or system prompt size.`,
        },
      ],
    }
  }
  let parsed: ChatToolInput
  try {
    parsed = parseChatToolArgs(args)
  } catch (err) {
    const msg = err instanceof Error ? err.message : String(err)
    return {
      isError: true,
      content: [{ type: 'text', text: msg }],
    }
  }
  try {
    const resp = await client.sendMessage(parsed)
    return formatToolResult(resp)
  } catch (err) {
    if (err instanceof GatewayError) {
      return gatewayErrorToToolResult(err)
    }
    const msg = err instanceof Error ? err.message : String(err)
    return {
      isError: true,
      content: [{ type: 'text', text: `Internal error: ${msg}` }],
    }
  }
})

return server
```

- mcp-server/src/server.ts:145-162 (handler): Formats the gateway's Anthropic response into the MCP CallToolResult shape (collapses content blocks, appends compliance metadata trailer).
```ts
export function formatToolResult(resp: AnthropicResponseBody): {
  content: Array<{ type: 'text'; text: string }>
} {
  const text = resp.content
    .filter((b) => b.type === 'text' && typeof b.text === 'string')
    .map((b) => b.text)
    .join('')
  const compliance = resp.metadata?.dsa_compliance
  const trailer =
    compliance && compliance.veil_summary_url
      ? `\n\n_Lucairn certificate: ${compliance.veil_summary_url}_`
      : ''
  return { content: [{ type: 'text', text: text + trailer }] }
}
```

- mcp-server/src/server.ts:117-138 (handler): Validates and narrows raw MCP tool arguments into a ChatToolInput object. Called by the handler before forwarding to the gateway.
```ts
export function parseChatToolArgs(raw: unknown): ChatToolInput {
  if (!raw || typeof raw !== 'object') {
    throw new Error('Tool arguments must be an object.')
  }
  const args = raw as Record<string, unknown>
  if (typeof args.model !== 'string' || args.model.length === 0) {
    throw new Error('Tool argument `model` is required and must be a string.')
  }
  if (typeof args.max_tokens !== 'number' || args.max_tokens <= 0) {
    throw new Error(
      'Tool argument `max_tokens` is required and must be a positive number.',
    )
  }
  if (!Array.isArray(args.messages) || args.messages.length === 0) {
    throw new Error(
      'Tool argument `messages` is required and must be a non-empty array.',
    )
  }
  // Pass through; deeper validation lives on the gateway side
  // (anthropicRequest unmarshal + validate at mcp_handler.go:84-99).
  return args as unknown as ChatToolInput
}
```

- mcp-server/src/types.ts:32-44 (schema): ChatToolInput interface, the input type accepted by the chat_via_lucairn tool.
```ts
/** Inputs accepted by the chat_via_lucairn MCP tool. */
export interface ChatToolInput {
  /** Required. Anthropic model identifier (e.g. "claude-sonnet-4-6"). */
  model: string
  /** Required. Maximum tokens to generate in the response. */
  max_tokens: number
  /** Required. Conversation messages array. */
  messages: AnthropicMessage[]
  /** Optional. System prompt — may be a string or an array of content blocks. */
  system?: string | Array<{ type: string; text: string; [k: string]: unknown }>
  /** Optional. Sampling temperature (0..1). */
  temperature?: number
}
```

- mcp-server/src/server.ts:61-110 (registration): Static MCP tool descriptor that defines the tool name ('chat_via_lucairn'), description, and input JSON schema. Listed in the ListToolsResponse.
```ts
/** Static MCP tool descriptor for chat_via_lucairn. */
export const CHAT_TOOL_DESCRIPTOR = {
  name: CHAT_TOOL_NAME,
  description:
    'Send an Anthropic Messages API request through the Lucairn ' +
    'privacy gateway. PII is detected and replaced with placeholders ' +
    'before reaching the upstream LLM. Developer-tier responses ' +
    'contain those placeholders; Pro and Enterprise tiers can enable ' +
    'automatic re-linking back to the original values.',
  inputSchema: {
    type: 'object',
    properties: {
      model: {
        type: 'string',
        description:
          'Anthropic model identifier (e.g. "claude-sonnet-4-6").',
      },
      max_tokens: {
        type: 'number',
        description:
          'Maximum tokens to generate in the response. Required by ' +
          'the Anthropic Messages API.',
      },
      messages: {
        type: 'array',
        description:
          'Conversation messages. Each item is { role: "user" | "assistant", content: string | array }.',
        items: {
          type: 'object',
          properties: {
            role: { type: 'string', enum: ['user', 'assistant'] },
            content: {},
          },
          required: ['role', 'content'],
        },
      },
      system: {
        description:
          'Optional system prompt. May be a string or an array of ' +
          'content blocks. Sanitization policy is per-API-key on the ' +
          'gateway side (sanitize or passthrough_audit).',
      },
      temperature: {
        type: 'number',
        description: 'Optional sampling temperature (0..1).',
      },
    },
    required: ['model', 'max_tokens', 'messages'],
  },
} as const
```

- mcp-server/src/server.ts:207-209 (registration): ListToolsRequestSchema handler that registers the chat_via_lucairn tool in the tool catalog.
```ts
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return { tools: [CHAT_TOOL_DESCRIPTOR] }
})
```
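The CallToolRequestSchema handler bounds local memory via exceedsInputCap before any serialized request leaves the process. A minimal sketch of what that cap could look like, assuming it measures the UTF-8 byte length of the JSON-serialized arguments (the real implementation lives in mcp-server/src/server.ts; this version is illustrative only):

```typescript
// Hypothetical sketch of exceedsInputCap (TOB-005 soft input cap).
// Bounds local memory at 1 MiB of serialized tool input.
const MAX_INPUT_BYTES = 1024 * 1024 // 1 MiB

function exceedsInputCap(args: unknown): boolean {
  try {
    // Measure bytes, not string length: multi-byte characters count fully.
    return Buffer.byteLength(JSON.stringify(args) ?? '', 'utf8') > MAX_INPUT_BYTES
  } catch {
    // Circular or otherwise unserializable input: reject it as over the cap.
    return true
  }
}
```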