lightningprox-mcp (unixlamadev-spec)

chat

Send messages to AI models from Anthropic, OpenAI, Google, Mistral, and Together.ai using Bitcoin Lightning payments. Pay per request with prepaid spend tokens without accounts or API keys.

Instructions

Send a message to an AI model via LightningProx. Pay per request with a Lightning spend token. Supports 19 models from Anthropic, OpenAI, Together.ai, Mistral, and Google.

Input Schema

Name         Required  Description                                                     Default
-----------  --------  --------------------------------------------------------------  -------
model        Yes       Model ID (e.g. claude-opus-4-5-20251101, gpt-4-turbo,
                       gemini-2.5-pro, mistral-large-latest, deepseek-ai/DeepSeek-V3)
message      Yes       The user message to send
spend_token  Yes       LightningProx spend token (starts with lnpx_). Get one at
                       lightningprox.com/topup
max_tokens   No        Maximum tokens in the response                                  1024
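A minimal sketch of an arguments object that satisfies the schema above; the model choice and the spend token value are placeholders for illustration, not real credentials.

```typescript
// Example arguments for the `chat` tool, matching the input schema above.
// The spend_token value is a placeholder, not a real token.
const exampleArgs = {
  model: "gpt-4-turbo",
  message: "Summarize the Lightning Network in one sentence.",
  spend_token: "lnpx_placeholder", // obtain a real one at lightningprox.com/topup
  max_tokens: 256, // optional; the server defaults to 1024
};

// The schema marks model, message, and spend_token as required.
const required = ["model", "message", "spend_token"];
const missing = required.filter((k) => !(k in exampleArgs));
console.log(missing.length === 0 ? "valid" : `missing: ${missing.join(", ")}`); // prints "valid"
```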

Implementation Reference

  • The handler logic for the 'chat' tool inside the MCP server request handler.
    case "chat": {
      const { model, message, spend_token, max_tokens } = args as any;
      const result = await chat(model, message, spend_token, max_tokens);

      // Anthropic-style responses carry text at content[0].text; OpenAI-style
      // responses at choices[0].message.content. Fall back to raw JSON.
      const content =
        result.content?.[0]?.text ||
        result.choices?.[0]?.message?.content ||
        JSON.stringify(result);

      // Append token usage, if reported, under either provider's field names.
      const usage = result.usage
        ? `\n\n— ${result.usage.input_tokens ?? result.usage.prompt_tokens ?? "?"} in / ${result.usage.output_tokens ?? result.usage.completion_tokens ?? "?"} out`
        : "";

      return {
        content: [{ type: "text", text: content + usage }],
      };
    }
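The handler's two fallback chains can be exercised on their own. A standalone sketch, where the sample response objects are illustrative shapes rather than real API payloads:

```typescript
// Sketch of the handler's normalization logic. Anthropic-style responses
// carry text at content[0].text; OpenAI-style responses at
// choices[0].message.content; anything else is serialized as JSON.
function extractText(result: any): string {
  return (
    result.content?.[0]?.text ||
    result.choices?.[0]?.message?.content ||
    JSON.stringify(result)
  );
}

// Token usage under either provider's field names.
function formatUsage(usage: any): string {
  const input = usage.input_tokens ?? usage.prompt_tokens ?? "?";
  const output = usage.output_tokens ?? usage.completion_tokens ?? "?";
  return `${input} in / ${output} out`;
}

const anthropicShaped = {
  content: [{ type: "text", text: "hello" }],
  usage: { input_tokens: 3, output_tokens: 1 },
};
const openaiShaped = {
  choices: [{ message: { role: "assistant", content: "hello" } }],
  usage: { prompt_tokens: 3, completion_tokens: 1 },
};

console.log(extractText(anthropicShaped), "|", formatUsage(anthropicShaped.usage)); // hello | 3 in / 1 out
console.log(extractText(openaiShaped), "|", formatUsage(openaiShaped.usage));       // hello | 3 in / 1 out
```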
  • The helper function that performs the network request to the AI model API.
    async function chat(
      model: string,
      message: string,
      spendToken: string,
      maxTokens: number = 1024
    ): Promise<any> {
      const res = await fetch(`${LIGHTNINGPROX_URL}/v1/messages`, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "X-Spend-Token": spendToken,
        },
        body: JSON.stringify({
          model,
          messages: [{ role: "user", content: message }],
          max_tokens: maxTokens,
        }),
      });
      if (!res.ok) {
        // The error body may not be JSON (e.g. a gateway error page), and
        // err.error may be an object rather than a string, so parse and
        // unwrap defensively instead of risking "[object Object]".
        const err: any = await res.json().catch(() => null);
        const detail =
          typeof err?.error === "string" ? err.error : err?.error?.message;
        throw new Error(detail || `LightningProx error: ${res.status}`);
      }
      return res.json();
    }
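A usage sketch of the helper with `fetch` stubbed out, so it runs without a network call or a real spend token; the `LIGHTNINGPROX_URL` value and the canned response body are assumptions for illustration:

```typescript
const LIGHTNINGPROX_URL = "https://lightningprox.com"; // assumed base URL

async function chat(
  model: string,
  message: string,
  spendToken: string,
  maxTokens: number = 1024
): Promise<any> {
  const res = await fetch(`${LIGHTNINGPROX_URL}/v1/messages`, {
    method: "POST",
    headers: { "Content-Type": "application/json", "X-Spend-Token": spendToken },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: message }],
      max_tokens: maxTokens,
    }),
  });
  if (!res.ok) throw new Error(`LightningProx error: ${res.status}`);
  return res.json();
}

// Stub fetch with a canned Anthropic-style response so the sketch runs offline.
(globalThis as any).fetch = async () =>
  new Response(JSON.stringify({ content: [{ type: "text", text: "pong" }] }), {
    status: 200,
    headers: { "Content-Type": "application/json" },
  });

const result = await chat("claude-opus-4-5-20251101", "ping", "lnpx_placeholder");
console.log(result.content[0].text); // "pong"
```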
  • Tool definition and schema for the 'chat' tool.
    {
      name: "chat",
      description:
        "Send a message to an AI model via LightningProx. Pay per request with a Lightning spend token. Supports 19 models from Anthropic, OpenAI, Together.ai, Mistral, and Google.",
      inputSchema: {
        type: "object",
        properties: {
          model: {
            type: "string",
            description:
              "Model ID (e.g. claude-opus-4-5-20251101, gpt-4-turbo, gemini-2.5-pro, mistral-large-latest, deepseek-ai/DeepSeek-V3)",
          },
          message: {
            type: "string",
            description: "The user message to send",
          },
          spend_token: {
            type: "string",
            description: "LightningProx spend token (starts with lnpx_). Get one at lightningprox.com/topup",
          },
          max_tokens: {
            type: "number",
            description: "Maximum tokens in response (default: 1024)",
          },
        },
        required: ["model", "message", "spend_token"],
      },
    },

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/unixlamadev-spec/lightningprox-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.