Glama
unixlamadev-spec

lightningprox-mcp

chat

Send messages to AI models from Anthropic, OpenAI, Google, Mistral, and Together.ai using Bitcoin Lightning payments. Pay per request with prepaid spend tokens without accounts or API keys.

Instructions

Send a message to an AI model via LightningProx. Pay per request with a Lightning spend token. Supports 19 models from Anthropic, OpenAI, Together.ai, Mistral, and Google.

Input Schema

  • model (required): Model ID (e.g. claude-opus-4-5-20251101, gpt-4-turbo, gemini-2.5-pro, mistral-large-latest, deepseek-ai/DeepSeek-V3)
  • message (required): The user message to send
  • spend_token (required): LightningProx spend token (starts with lnpx_). Get one at lightningprox.com/topup
  • max_tokens (optional, default 1024): Maximum tokens in the response
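To make the schema concrete, here is a hedged example of an arguments object that would satisfy it. The model ID is taken from the schema's own examples; the spend token value is a placeholder, not a real credential:

```typescript
// Example arguments for the "chat" tool. The spend token shown is a
// placeholder; real tokens start with "lnpx_" and come from lightningprox.com/topup.
const chatArgs = {
  model: "claude-opus-4-5-20251101", // required: model ID
  message: "Summarize the Lightning Network in one sentence.", // required
  spend_token: "lnpx_exampletoken123", // required: LightningProx spend token
  max_tokens: 256, // optional: the server defaults to 1024 when omitted
};

// The three keys the schema marks as required:
const required = ["model", "message", "spend_token"];
const missing = required.filter((k) => !(k in chatArgs));
console.log(missing.length === 0 ? "valid" : `missing: ${missing.join(", ")}`);
```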

Implementation Reference

  • The handler logic for the 'chat' tool inside the MCP server request handler.
    case "chat": {
      const { model, message, spend_token, max_tokens } = args as any;
      const result = await chat(model, message, spend_token, max_tokens);

      // Normalize across provider response shapes: Anthropic-style
      // (content[0].text) and OpenAI-style (choices[0].message.content),
      // falling back to the raw JSON if neither matches.
      const content =
        result.content?.[0]?.text ||
        result.choices?.[0]?.message?.content ||
        JSON.stringify(result);

      // Append a token-usage footer when the upstream API reports usage.
      const usage = result.usage
        ? `\n\n— ${result.usage.input_tokens ?? result.usage.prompt_tokens ?? "?"} in / ${result.usage.output_tokens ?? result.usage.completion_tokens ?? "?"} out`
        : "";

      return {
        content: [{ type: "text", text: content + usage }],
      };
    }
  • The helper function that performs the network request to the AI model API.
    async function chat(
      model: string,
      message: string,
      spendToken: string,
      maxTokens: number = 1024
    ): Promise<any> {
      const res = await fetch(`${LIGHTNINGPROX_URL}/v1/messages`, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "X-Spend-Token": spendToken,
        },
        body: JSON.stringify({
          model,
          messages: [{ role: "user", content: message }],
          max_tokens: maxTokens,
        }),
      });
      if (!res.ok) {
        // Error bodies are not guaranteed to be JSON; fall back to the status code.
        let detail = `LightningProx error: ${res.status}`;
        try {
          const err = await res.json() as any;
          if (err?.error) {
            detail = typeof err.error === "string" ? err.error : JSON.stringify(err.error);
          }
        } catch {
          // keep the status-based message
        }
        throw new Error(detail);
      }
      return res.json();
    }
  • Tool definition and schema for the 'chat' tool.
    {
      name: "chat",
      description:
        "Send a message to an AI model via LightningProx. Pay per request with a Lightning spend token. Supports 19 models from Anthropic, OpenAI, Together.ai, Mistral, and Google.",
      inputSchema: {
        type: "object",
        properties: {
          model: {
            type: "string",
            description:
              "Model ID (e.g. claude-opus-4-5-20251101, gpt-4-turbo, gemini-2.5-pro, mistral-large-latest, deepseek-ai/DeepSeek-V3)",
          },
          message: {
            type: "string",
            description: "The user message to send",
          },
          spend_token: {
            type: "string",
            description: "LightningProx spend token (starts with lnpx_). Get one at lightningprox.com/topup",
          },
          max_tokens: {
            type: "number",
            description: "Maximum tokens in response (default: 1024)",
          },
        },
        required: ["model", "message", "spend_token"],
      },
    },
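The fallback chain in the handler above can be read as a small pure function. A sketch (extractContent is a name of my choosing, not part of the server) covering both response shapes the code anticipates:

```typescript
// Sketch of the handler's fallback chain as a standalone function.
// Anthropic returns { content: [{ type: "text", text }] };
// OpenAI-compatible APIs return { choices: [{ message: { content } }] }.
function extractContent(result: any): string {
  return (
    result?.content?.[0]?.text ||
    result?.choices?.[0]?.message?.content ||
    JSON.stringify(result)
  );
}

const anthropicShaped = { content: [{ type: "text", text: "hi" }] };
const openaiShaped = { choices: [{ message: { content: "hello" } }] };
console.log(extractContent(anthropicShaped)); // → "hi"
console.log(extractContent(openaiShaped));    // → "hello"
```

Note that the `||` fallback means an empty-string reply would also fall through to the raw JSON; a stricter implementation might test each shape explicitly.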
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the payment requirement ('Pay per request with a Lightning spend token'), which is crucial context, but says nothing about rate limits, authentication beyond the token, error behavior, response format, or whether the operation is read-only. For a tool with financial implications and no annotations, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
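One way to close this gap, assuming the server targets an MCP SDK recent enough to support tool annotations, is to attach advisory hints to the tool definition. A sketch, where the field values reflect my reading of this tool rather than anything the author has declared:

```typescript
// Hypothetical annotations for the "chat" tool, following the MCP
// ToolAnnotations shape. All hints are advisory, not enforced.
const chatAnnotations = {
  title: "Chat (paid, per-request)",
  readOnlyHint: false,    // each call spends from the prepaid token
  destructiveHint: false, // no data is deleted or overwritten
  idempotentHint: false,  // repeating a call incurs another charge
  openWorldHint: true,    // talks to an external paid API
};
console.log(Object.keys(chatAnnotations).length); // 5
```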

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences that efficiently convey core functionality and key constraints. It's front-loaded with the primary purpose, though the second sentence could be slightly more structured. Every sentence earns its place by providing essential information about the service and model support.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a paid API call tool with financial implications and no output schema, the description is incomplete. It doesn't explain what the tool returns (response format, error handling), doesn't mention cost implications beyond requiring a token, and provides no guidance on model selection or performance characteristics. For a tool with this complexity and no structured output documentation, the description should do more.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
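The missing output documentation could be addressed in the tool definition itself, assuming the server targets a recent MCP spec revision that supports declaring tool output schemas. A hypothetical sketch, with field names mirroring what the handler actually returns:

```typescript
// Hypothetical output schema for the "chat" tool; the shape mirrors the
// single text block the handler returns (reply plus a usage footer).
const chatOutputSchema = {
  type: "object",
  properties: {
    text: {
      type: "string",
      description: "Model reply, with a token-usage footer appended when available",
    },
  },
  required: ["text"],
};
console.log(JSON.stringify(chatOutputSchema.required)); // → ["text"]
```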

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all four parameters thoroughly. The description adds little beyond what the schema provides: it mentions the payment token requirement and the model providers, but offers no additional parameter semantics, constraints, or usage examples beyond the schema's parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Send a message to an AI model via LightningProx'), identifies the resource (AI models from multiple providers), and distinguishes this tool from its siblings (payment/invoice/balance/model-listing tools) by focusing on message sending rather than payment management or metadata retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('Pay per request with a Lightning spend token') but doesn't explicitly state when to use this tool versus alternatives. It mentions the tool supports 19 models from specific providers, which provides some contextual guidance, but lacks explicit 'when-not-to-use' instructions or named alternatives for similar functions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
