unixlamadev-spec / lightningprox-mcp

Server Configuration

Describes the environment variables required to run the server.

Name | Required | Description | Default

No environment variables required

Capabilities

Features and capabilities supported by this server

Capability | Details
tools | {}

Tools

Functions exposed to the LLM to take actions

Name | Description
chat

Send a message to an AI model via LightningProx. Pay per request with a Lightning spend token. Supports 19 models from Anthropic, OpenAI, Together.ai, Mistral, and Google.

list_models

List all AI models available through LightningProx. Returns model IDs, names, providers, and pricing. 19 models across Anthropic, OpenAI, Together.ai, Mistral, and Google.

get_balance

Check the remaining balance on a LightningProx spend token. Returns balance in sats.

generate_invoice

Generate a Bitcoin Lightning invoice to top up a LightningProx spend token. Returns a BOLT11 payment request and charge ID. Pay the invoice with any Lightning wallet.

check_payment

Check if a Lightning invoice has been paid and retrieve the spend token. Poll this after generate_invoice until the payment is confirmed.
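These tools compose into a simple pay-then-chat flow: generate an invoice, poll until it is paid, then spend the resulting token on chat requests. The sketch below shows that flow using the official MCP TypeScript SDK. It is illustrative only: the launch command and every argument name (amount_sats, charge_id, spend_token, model, message) are assumptions, since this page does not publish the tools' input schemas; inspect the schemas the server actually returns before relying on them.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch the server over stdio. The exact command is an assumption;
  // use whatever command your MCP client configuration points at.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "lightningprox-mcp"],
  });
  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);

  // 1. Ask for a Lightning invoice to fund a spend token.
  //    `amount_sats` is an illustrative argument name, not a documented one.
  const invoice = await client.callTool({
    name: "generate_invoice",
    arguments: { amount_sats: 1000 },
  });
  console.log(invoice.content); // BOLT11 payment request and charge ID

  // 2. After paying the invoice with a Lightning wallet, poll check_payment
  //    until it confirms and returns the spend token (`charge_id` is assumed).
  const payment = await client.callTool({
    name: "check_payment",
    arguments: { charge_id: "<charge id from step 1>" },
  });
  console.log(payment.content); // spend token once the payment is confirmed

  // 3. Check the token's balance, then send a chat request.
  //    `spend_token`, `model`, and `message` are likewise illustrative names;
  //    call list_models first to see the real model IDs and pricing.
  const balance = await client.callTool({
    name: "get_balance",
    arguments: { spend_token: "<token from step 2>" },
  });
  console.log(balance.content); // remaining balance in sats

  const reply = await client.callTool({
    name: "chat",
    arguments: {
      spend_token: "<token from step 2>",
      model: "<model id from list_models>",
      message: "Hello from LightningProx",
    },
  });
  console.log(reply.content);

  await client.close();
}

main().catch(console.error);

In practice a host application (Claude Desktop, an IDE agent, and so on) drives these calls on the model's behalf; the sketch only illustrates the ordering: generate_invoice, then check_payment, then get_balance and chat against the funded token.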

Prompts

Interactive templates invoked by user choice

Name | Description

No prompts

Resources

Contextual data attached and managed by the client

Name | Description

No resources


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/unixlamadev-spec/lightningprox-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.