
ping

Checks whether the Luma AI video and image generation API is operational, so you can verify service availability before creating content.

Instructions

Check if the Luma API is running

Input Schema


This tool takes no arguments.

Implementation Reference

  • The main handler function for the 'ping' tool. It makes a GET request to the Luma API's /ping endpoint to check availability and returns a status message.
    async def ping(parameters: dict) -> str:
        """Check if the Luma API is running."""
        try:
            await _make_luma_request("GET", "/ping")
            return "Luma API is available and responding"
        except Exception as e:
            logger.error(f"Error in ping: {str(e)}", exc_info=True)
            return f"Error pinging Luma API: {str(e)}"
  • Pydantic input schema for the ping tool. It is empty since the ping tool requires no parameters.
    class PingInput(BaseModel):
        pass
  • Tool registration in the list_tools() function, specifying the name, description, and input schema for the ping tool.
    Tool(
        name=LumaTools.PING,
        description="Check if the Luma API is running",
        inputSchema=PingInput.model_json_schema(),
    ),
  • Dispatch logic in the call_tool() function that routes calls to the ping handler and formats the response.
    case LumaTools.PING:
        result = await ping(arguments)
        return [TextContent(type="text", text=result)]
  • Enum value defining the string name 'ping' for the LumaTools.PING constant used in registrations.
    PING = "ping"

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/bobtista/luma-ai-mcp-server'
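The same request can be issued from Python with the standard library. A small sketch that builds the request without sending it (uncomment the `urlopen` line to actually fetch; kept offline here so the example runs without network access):

```python
import urllib.request

URL = "https://glama.ai/api/mcp/v1/servers/bobtista/luma-ai-mcp-server"

# Equivalent of: curl -X GET '<URL>'
req = urllib.request.Request(URL, method="GET")

# body = urllib.request.urlopen(req).read()  # performs the actual GET

print(req.get_method(), req.full_url)
```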

If you have feedback or need assistance with the MCP directory API, please join our Discord server.