ping
Check if the Luma AI video and image generation API is operational to verify service availability before creating content.
Instructions
Check if the Luma API is running
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| *No arguments* | | | |
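Because the tool takes no arguments, an MCP `tools/call` invocation needs only the tool name and an empty `arguments` object. A minimal sketch of the JSON-RPC 2.0 message a client would send (the `id` value is arbitrary; field names follow the MCP specification):

```python
import json

# Hypothetical tools/call request for the ping tool. "arguments" is an
# empty object because PingInput defines no fields.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "ping", "arguments": {}},
}

print(json.dumps(request, indent=2))
```

The server responds with a single `TextContent` item whose text is either the success message or the error string produced by the handler.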
Implementation Reference
- src/luma_ai_mcp_server/server.py:246-253 (handler): The main handler function for the `ping` tool. It makes a GET request to the Luma API's `/ping` endpoint to check availability and returns a status message.

  ```python
  async def ping(parameters: dict) -> str:
      """Check if the Luma API is running."""
      try:
          await _make_luma_request("GET", "/ping")
          return "Luma API is available and responding"
      except Exception as e:
          logger.error(f"Error in ping: {str(e)}", exc_info=True)
          return f"Error pinging Luma API: {str(e)}"
  ```

- (schema): Pydantic input schema for the `ping` tool. It is empty because the tool takes no parameters.

  ```python
  class PingInput(BaseModel):
      pass
  ```

- src/luma_ai_mcp_server/server.py:498-502 (registration): Tool registration in the `list_tools()` function, specifying the name, description, and input schema for the `ping` tool.

  ```python
  Tool(
      name=LumaTools.PING,
      description="Check if the Luma API is running",
      inputSchema=PingInput.model_json_schema(),
  ),
  ```

- src/luma_ai_mcp_server/server.py:555-557 (registration): Dispatch logic in the `call_tool()` function that routes calls to the `ping` handler and wraps the result in a `TextContent` response.

  ```python
  case LumaTools.PING:
      result = await ping(arguments)
      return [TextContent(type="text", text=result)]
  ```

- (enum): Enum value defining the string name `"ping"` for the `LumaTools.PING` constant used in the registrations above.

  ```python
  PING = "ping"
  ```
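The handler's control flow (try the request, map success and failure to status strings) can be exercised without a live server. The sketch below is an assumption-labeled reworking, not the server's code: it replaces the private `_make_luma_request` helper with an injectable callable that follows the same contract (perform the call, raise on failure), so both paths can be tested offline.

```python
import asyncio


async def check_luma(request) -> str:
    # `request` is a hypothetical stand-in for _make_luma_request:
    # an async callable that performs the HTTP call and raises on failure.
    try:
        await request("GET", "/ping")
        return "Luma API is available and responding"
    except Exception as e:
        return f"Error pinging Luma API: {e}"


# Stub transports standing in for a reachable and an unreachable API.
async def ok(method, path):
    return {}

async def boom(method, path):
    raise RuntimeError("connection refused")


print(asyncio.run(check_luma(ok)))    # Luma API is available and responding
print(asyncio.run(check_luma(boom)))  # Error pinging Luma API: connection refused
```

Injecting the transport this way mirrors the handler's behavior of never raising to the caller: every outcome is folded into a plain status string, which is what `call_tool()` then wraps as `TextContent`.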