
ping

Verify the operational status of the Luma API by sending a ping request. Use this tool to confirm the API is reachable and responding before running video or image generation tasks.

Instructions

Check if the Luma API is running

Input Schema


No arguments
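Because the tool takes no arguments, a client invokes it with an empty `arguments` object. A minimal sketch of the JSON-RPC `tools/call` message an MCP client would send (the request `id` is arbitrary):

```python
import json

# MCP tools/call request for the ping tool; "arguments" is an empty object
# because the input schema declares no parameters.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "ping", "arguments": {}},
}
print(json.dumps(request))
```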

Implementation Reference

  • The main handler function for the 'ping' tool. It makes a GET request to the Luma API's /ping endpoint and returns a status message indicating availability.
    async def ping(parameters: dict) -> str:
        """Check if the Luma API is running."""
        try:
            await _make_luma_request("GET", "/ping")
            return "Luma API is available and responding"
        except Exception as e:
            logger.error(f"Error in ping: {str(e)}", exc_info=True)
            return f"Error pinging Luma API: {str(e)}"
  • Pydantic BaseModel defining the input schema for the ping tool, which requires no parameters.
    class PingInput(BaseModel): pass
  • Registration of the 'ping' tool in the server's list_tools() method, specifying name, description, and input schema.
    Tool(
        name=LumaTools.PING,
        description="Check if the Luma API is running",
        inputSchema=PingInput.model_json_schema(),
    ),
  • Dispatch logic in the server's call_tool() method that routes 'ping' tool calls to the ping handler function.
    case LumaTools.PING:
        result = await ping(arguments)
        return [TextContent(type="text", text=result)]
  • Enum value defining the tool name constant 'PING = "ping"' used in registrations.
    PING = "ping"

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/bobtista/luma-ai-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.