
get_usage

Retrieve usage and spending history for your Fal.ai workspace, showing quantity, cost, and breakdown by model with date filtering options.

Instructions

Get usage and spending history for your Fal.ai workspace. Shows quantity, cost, and breakdown by model. Requires admin API key.

Input Schema

  • start (optional): Start date (YYYY-MM-DD format). Defaults to 7 days ago.
  • end (optional): End date (YYYY-MM-DD format). Defaults to today.
  • models (optional): Filter by specific model IDs/aliases.
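
All three fields are optional; an empty arguments object is valid and falls back to the last 7 days. As an illustration, a filtered call might pass arguments like the following (the dates and model ID are made up for the example):

    arguments = {
        "start": "2024-01-01",
        "end": "2024-01-07",
        "models": ["fal-ai/flux/dev"],
    }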

Implementation Reference

  • The handler function that implements the core logic for the 'get_usage' tool. It processes input arguments, resolves model IDs to endpoint IDs, fetches usage data from the model registry, handles errors, and formats the response as markdown text.
    async def handle_get_usage(
        arguments: Dict[str, Any],
        registry: ModelRegistry,
    ) -> List[TextContent]:
        """Handle the get_usage tool."""
        # Parse dates
        today = datetime.now().date()
        start_str = arguments.get("start") or (today - timedelta(days=7)).isoformat()
        end_str = arguments.get("end") or today.isoformat()

        # Resolve endpoint filters if provided
        model_inputs = arguments.get("models", [])
        endpoint_ids = []
        failed_models = []
        if model_inputs:
            for model_input in model_inputs:
                try:
                    endpoint_id = await registry.resolve_model_id(model_input)
                    endpoint_ids.append(endpoint_id)
                except ValueError:
                    failed_models.append(model_input)
            if failed_models:
                return [
                    TextContent(
                        type="text",
                        text=f"❌ Unknown model(s): {', '.join(failed_models)}. Use list_models to see available options.",
                    )
                ]

        # Fetch usage data
        try:
            usage_data = await registry.get_usage(
                start=start_str, end=end_str, endpoint_ids=endpoint_ids or None
            )
        except httpx.HTTPStatusError as e:
            logger.error(
                "Usage API returned HTTP %d: %s",
                e.response.status_code,
                e,
            )
            if e.response.status_code == 403:
                return [
                    TextContent(
                        type="text",
                        text="❌ Access denied. Your API key doesn't have permission to view usage data. Contact your workspace admin.",
                    )
                ]
            return [
                TextContent(
                    type="text",
                    text=f"❌ Usage API error (HTTP {e.response.status_code})",
                )
            ]
        except httpx.TimeoutException:
            return [
                TextContent(
                    type="text",
                    text="❌ Usage request timed out. Please try again.",
                )
            ]
        except httpx.ConnectError as e:
            logger.error("Cannot connect to usage API: %s", e)
            return [
                TextContent(
                    type="text",
                    text="❌ Cannot connect to Fal.ai API. Check your network connection.",
                )
            ]

        # Format output
        total_cost = usage_data.get("total_cost", 0)
        currency = usage_data.get("currency", "USD")
        breakdown = usage_data.get("breakdown", [])

        if currency == "USD":
            total_str = f"${total_cost:.2f}"
        else:
            total_str = f"{total_cost:.2f} {currency}"

        lines = [
            f"## Usage Report: {start_str} to {end_str}\n",
            f"**Total Cost**: {total_str}\n",
        ]

        if breakdown:
            lines.append("### Breakdown by Model\n")
            for item in breakdown:
                endpoint_id = item.get("endpoint_id", "Unknown")
                quantity = item.get("quantity", 0)
                cost = item.get("cost", 0)
                if currency == "USD":
                    cost_str = f"${cost:.2f}"
                else:
                    cost_str = f"{cost:.2f} {currency}"
                lines.append(f"- **{endpoint_id}**: {quantity} requests, {cost_str}")

        return [TextContent(type="text", text="\n".join(lines))]
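  • Illustration only, not part of the server source: a hypothetical smoke test that drives the handler with a stub registry to show the formatting path. FakeRegistry and its payload are made up, and handle_get_usage is assumed to be importable from the server module (its module path is not shown in this reference).
    import asyncio

    # Assumes handle_get_usage (and TextContent) are already imported from the
    # server package; the import line is omitted because the module path is not
    # documented here.

    class FakeRegistry:
        """Hypothetical stand-in for ModelRegistry, for illustration only."""

        async def resolve_model_id(self, model_input):
            # Pretend every alias is already a valid endpoint ID
            return model_input

        async def get_usage(self, start=None, end=None, endpoint_ids=None):
            # Made-up payload shaped like the keys the handler reads
            return {
                "total_cost": 1.25,
                "currency": "USD",
                "breakdown": [
                    {"endpoint_id": "fal-ai/flux/dev", "quantity": 42, "cost": 1.25},
                ],
            }

    async def main():
        result = await handle_get_usage(
            {"start": "2024-01-01", "end": "2024-01-07", "models": ["fal-ai/flux/dev"]},
            FakeRegistry(),
        )
        print(result[0].text)

    asyncio.run(main())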
  • The JSON schema defining the input parameters for the 'get_usage' tool, including optional start/end dates and model filters.
    Tool(
        name="get_usage",
        description="Get usage and spending history for your Fal.ai workspace. Shows quantity, cost, and breakdown by model. Requires admin API key.",
        inputSchema={
            "type": "object",
            "properties": {
                "start": {
                    "type": "string",
                    "description": "Start date (YYYY-MM-DD format). Defaults to 7 days ago.",
                },
                "end": {
                    "type": "string",
                    "description": "End date (YYYY-MM-DD format). Defaults to today.",
                },
                "models": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Filter by specific model IDs/aliases (optional)",
                },
            },
            "required": [],
        },
    ),
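  • Illustration only: the inputSchema above can be exercised against example arguments with the third-party jsonschema package. This is an assumption for demonstration; nothing in this reference shows the server using jsonschema itself.
    # Hypothetical check; requires `pip install jsonschema`
    from jsonschema import validate

    input_schema = {
        "type": "object",
        "properties": {
            "start": {"type": "string"},
            "end": {"type": "string"},
            "models": {"type": "array", "items": {"type": "string"}},
        },
        "required": [],
    }

    validate(instance={"models": ["fal-ai/flux/dev"]}, schema=input_schema)  # accepted
    validate(instance={}, schema=input_schema)  # accepted: every field is optional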
  • Registration of the 'get_usage' handler in the TOOL_HANDLERS dictionary, which maps tool names to their handler functions in the MCP server.
    TOOL_HANDLERS = {
        # Utility tools (no queue needed)
        "list_models": handle_list_models,
        "recommend_model": handle_recommend_model,
        "get_pricing": handle_get_pricing,
        "get_usage": handle_get_usage,
        "upload_file": handle_upload_file,
        # Image generation tools
        "generate_image": handle_generate_image,
        "generate_image_structured": handle_generate_image_structured,
        "generate_image_from_image": handle_generate_image_from_image,
        # Image editing tools
        "remove_background": handle_remove_background,
        "upscale_image": handle_upscale_image,
        "edit_image": handle_edit_image,
        "inpaint_image": handle_inpaint_image,
        "resize_image": handle_resize_image,
        "compose_images": handle_compose_images,
        # Video tools
        "generate_video": handle_generate_video,
        "generate_video_from_image": handle_generate_video_from_image,
        "generate_video_from_video": handle_generate_video_from_video,
        # Audio tools
        "generate_music": handle_generate_music,
    }
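  • Illustration only: a hypothetical dispatch sketch showing how such a mapping is typically consumed. This is not the server's actual call_tool handler, and it assumes every registered handler shares the (arguments, registry) signature, which this reference only confirms for handle_get_usage.
    from typing import Any, Dict, List

    # TextContent, ModelRegistry, and TOOL_HANDLERS are assumed to come from the
    # surrounding server module.
    async def dispatch(
        name: str,
        arguments: Dict[str, Any],
        registry: ModelRegistry,
    ) -> List[TextContent]:
        handler = TOOL_HANDLERS.get(name)
        if handler is None:
            raise ValueError(f"Unknown tool: {name}")
        return await handler(arguments, registry)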
  • Helper method in ModelRegistry that performs the actual HTTP request to the Fal.ai API to retrieve usage data.
    async def get_usage(
        self,
        start: Optional[str] = None,
        end: Optional[str] = None,
        endpoint_ids: Optional[List[str]] = None,
    ) -> Dict[str, Any]:
        """
        Fetch usage and spending history.

        Args:
            start: Start date (YYYY-MM-DD format)
            end: End date (YYYY-MM-DD format)
            endpoint_ids: Optional list of endpoint IDs to filter by

        Returns:
            Dict with "time_series" and "summary" usage data

        Raises:
            httpx.HTTPStatusError: If API request fails (e.g., 401 for non-admin key)
        """
        client = await self._get_http_client()

        # Build query params
        params: Dict[str, Any] = {"expand": "summary"}
        if start:
            params["start"] = start
        if end:
            params["end"] = end

        # Add endpoint_id filters if specified
        if endpoint_ids:
            # For multiple endpoint IDs, we need to make the request with repeated
            # params; httpx supports this with a list of tuples
            param_tuples: List[Tuple[str, Union[str, int, float, bool, None]]] = [
                ("expand", "summary")
            ]
            if start:
                param_tuples.append(("start", start))
            if end:
                param_tuples.append(("end", end))
            for eid in endpoint_ids:
                param_tuples.append(("endpoint_id", eid))
            response = await client.get("/models/usage", params=param_tuples)
        else:
            response = await client.get("/models/usage", params=params)

        response.raise_for_status()
        result: Dict[str, Any] = response.json()
        return result
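  • Illustration only: the repeated-parameter branch relies on httpx emitting one endpoint_id pair per tuple. A quick way to inspect the resulting query string locally (the dates and model IDs here are made up):
    import httpx

    param_tuples = [
        ("expand", "summary"),
        ("start", "2024-01-01"),
        ("end", "2024-01-07"),
        ("endpoint_id", "fal-ai/flux/dev"),
        ("endpoint_id", "fal-ai/recraft-v3"),
    ]
    # endpoint_id appears once per value in the encoded query string
    print(str(httpx.QueryParams(param_tuples)))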

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/raveenb/fal-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.