cost_guard_check
Check if an AI model call stays within budget limits by analyzing estimated token usage before execution.
Instructions
Pre-check if a model call is within budget. Returns safe/blocked status.
Args:
- `model`: Model identifier (e.g. "anthropic/claude-haiku-4-5-20251001", "openai/gpt-4o").
- `estimated_input_tokens`: Expected input token count.
- `estimated_output_tokens`: Expected output token count.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| model | Yes | Model identifier (e.g. "anthropic/claude-haiku-4-5-20251001", "openai/gpt-4o"). | |
| estimated_input_tokens | No | Expected input token count. | 1000 |
| estimated_output_tokens | No | Expected output token count. | 500 |
Implementation Reference
- src/agent_safety_mcp/server.py:131-162 (handler)

The `cost_guard_check` function is decorated with `@mcp.tool()` and acts as the handler: it pre-checks whether a model call fits within the configured budget using the ai_cost_guard library.
```python
def cost_guard_check(
    model: str,
    estimated_input_tokens: int = 1000,
    estimated_output_tokens: int = 500,
) -> dict:
    """Pre-check if a model call is within budget. Returns safe/blocked status.

    Args:
        model: Model identifier (e.g. "anthropic/claude-haiku-4-5-20251001",
            "openai/gpt-4o").
        estimated_input_tokens: Expected input token count.
        estimated_output_tokens: Expected output token count.
    """
    guard = _get_guard()
    try:
        guard.check_budget(model, estimated_input_tokens, estimated_output_tokens)
        status = guard.status()
        return {
            "allowed": True,
            "model": model,
            "estimated_cost_usd": round(
                estimated_input_tokens * PROVIDERS.get(model, {}).get("input", 0)
                + estimated_output_tokens * PROVIDERS.get(model, {}).get("output", 0),
                6,
            ),
            "remaining_usd": status.get("remaining_usd"),
        }
    except BudgetExceededError as e:
        return {
            "allowed": False,
            "model": model,
            "reason": str(e),
        }
```
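To illustrate the pre-check logic in isolation, here is a self-contained sketch that mimics the handler's behavior with a hypothetical per-token price table and a simple in-memory budget. The `PRICES` rates and `BUDGET_USD` value are illustrative assumptions, not real pricing; the actual tool delegates budget tracking to the ai_cost_guard library via `_get_guard()`.

```python
# Hypothetical USD-per-token rates -- placeholder values, not real pricing.
PRICES = {
    "openai/gpt-4o": {"input": 2.5e-6, "output": 1.0e-5},
}

BUDGET_USD = 1.00  # hypothetical remaining budget for this sketch
_spent_usd = 0.0


class BudgetExceededError(Exception):
    """Raised when an estimated call would exceed the remaining budget."""


def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    # Same shape as the handler's formula: price-per-token times token count,
    # rounded to 6 decimal places; unknown models default to a zero rate.
    rates = PRICES.get(model, {})
    return round(
        input_tokens * rates.get("input", 0)
        + output_tokens * rates.get("output", 0),
        6,
    )


def cost_guard_check(
    model: str,
    estimated_input_tokens: int = 1000,
    estimated_output_tokens: int = 500,
) -> dict:
    """Return an allowed/blocked dict, mirroring the tool's response shape."""
    cost = estimate_cost(model, estimated_input_tokens, estimated_output_tokens)
    remaining = BUDGET_USD - _spent_usd
    if cost > remaining:
        return {
            "allowed": False,
            "model": model,
            "reason": f"estimated ${cost} exceeds remaining ${remaining}",
        }
    return {
        "allowed": True,
        "model": model,
        "estimated_cost_usd": cost,
        "remaining_usd": round(remaining, 6),
    }
```

With the default token estimates (1000 input, 500 output), a call for `"openai/gpt-4o"` under these assumed rates costs 0.0075 USD and is allowed; a much larger estimate that exceeds the budget returns the blocked shape with a `reason` string instead of cost fields.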