
controlled_action

Evaluate an action against runtime gate policies, execute if allowed, or initiate human approval workflow with optional spending limits.

Instructions

Run the high-level AVP controlled-action flow.

Returns `ControlledActionOutcome.to_dict()` as a JSON string. Possible
statuses are `executed`, `approval_required`, and `blocked`. Human approval
is never auto-approved; after approval, call `execute_after_approval`.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| action | Yes | Action name to evaluate and execute only if the Runtime Gate allows it | |
| resource | Yes | Target resource identifier bound into the Runtime Gate decision | |
| environment | Yes | Execution environment. Examples: development, staging, production | |
| delegation_receipt | Yes | DelegationReceipt JSON object string issued by the workflow owner/principal | |
| params | No | Optional execution params as a JSON object string | `{}` |
| amount | No | Optional monetary amount for spend-sensitive actions. Omit when not applicable | |
| currency | No | Optional ISO currency code for spend-sensitive actions. Example: USD | |
| approval_expires_in_seconds | No | Approval TTL in seconds when the Runtime Gate returns WAITING_FOR_HUMAN_APPROVAL | |
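As a sketch of how a caller might assemble this tool's arguments: the parameter names below come from the schema above, but every value (the action name, resource id, receipt fields) is hypothetical. Note that `delegation_receipt` and `params` are JSON object *strings*, not nested objects.

```python
import json

# Hypothetical argument set for the controlled_action tool.
# Parameter names follow the input schema; all values are made up.
arguments = {
    "action": "refund_customer",        # required: action to evaluate
    "resource": "order:12345",          # required: target resource id
    "environment": "production",        # required: execution environment
    "delegation_receipt": json.dumps({  # JSON object string, per the schema
        "principal": "workflow-owner@example.com",
        "scope": "refunds",
    }),
    "params": json.dumps({"reason": "damaged item"}),  # optional, defaults to "{}"
    "amount": 25.00,                    # optional, spend-sensitive actions only
    "currency": "USD",                  # optional ISO currency code
    "approval_expires_in_seconds": 3600,  # optional approval TTL
}

# Both string-encoded fields must parse back to JSON objects.
assert isinstance(json.loads(arguments["delegation_receipt"]), dict)
assert isinstance(json.loads(arguments["params"]), dict)
```

Omitting `amount` and `currency` is fine for actions that are not spend-sensitive.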

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |
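The result is `ControlledActionOutcome.to_dict()` serialized as a JSON string, so a caller must parse it and branch on the status. A minimal sketch, assuming the status lives in a `status` field of the parsed object (only the three status values themselves are documented):

```python
import json

def handle_outcome(result: str) -> str:
    """Branch on the documented statuses of a controlled_action result.

    `result` is the JSON string returned by the tool. Only the three
    documented status values are handled; anything else is an error.
    """
    outcome = json.loads(result)
    status = outcome.get("status")
    if status == "executed":
        # The Runtime Gate allowed the action and it ran.
        return "done"
    if status == "approval_required":
        # Human approval is never auto-approved: wait for the human,
        # then call the sibling tool execute_after_approval.
        return "await_human_then_execute_after_approval"
    if status == "blocked":
        # The Runtime Gate rejected the action outright.
        return "denied"
    raise ValueError(f"unexpected status: {status!r}")

# Hypothetical result string:
handle_outcome(json.dumps({"status": "approval_required"}))
```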
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No behavioral annotations are provided, but the description covers the return format (JSON string), the possible statuses, and the need for a follow-up call. It doesn't address rate limits or auth, but is sufficient for basic behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, front-loaded with purpose, concise, and free of wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given eight parameters, full schema coverage, and the presence of an output schema, the description explains the flow, the statuses, and the follow-up call. There is still room for more detail on the return format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds no parameter details beyond the schema, though it does provide overall context for delegation_receipt and params.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it runs the 'controlled-action flow,' specifies the return format and possible statuses, and distinguishes from siblings like execute_after_approval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance: human approval is never auto-approved, and after approval the agent must call execute_after_approval. This gives clear context for when to use this tool versus its sibling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/agentveil-protocol/avp-sdk'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.