add_guardrails_policy
Add a guardrails policy to enforce AI safety measures by configuring detectors for injection attacks, PII, NSFW content, toxicity, bias, and more. Define custom settings for each detector to ensure compliance with specified security and content guidelines.
Instructions
Add a new guardrails policy.
Args:
- policy_name: The name of the policy to add.
- policy_description: A description of the policy.
- detectors: Dictionary of detector configurations. Each key is the name of a detector, and each value is a dictionary of settings for that detector. Available detectors include injection attack, PII, NSFW content, toxicity, and bias detection.
Returns: A dictionary containing the response message and policy details.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| detectors | Yes | Dictionary of detector configurations; each key is a detector name and each value is that detector's settings dictionary. | |
| policy_description | Yes | A description of the policy. | |
| policy_name | Yes | The name of the policy to add. | |
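As a minimal sketch, the arguments described above can be assembled into the dictionary shape the tool expects. The helper function, the detector names (`pii`, `injection_attack`), and the setting keys (`action`, `enabled`) below are illustrative assumptions, not part of the tool's documented schema:

```python
def build_guardrails_policy_args(policy_name, policy_description, detectors):
    """Assemble the argument dict for an add_guardrails_policy call.

    Each value in `detectors` must itself be a dict of settings,
    matching the schema described above.
    """
    for name, settings in detectors.items():
        if not isinstance(settings, dict):
            raise TypeError(f"settings for detector {name!r} must be a dict")
    return {
        "policy_name": policy_name,
        "policy_description": policy_description,
        "detectors": detectors,
    }


# Example payload (detector names and settings are hypothetical):
args = build_guardrails_policy_args(
    policy_name="default-safety",
    policy_description="Blocks prompt injection and redacts PII",
    detectors={
        "pii": {"action": "redact"},
        "injection_attack": {"enabled": True},
    },
)
```

The validation step mirrors the schema's requirement that every detector entry maps a detector name to a settings dictionary, so malformed configurations fail before the request is sent.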