guardrails_detect
Identify sensitive content in text by configuring detectors for injection attacks, PII, NSFW, toxicity, and policy violations. Provides safety assessments for compliance and risk mitigation.
Instructions
Detect sensitive content using Guardrails.
Args:
- ctx: The context object containing the request context.
- text: The text to scan for sensitive content.
- detectors_config: Dictionary of detector configurations. Each key is the name of a detector, and the value is a dictionary of settings for that detector. Available detectors and their configurations are as follows:
Returns: A dictionary containing the detection results with safety assessments.
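The argument shape above can be sketched in Python. This is a minimal illustration, not the Guardrails client itself: the detector names (`"pii"`, `"toxicity"`) and their setting keys are assumptions for the example, and `build_detect_args` is a hypothetical helper that only validates the two required fields.

```python
def build_detect_args(text: str, detectors_config: dict) -> dict:
    """Assemble the guardrails_detect payload; both fields are required."""
    if not text:
        raise ValueError("text is required")
    if not detectors_config:
        raise ValueError("detectors_config is required")
    return {"text": text, "detectors_config": detectors_config}

# Each detectors_config key names a detector; each value is that
# detector's settings dict (names here are illustrative only).
args = build_detect_args(
    "My email is user@example.com",
    {
        "pii": {"entities": ["EMAIL"]},
        "toxicity": {"threshold": 0.8},
    },
)
```

Calling the tool with an empty `text` or `detectors_config` would fail validation, since both inputs are marked required in the schema below.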
Input Schema
Name | Required | Description | Default
---|---|---|---
detectors_config | Yes | | 
text | Yes | | 