prompt_shield
Detect prompt injection attempts, jailbreak techniques, and PII in LLM inputs to enhance security and prevent unauthorized access.
Instructions
Prompt injection / jailbreak / PII detection for LLM inputs
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| params | No | Free-form params object — passed as query string for GET, JSON body for POST |