---
description: >-
  Guardrails is an open-source Python framework for adding programmable
  input/output validators to LLM applications, ensuring safe, structured, and
  compliant model interactions.
---
# Guardrails AI
<table data-card-size="large" data-view="cards"><thead><tr><th></th><th data-hidden data-card-cover data-type="files"></th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td><a href="guardrails-ai-tracing.md"><strong>Guardrails AI Tracing</strong></a></td><td><a href="../../.gitbook/assets/guardrails.avif">guardrails.avif</a></td><td><a href="guardrails-ai-tracing.md">guardrails-ai-tracing.md</a></td></tr></tbody></table>
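The validator pattern described above can be sketched in plain Python. This is a conceptual illustration of what Guardrails does, not its actual API: a validator runs on the prompt before the LLM call and on the response after it, rejecting anything that fails the check. The `no_pii` and `guarded_call` names and the `call_llm` stand-in are illustrative assumptions.

```python
import re

def no_pii(text: str) -> str:
    """Toy validator: reject text containing an email address (stand-in for a PII check)."""
    if re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text):
        raise ValueError("validation failed: PII detected")
    return text

def guarded_call(prompt: str, call_llm) -> str:
    """Run the input validator, call the model, then run the output validator."""
    no_pii(prompt)                 # input guard
    response = call_llm(prompt)    # any LLM call goes here
    return no_pii(response)        # output guard

# Usage with a fake model call standing in for an LLM:
result = guarded_call("Summarize this report.", lambda p: "Summary: all clear.")
```

In the real framework, validators are composable objects attached to a `Guard`, and failures can trigger re-asks or fixes rather than a hard error; the tracing integration covered above records each of these validation steps.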
### Featured Tutorials
<table data-view="cards"><thead><tr><th></th><th data-hidden data-card-cover data-type="files"></th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td><a href="https://www.youtube.com/watch?v=gXdBwgVZ_Ho">Combining Guardrails & Phoenix Walkthrough</a></td><td><a href="../../.gitbook/assets/Tutorials.jpg">Tutorials.jpg</a></td><td><a href="https://www.youtube.com/watch?v=gXdBwgVZ_Ho">https://www.youtube.com/watch?v=gXdBwgVZ_Ho</a></td></tr></tbody></table>