evaluate_explanation
Assess AI explanation quality with OpenXAI metrics to validate the accuracy and reliability of explanations for machine learning models. Supported metrics include PGI (Prediction Gap on Important features), PGU (Prediction Gap on Unimportant features), RIS (Relative Input Stability), and more.
Instructions
Evaluate explanation quality using OpenXAI metrics
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| explanation | Yes | JSON string of the explanation to evaluate | |
| metric | Yes | Evaluation metric to use (PGI, PGU, RIS, RRS, ROS, etc.) | |
| model_info | Yes | Information about the model | |
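A sketch of what a call payload might look like, given the schema above. The field contents are hypothetical: the exact structure expected inside the `explanation` and `model_info` JSON strings is not specified here, so the feature names and model fields below are illustrative assumptions only.

```json
{
  "explanation": "{\"feature_importances\": {\"age\": 0.42, \"income\": 0.31, \"balance\": -0.12}}",
  "metric": "PGI",
  "model_info": "{\"type\": \"sklearn\", \"task\": \"binary_classification\"}"
}
```

Note that `explanation` and `model_info` are JSON strings (i.e., serialized JSON embedded in the payload), not nested objects, per the schema.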