get_framework_info
Retrieve details about the OpenXAI framework, including an overview, features, the research paper, installation, and quickstart guidance for evaluating AI explanation methods.
Instructions
Get information about OpenXAI framework
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| info_type | No | Type of information to retrieve: `overview`, `features`, `paper`, `installation`, or `quickstart` | `overview` (handler fallback) |
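
For reference, a `tools/call` request for this tool might look like the sketch below. The JSON-RPC framing and the request id are illustrative; only the `name` and the shape of `arguments` follow the schema above.

```javascript
// Illustrative MCP tools/call payload (id and framing are placeholders).
// info_type may be any of the enum values; omitting it returns the overview.
const callRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'get_framework_info',
    arguments: { info_type: 'installation' }
  }
};
```

The server replies with a single `text` content item containing the requested section.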
Implementation Reference
- index.js:847-1001 (handler): The handler function that executes the `get_framework_info` tool. It takes an `infoType` parameter and returns the corresponding detailed text about the OpenXAI framework (overview, features, paper, installation, or quickstart), wrapped in MCP content format (see the wiring sketch after this list for how it is dispatched).

```javascript
async getFrameworkInfo(infoType) {
  const info = {
    overview: `OpenXAI Framework Overview

OpenXAI is a comprehensive and extensible open-source framework for evaluating and benchmarking post hoc explanation methods. It provides:

**Evaluation Framework**: Systematic evaluation of explanation methods with 22+ quantitative metrics
**Datasets**: Collection of synthetic and real-world datasets with ground truth explanations
**Models**: Pre-trained models for various machine learning tasks
**Explainers**: Implementations of state-of-the-art explanation methods (LIME, SHAP, etc.)
**Leaderboards**: Public XAI leaderboards for transparent benchmarking
**Extensibility**: Easy integration of custom datasets, models, and explanation methods

Key Features:
- Model-agnostic explanation methods
- Ground truth faithfulness metrics
- Predicted faithfulness metrics
- Stability and robustness evaluation
- Fairness assessment across subgroups
- Comprehensive benchmarking pipeline`,

    features: `OpenXAI Key Features

**Explanation Methods**:
- LIME (Local Interpretable Model-agnostic Explanations)
- SHAP (SHapley Additive exPlanations)
- Integrated Gradients
- Grad-CAM
- Guided Backpropagation
- And more...

**Evaluation Metrics**:
- Faithfulness: PGI, PGU
- Stability: RIS, RRS, ROS
- Ground Truth: FA, RA, SA, SRA, RC, PRA
- Fairness: Subgroup analysis

**Datasets**:
- Synthetic datasets with ground truth
- Real-world datasets (German Credit, COMPAS, Adult Income)
- Tabular, image, and text data support

**Models**:
- Neural Networks (ANN)
- Logistic Regression
- Random Forest
- Support Vector Machine
- XGBoost

**Leaderboards**:
- Public benchmarking platform
- Transparent evaluation results
- Community-driven improvements`,

    paper: `OpenXAI Research Paper

Title: "OpenXAI: Towards a Transparent Evaluation of Model Explanations"

Authors: Chirag Agarwal, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju

Abstract: While several types of post hoc explanation methods have been proposed in recent literature, there is little to no work on systematically benchmarking these methods in an efficient and transparent manner. OpenXAI introduces a comprehensive framework for evaluating and benchmarking post hoc explanation methods with synthetic data generators, real-world datasets, pre-trained models, and quantitative metrics.

Paper: https://arxiv.org/abs/2206.11104
Website: https://open-xai.github.io/
GitHub: https://github.com/AI4LIFE-GROUP/OpenXAI

Citation:
@inproceedings{agarwal2022openxai,
  title={OpenXAI: Towards a Transparent Evaluation of Model Explanations},
  author={Agarwal, Chirag and Krishna, Satyapriya and Saxena, Eshika and others},
  booktitle={NeurIPS 2022 Datasets and Benchmarks Track},
  year={2022}
}`,

    installation: `OpenXAI Installation Guide

**Installation**:
\`\`\`bash
# Install from PyPI
pip install openxai

# Or install from source
git clone https://github.com/AI4LIFE-GROUP/OpenXAI.git
cd OpenXAI
pip install -e .
\`\`\`

**Requirements**:
- Python 3.7+
- PyTorch or TensorFlow (for neural network models)
- scikit-learn
- pandas
- numpy
- matplotlib

**Optional Dependencies**:
- For image explanations: Pillow, opencv-python
- For text explanations: transformers, torch-text
- For advanced visualizations: plotly, seaborn

**Verification**:
\`\`\`python
import openxai
print(openxai.__version__)
\`\`\``,

    quickstart: `OpenXAI Quickstart Guide

**Quick Start Example**:
\`\`\`python
from openxai.dataloader import ReturnLoaders
from openxai import LoadModel, Explainer, Evaluator

# 1. Load dataset
trainloader, testloader = ReturnLoaders(data_name='german', download=True)

# 2. Load pre-trained model
model = LoadModel(data_name='german', ml_model='ann', pretrained=True)

# 3. Generate explanations
explainer = Explainer(method='lime', model=model)
inputs, labels = next(iter(testloader))
explanations = explainer.get_explanations(inputs)

# 4. Evaluate explanations
evaluator = Evaluator(model, metric='PGI')
score = evaluator.evaluate(inputs=inputs, labels=labels, explanations=explanations)
print(f"PGI Score: {score}")
\`\`\`

**Common Workflows**:
1. **Benchmarking**: Compare multiple explanation methods
2. **Evaluation**: Assess explanation quality with metrics
3. **Leaderboards**: Submit results to public benchmarks
4. **Research**: Develop new explanation methods

**Next Steps**:
- Explore different datasets and models
- Try various explanation methods
- Evaluate with different metrics
- Contribute to leaderboards`
  };

  return {
    content: [
      {
        type: 'text',
        text: info[infoType] || info.overview
      }
    ]
  };
}
```
- index.js:220-229 (schema): The input schema for the `get_framework_info` tool, defining an optional `info_type` parameter with allowed enum values.

```javascript
inputSchema: {
  type: 'object',
  properties: {
    info_type: {
      type: 'string',
      description: 'Type of information to retrieve',
      enum: ['overview', 'features', 'paper', 'installation', 'quickstart']
    }
  },
  required: []
}
```
- index.js:217-231 (registration): Registration of the `get_framework_info` tool in the ListTools handler response, including its name, description, and input schema.

```javascript
{
  name: 'get_framework_info',
  description: 'Get information about OpenXAI framework',
  inputSchema: {
    type: 'object',
    properties: {
      info_type: {
        type: 'string',
        description: 'Type of information to retrieve',
        enum: ['overview', 'features', 'paper', 'installation', 'quickstart']
      }
    },
    required: []
  }
},
```
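
For context, the sketch below shows how the registration entry and handler above typically plug into an MCP server built on `@modelcontextprotocol/sdk`. This is a minimal sketch under assumed wiring: the server name, version, and transport are illustrative and not taken from index.js (which also registers other tools, omitted here); only the tool entry and the call to `getFrameworkInfo` correspond to the snippets above.

```javascript
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  CallToolRequestSchema,
  ListToolsRequestSchema
} from '@modelcontextprotocol/sdk/types.js';

// Illustrative server metadata (not copied from index.js).
const server = new Server(
  { name: 'openxai-mcp', version: '0.1.0' },
  { capabilities: { tools: {} } }
);

// ListTools: advertise the tool using the registration object shown above.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: 'get_framework_info',
      description: 'Get information about OpenXAI framework',
      inputSchema: {
        type: 'object',
        properties: {
          info_type: {
            type: 'string',
            description: 'Type of information to retrieve',
            enum: ['overview', 'features', 'paper', 'installation', 'quickstart']
          }
        },
        required: []
      }
    }
    // ...other tools registered by the server are omitted in this sketch
  ]
}));

// CallTool: route tools/call requests to the handler shown above.
// getFrameworkInfo is the index.js:847-1001 handler (shown here as a free
// function for brevity); it falls back to the overview text when info_type
// is missing or unrecognized.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  if (name === 'get_framework_info') {
    return getFrameworkInfo(args?.info_type);
  }
  throw new Error(`Unknown tool: ${name}`);
});

await server.connect(new StdioServerTransport());
```

A client that sends the `tools/call` request shown earlier would then receive the handler's `{ content: [{ type: 'text', ... }] }` response.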