list_explainers
Discover available AI explanation methods like LIME, SHAP, and Grad-CAM to understand model predictions. Filter by method type to find suitable interpretability tools.
Instructions
List available explanation methods in OpenXAI
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| method_type | No | Filter by explanation method type (`lime`, `shap`, `integrated_gradients`, `gradcam`, or `all`) | `all` |
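For illustration, the argument objects a caller would pass might look like the following (shapes only; the surrounding MCP request envelope is omitted and the variable names are hypothetical):

```javascript
// Hypothetical argument objects for a list_explainers call (shape only;
// the surrounding MCP request envelope is omitted).
const filtered = { method_type: 'shap' }; // request a single method
const everything = {};                    // method_type is optional; dispatch falls back to 'all'

// Accepted values come from the schema's enum:
const allowed = ['lime', 'shap', 'integrated_gradients', 'gradcam', 'all'];
console.log(allowed.includes(filtered.method_type)); // true
```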
Implementation Reference
- index.js:564-625 (handler) — the main execution function for the `list_explainers` tool. Filters and returns the available explanation methods (LIME, SHAP, etc.) with their details based on the `method_type` parameter.

```javascript
async listExplainers(methodType) {
  const explainers = {
    lime: {
      name: 'LIME (Local Interpretable Model-agnostic Explanations)',
      description: 'Local explanations by approximating the model locally with an interpretable model',
      supported_data_types: ['tabular', 'image', 'text'],
      explanation_type: 'local',
      model_agnostic: true
    },
    shap: {
      name: 'SHAP (SHapley Additive exPlanations)',
      description: 'Feature attribution based on cooperative game theory',
      supported_data_types: ['tabular', 'image', 'text'],
      explanation_type: 'local',
      model_agnostic: true
    },
    integrated_gradients: {
      name: 'Integrated Gradients',
      description: 'Attribution method based on gradients integrated along a path',
      supported_data_types: ['tabular', 'image', 'text'],
      explanation_type: 'local',
      model_agnostic: false,
      requires: 'PyTorch or TensorFlow model'
    },
    gradcam: {
      name: 'Grad-CAM (Gradient-weighted Class Activation Mapping)',
      description: 'Visual explanations for CNN models using gradients',
      supported_data_types: ['image'],
      explanation_type: 'local',
      model_agnostic: false,
      requires: 'CNN model'
    },
    guided_backprop: {
      name: 'Guided Backpropagation',
      description: 'Modified backpropagation for generating visual explanations',
      supported_data_types: ['image'],
      explanation_type: 'local',
      model_agnostic: false,
      requires: 'Neural network model'
    }
  };

  let result = [];
  if (methodType === 'all') {
    result = Object.entries(explainers).map(([key, value]) => ({ method: key, ...value }));
  } else {
    result = explainers[methodType]
      ? [{ method: methodType, ...explainers[methodType] }]
      : [];
  }

  return {
    content: [
      {
        type: 'text',
        text: `Available OpenXAI explanation methods:\n\n` + JSON.stringify(result, null, 2)
      }
    ]
  };
}
```
- index.js:118-128 (schema) — JSON schema defining the input parameters for the `list_explainers` tool, including the optional `method_type` filter. Note that `guided_backprop`, defined in the handler, is absent from this enum, so it is only reachable via `all`.

```javascript
inputSchema: {
  type: 'object',
  properties: {
    method_type: {
      type: 'string',
      description: 'Filter by explanation method type',
      enum: ['lime', 'shap', 'integrated_gradients', 'gradcam', 'all']
    }
  },
  required: []
}
```
- index.js:115-129 (registration) — registration of the `list_explainers` tool in the ListToolsRequestHandler response, providing name, description, and schema.

```javascript
{
  name: 'list_explainers',
  description: 'List available explanation methods in OpenXAI',
  inputSchema: {
    type: 'object',
    properties: {
      method_type: {
        type: 'string',
        description: 'Filter by explanation method type',
        enum: ['lime', 'shap', 'integrated_gradients', 'gradcam', 'all']
      }
    },
    required: []
  }
},
```
- index.js:267-268 (registration) — dispatch case in the CallToolRequestHandler switch statement, calling the `listExplainers` handler; `method_type` defaults to `'all'` when omitted.

```javascript
case 'list_explainers':
  return await this.listExplainers(args.method_type || 'all');
```
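The filtering behaviour of the handler can be exercised in isolation with a minimal standalone sketch (the explainer table is abbreviated here, and the MCP response envelope is dropped; this is not the actual server wiring):

```javascript
// Minimal sketch of the list_explainers filtering logic, with an
// abbreviated explainer table (the real handler carries more fields).
const explainers = {
  lime: { name: 'LIME (Local Interpretable Model-agnostic Explanations)', model_agnostic: true },
  shap: { name: 'SHAP (SHapley Additive exPlanations)', model_agnostic: true },
  gradcam: { name: 'Grad-CAM (Gradient-weighted Class Activation Mapping)', model_agnostic: false }
};

function listExplainers(methodType = 'all') {
  if (methodType === 'all') {
    // Flatten every entry into an array, keyed by method id.
    return Object.entries(explainers).map(([key, value]) => ({ method: key, ...value }));
  }
  // Unknown types yield an empty list rather than an error.
  return explainers[methodType] ? [{ method: methodType, ...explainers[methodType] }] : [];
}

console.log(listExplainers('shap').length);    // 1
console.log(listExplainers('all').length);     // 3
console.log(listExplainers('unknown').length); // 0
```

Returning an empty array for unknown types mirrors the handler above, which likewise never throws on an unrecognized `method_type`.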