OpenXAI MCP Server
A Model Context Protocol (MCP) server for OpenXAI, providing comprehensive tools for evaluating and benchmarking post hoc explanation methods in AI models.
Overview
OpenXAI is a general-purpose lightweight library that provides a comprehensive list of functions to systematically evaluate the reliability of post hoc explanation methods. This MCP server exposes OpenXAI's functionality through a standard interface that can be used with AI assistants and other MCP-compatible applications.
Features
🔍 Explanation Methods
- LIME (Local Interpretable Model-agnostic Explanations)
- SHAP (SHapley Additive exPlanations)
- Integrated Gradients
- Grad-CAM
- Guided Backpropagation
📊 Evaluation Metrics
- Faithfulness: PGI, PGU (prediction gaps on important/unimportant features)
- Stability: RIS, RRS, ROS (relative input, representation, and output stability)
- Ground Truth: FA, RA, SA, SRA, RC, PRA (agreement with ground-truth explanations)
- Fairness: Subgroup analysis of the metrics above
🗂️ Datasets
- Synthetic datasets with ground truth explanations
- Real-world datasets (German Credit, COMPAS, Adult Income)
- Support for tabular, image, and text data
🤖 Pre-trained Models
- Neural Networks (ANN)
- Logistic Regression
- Random Forest
- Support Vector Machine
- XGBoost
🏆 Leaderboards
- Access to public XAI benchmarking results
- Transparent evaluation and comparison
Installation
Prerequisites
- Node.js 18+
- npm or yarn
- Python 3.7+ (for OpenXAI functionality)
Install the MCP Server
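A setup sketch, assuming installation from a local checkout (the clone URL is a placeholder; use the actual repository location):

```shell
# Clone URL is a placeholder; substitute the real repository
git clone https://github.com/<owner>/openxai-mcp.git
cd openxai-mcp
npm install

# OpenXAI itself is a Python library; install it per the OpenXAI repo's instructions
pip install openxai
```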
Configure with Cursor
Add the following to your Cursor settings (~/.cursor/mcp.json):
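A configuration sketch following Cursor's mcpServers convention; the entry-point path is an assumption, so point it at wherever you installed the server:

```json
{
  "mcpServers": {
    "openxai": {
      "command": "node",
      "args": ["/path/to/openxai-mcp/index.js"]
    }
  }
}
```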
Available Tools
1. Dataset Management
list_datasets
List available datasets in the OpenXAI framework.
Parameters:
- category (optional): Filter by dataset category (synthetic, real-world, tabular, image, text, all)
Example:
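An illustrative request (the tool/arguments wrapper reflects common MCP client conventions; values are examples):

```json
{
  "tool": "list_datasets",
  "arguments": { "category": "real-world" }
}
```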
load_dataset
Load a specific dataset from OpenXAI.
Parameters:
dataset_name
: Name of the dataset (e.g.,german
,compas
,adult
)download
(optional): Whether to download if not available locally
Example:
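An illustrative request (argument values are examples):

```json
{
  "tool": "load_dataset",
  "arguments": { "dataset_name": "german", "download": true }
}
```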
2. Model Management
list_models
List available pre-trained models in OpenXAI.
Parameters:
- dataset_name (optional): Filter models by dataset
- model_type (optional): Filter by model type (ann, lr, rf, svm, xgb, all)
Example:
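An illustrative request (argument values are examples):

```json
{
  "tool": "list_models",
  "arguments": { "dataset_name": "compas", "model_type": "ann" }
}
```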
load_model
Load a pre-trained model from OpenXAI.
Parameters:
- data_name: Name of the dataset the model was trained on
- ml_model: Type of ML model (ann, lr, rf, svm, xgb)
- pretrained (optional): Whether to load a pretrained model
Example:
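An illustrative request (argument values are examples):

```json
{
  "tool": "load_model",
  "arguments": { "data_name": "german", "ml_model": "ann", "pretrained": true }
}
```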
3. Explanation Methods
list_explainers
List available explanation methods in OpenXAI.
Parameters:
- method_type (optional): Filter by method type (lime, shap, integrated_gradients, gradcam, all)
Example:
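An illustrative request (argument values are examples):

```json
{
  "tool": "list_explainers",
  "arguments": { "method_type": "all" }
}
```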
generate_explanation
Generate explanations for model predictions.
Parameters:
- method: Explanation method (lime, shap, integrated_gradients, etc.)
- data_sample: JSON string of the input data to explain
- model_info: Model information object
Example:
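An illustrative request; the feature vector and model_info fields are examples:

```json
{
  "tool": "generate_explanation",
  "arguments": {
    "method": "lime",
    "data_sample": "[0.5, 1.2, 0.0]",
    "model_info": { "data_name": "german", "ml_model": "ann" }
  }
}
```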
4. Evaluation Metrics
list_metrics
List available evaluation metrics in OpenXAI.
Parameters:
- metric_type (optional): Filter by metric type (faithfulness, stability, fairness, all)
Example:
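An illustrative request (argument values are examples):

```json
{
  "tool": "list_metrics",
  "arguments": { "metric_type": "faithfulness" }
}
```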
evaluate_explanation
Evaluate explanation quality using OpenXAI metrics.
Parameters:
- metric: Evaluation metric (PGI, PGU, RIS, etc.)
- explanation: JSON string of the explanation to evaluate
- model_info: Model information object
Example:
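An illustrative request; the attribution values are examples:

```json
{
  "tool": "evaluate_explanation",
  "arguments": {
    "metric": "PGI",
    "explanation": "[0.3, 0.6, 0.1]",
    "model_info": { "data_name": "german", "ml_model": "ann" }
  }
}
```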
5. Leaderboards
get_leaderboard
Get leaderboard results for explanation methods.
Parameters:
- dataset (optional): Dataset name
- metric (optional): Metric to sort by
Example:
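An illustrative request (argument values are examples):

```json
{
  "tool": "get_leaderboard",
  "arguments": { "dataset": "german", "metric": "PGI" }
}
```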
6. Framework Information
get_framework_info
Get information about the OpenXAI framework.
Parameters:
- info_type (optional): Type of information (overview, features, paper, installation, quickstart)
Example:
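An illustrative request (argument values are examples):

```json
{
  "tool": "get_framework_info",
  "arguments": { "info_type": "quickstart" }
}
```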
Model Deployment Guide
🚀 Deployment Options
OpenXAI supports multiple deployment options to suit different needs and budgets:
1. Xnode (Recommended for Beginners)
- ✅ Decentralized: True decentralized deployment
- ✅ Web3 Ready: Built for blockchain integration
- ✅ No KYC: Quick setup without identity verification
- 💰 Cost: Free tier available
- 🔧 Setup: One-click deployment
Quick Start:
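The Xnode deployment API is not documented here, so the following curl sketch is purely illustrative; the endpoint and payload are hypothetical:

```shell
# Hypothetical endpoint and payload, shown only to illustrate the shape of a deploy call
curl -X POST https://studio.openxai.org/api/deploy \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-r1", "size": "7b", "target": "xnode"}'
```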
2. Xnode DVM (Advanced)
- ❌ Centralized: Traditional cloud deployment
- ✅ Web3 Ready: Crypto payment integration
- ✅ No KYC: Anonymous deployment
- 💰 Cost: 500 OPNX tokens
- 🔧 Performance: Higher compute resources
3. Vultr (Washington)
- ❌ Centralized: Traditional cloud provider
- ✅ Web3 Ready: Cryptocurrency payments accepted
- ✅ No KYC: Minimal verification required
- 💰 Cost: $655/month
- 🌍 Location: Washington DC, USA
4. AWS EC2 (Hong Kong)
- ❌ Centralized: Amazon Web Services
- ✅ Web3 Ready: Supports Web3 applications
- ✅ No KYC: Standard AWS verification
- 💰 Cost: $1,321/month
- 🌍 Location: Hong Kong
5. Google Cloud (NYC)
- ❌ Centralized: Google Cloud Platform
- ✅ Web3 Ready: Web3 compatible
- ✅ No KYC: Google account required
- 💰 Cost: $1,745/month
- 🌍 Location: New York City
6. Xnode One (Hardware) - Coming Soon
- ✅ Decentralized: Physical hardware nodes
- ✅ Web3 Ready: Native Web3 integration
- ✅ No KYC: Completely anonymous
- 💰 Cost: $0/month (hardware purchase required)
- 🔧 Control: Full hardware control
🔗 OpenXAI Studio Integration
Quick OpenXAI Studio Deployment
Deploy your models using OpenXAI Studio's decentralized platform:
Available Models in OpenXAI Studio
- DeepSeek R1 - Advanced reasoning model
- Code Llama - Meta's code generation model
- Gemma 2 - Google's open model family
- Llama 3.2 Vision - 90B parameter vision model
- Embedding Models - For text embeddings
- Code Models - Specialized for code generation
Deployment Process
- Visit the OpenXAI Studio App Store: https://studio.openxai.org/app-store
- Connect Wallet: Web3 wallet connection for decentralized access
- Browse App Store: Explore models in categories (General, Vision, Embedding, Code)
- Select Model: Choose from popular models:
- DeepSeek R1 (1.5b, 7b, 8b, 14b, 32b, 70b, 671b)
- Code Llama (7b, 13b, 34b, 70b)
- Qwen 2.5 (0.5b, 1.5b, 3b, 7b, 14b, 32b, 72b)
- Llama 3.2 Vision (11b, 90b)
- Gemma 2 (2b, 9b, 27b)
- And many more...
- Choose Parameters: Select model size based on your needs
- Select Deployment Type: Choose Xnode or other deployment options
- Deploy: Hit deployment button (2-5 minutes)
- Access Deployments: Go to the /deployments section
- Login: Use the provided credentials to access your deployed model
🎯 Step-by-Step Deployment
Option 1: Interactive Deployment Wizard
Option 2: Manual Configuration
- Choose Your Provider
- Configure Model Settings
- Set Up Authentication
- Deploy and Test
🔐 Authentication & Access
User Login Flow
Similar to Hugging Face, users can easily access deployed models:
- Visit Your Model Interface
- Login Options:
  - Web3 Wallet: Connect with MetaMask or WalletConnect
  - Traditional: Email/password or OAuth
  - API Key: For programmatic access
- Model Access:
  - Interactive web interface
  - API endpoints
  - SDK integration
Quick Access Example
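A hedged sketch of calling a deployed model over HTTP; the host, path, and payload are hypothetical and depend on the interface your deployment exposes:

```shell
# <your-deployment>, <api-key>, and the /api/generate path are placeholders
curl -X POST https://<your-deployment>/api/generate \
  -H "Authorization: Bearer <api-key>" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Explain the prediction"}'
```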
📊 Deployment Monitoring
Real-time Metrics
Monitor your deployed models:
Cost Optimization
🔄 Switching Between Deployments
Easily switch between different deployment providers:
Usage Examples
Basic Dataset and Model Loading
Explanation Generation Workflow
Benchmarking Comparison
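The workflows above can be sketched as one sequence of MCP tool calls. A minimal Python sketch, assuming the standard JSON-RPC tools/call framing used by MCP clients; tool names come from the list above, and all argument values are illustrative:

```python
import json

def tool_call(call_id, name, arguments):
    """Build a JSON-RPC 2.0 tools/call request for an MCP server."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Dataset -> model -> explanation -> evaluation, using the tool names above.
workflow = [
    tool_call(1, "load_dataset", {"dataset_name": "german", "download": True}),
    tool_call(2, "load_model", {"data_name": "german", "ml_model": "ann", "pretrained": True}),
    tool_call(3, "generate_explanation", {
        "method": "lime",
        "data_sample": json.dumps([0.5, 1.2, 0.0]),  # illustrative feature vector
        "model_info": {"data_name": "german", "ml_model": "ann"},
    }),
    tool_call(4, "evaluate_explanation", {
        "metric": "PGI",
        "explanation": json.dumps([0.3, 0.6, 0.1]),  # illustrative attributions
        "model_info": {"data_name": "german", "ml_model": "ann"},
    }),
]

for request in workflow:
    print(json.dumps(request, indent=2))
```

An MCP client would send each request over stdio and read the matching result before issuing the next call.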
Deployment Workflow
OpenXAI Studio Integration Guide
When a user wants to deploy a model, here's the complete process:
🚀 Quick Start Guide
- Visit the App Store: https://studio.openxai.org/app-store
- Connect Wallet: Click "Connect Wallet" button
- Browse Models: Explore categories:
- General: qwen, deepseek-r1, llama models
- Vision: llama-3.2-vision, qwen2-vl
- Embedding: text-embedding models
- Code: codellama, qwen2.5-coder
- Select Model: Click on your preferred model
- Choose Parameters: Select size (1.5b, 7b, 32b, etc.)
- Configure Deployment: Choose Xnode (decentralized) or other options
- Deploy: Click deploy button
- Access: Go to /deployments and use your credentials
🔧 Using This MCP
Our MCP helps you prepare for OpenXAI Studio deployment:
Development
Running the Server
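A minimal sketch, assuming a start script is defined in package.json (the entry-point path is an assumption):

```shell
npm start

# or, during development, run the entry point directly
node index.js
```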
Project Structure
OpenXAI Framework
This MCP server is built on top of the OpenXAI framework:
- Website: https://open-xai.github.io/
- GitHub: https://github.com/AI4LIFE-GROUP/OpenXAI
- Paper: https://arxiv.org/abs/2206.11104
Key OpenXAI Components
- Data Loaders: Load datasets with train/test splits
- Model Loading: Access pre-trained models
- Explainers: Generate explanations using various methods
- Evaluators: Assess explanation quality
- Leaderboards: Compare method performance
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Development Setup
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Make your changes
- Run tests (npm test)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
License
This project is licensed under the MIT License - see the LICENSE file for details.
Citation
If you use OpenXAI in your research, please cite:
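One plausible BibTeX entry, based on the arXiv listing above; verify the details against https://arxiv.org/abs/2206.11104 before use:

```bibtex
@inproceedings{agarwal2022openxai,
  title     = {Open{XAI}: Towards a Transparent Evaluation of Model Explanations},
  author    = {Agarwal, Chirag and Krishna, Satyapriya and Saxena, Eshika and Pawelczyk, Martin and Johnson, Nari and Puri, Isha and Zitnik, Marinka and Lakkaraju, Himabindu},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track},
  year      = {2022}
}
```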
Support
For issues and questions:
- Create an issue on GitHub
- Check the OpenXAI documentation
- Contact the OpenXAI team at openxaibench@gmail.com
Acknowledgments
- OpenXAI team for the excellent framework
- Model Context Protocol for the standard interface
- All contributors to the explainable AI community