HistGradientBoostingClassifier MCP Server
Provides tools for interacting with scikit-learn's HistGradientBoostingClassifier, allowing for model creation, training, evaluation, and class predictions directly through MCP tools.
Features
This MCP server exposes the following tools:
create_classifier: Create a new HistGradientBoostingClassifier with custom parameters
train_model: Train a classifier on provided data
predict: Make class predictions on new data
predict_proba: Get class probabilities for predictions
score_model: Evaluate model accuracy on test data
get_model_info: Get detailed information about a model
list_models: List all available models
delete_model: Remove a model from memory
save_model: Serialize a model to base64 string
load_model: Load a model from serialized string
Installation
```bash
pip install -r requirements.txt
```

Local Development
Run the server locally:
```bash
uv run --with mcp server.py
```

The server will start on http://localhost:8000 by default.
Railway Deployment
Prerequisites
A Railway account (sign up at https://railway.app)
Railway CLI installed (optional, can use web interface)
Git repository with your code (GitHub, GitLab, or Bitbucket)
Deploy via Railway Web Interface
Go to https://railway.app and create a new project
Click "New Project" → "Deploy from GitHub repo"
Select your repository containing this MCP server
Railway will automatically detect the Python project and use the Procfile
The server will be deployed and you'll get a public URL (e.g., https://your-app.railway.app)
Deploy via Railway CLI
```bash
# Install Railway CLI
npm i -g @railway/cli

# Login to Railway
railway login

# Initialize project (in your project directory)
railway init

# Link to existing project or create new one
railway link

# Deploy
railway up
```

Environment Variables
No environment variables are required for basic operation. Railway automatically provides:
PORT: The port your application should listen on

The server automatically binds to 0.0.0.0 to accept external connections.
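A sketch of how a server can pick up Railway's PORT variable at startup (assuming a plain `os.environ` lookup; the actual startup code in server.py may differ):

```python
import os

# Railway injects PORT into the environment; fall back to 8000 for local runs
port = int(os.environ.get("PORT", "8000"))

# Bind all interfaces so Railway's edge proxy can reach the process
host = "0.0.0.0"

print(f"listening on {host}:{port}")
```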
Verifying Deployment
Once deployed, your MCP server will be available at your Railway URL. You can test it by:
Visiting https://your-app.railway.app in a browser (an MCP server info page or a 404 is normal)
Using the MCP Inspector: npx -y @modelcontextprotocol/inspector and connecting to your Railway URL
Connecting from an MCP client using the streamable-http transport
Current Deployment URL: https://web-production-a620a.up.railway.app
Usage
Once deployed, the MCP server will be accessible at your Railway URL. You can connect to it using any MCP-compatible client.
Example: Using with Claude Desktop
Add to your Claude Desktop MCP configuration (~/Library/Application Support/Claude/claude_desktop_config.json on Mac):
```json
{
  "mcpServers": {
    "histgradientboosting": {
      "url": "https://web-production-a620a.up.railway.app",
      "transport": "streamable-http"
    }
  }
}
```

Example API Calls
The server exposes tools that can be called via MCP protocol. Here's what each tool does:
Create a classifier:
```python
create_classifier(
    model_id="my_model",
    learning_rate=0.1,
    max_iter=100,
    max_leaf_nodes=31
)
```

Train the model:

```python
train_model(
    model_id="my_model",
    X=[[1, 2], [3, 4], [5, 6]],
    y=[0, 1, 0]
)
```

Make predictions:

```python
predict(
    model_id="my_model",
    X=[[2, 3], [4, 5]]
)
```

Get probabilities:

```python
predict_proba(
    model_id="my_model",
    X=[[2, 3], [4, 5]]
)
```

Model Storage
Currently, models are stored in-memory. This means:
Models persist only during the server's lifetime
Restarting the server will lose all models
For production use, consider implementing persistent storage (database, file system, or cloud storage)
API Reference
HistGradientBoostingClassifier Parameters
All standard sklearn HistGradientBoostingClassifier parameters are supported:
loss: Loss function (default: 'log_loss')
learning_rate: Learning rate/shrinkage (default: 0.1)
max_iter: Maximum boosting iterations (default: 100)
max_leaf_nodes: Maximum leaves per tree (default: 31)
max_depth: Maximum tree depth (default: None)
min_samples_leaf: Minimum samples per leaf (default: 20)
l2_regularization: L2 regularization (default: 0.0)
max_features: Feature subsampling proportion (default: 1.0)
max_bins: Maximum histogram bins (default: 255)
early_stopping: Enable early stopping (default: 'auto')
validation_fraction: Validation set fraction (default: 0.1)
n_iter_no_change: Early stopping patience (default: 10)
random_state: Random seed (default: None)
verbose: Verbosity level (default: 0)
See the sklearn documentation for detailed parameter descriptions.
License
MIT