# Revenue Intelligence MCP Server
A production-ready MCP server demonstrating ML system integration patterns for customer-facing business teams at scale. This server simulates a real-world ML-powered revenue intelligence platform, showcasing how to build observable, maintainable ML systems integrated with business workflows.
## Business Context
Modern revenue teams (Sales, Customer Success, Marketing) need real-time ML insights to prioritize leads, prevent churn, and maximize conversions. This server demonstrates how to build production ML systems that:
- Integrate with business workflows via MCP resources, tools, and prompts
- Provide explainable predictions with feature attribution
- Enable monitoring and observability through prediction logging
- Support production ML patterns like versioning, drift detection, and health checks
This is the type of system you'd find powering revenue operations at high-growth SaaS companies, integrated with tools like Salesforce, HubSpot, or custom CRMs.
## Architecture Overview

**Key Components:**

- **MCP Server** (`server.py`) - Exposes resources, tools, and prompts via the MCP protocol
- **Scoring Engine** (`scoring.py`) - ML prediction logic with feature attribution
- **Data Store** (`data_store.py`) - In-memory data access layer (simulates a DB/warehouse)
- **Configuration** (`config.py`) - Model parameters, thresholds, feature weights
- **Mock Data** (`mock_data.py`) - 20 accounts, 30 leads with realistic signals
## Production ML Patterns Demonstrated
This server showcases essential production ML engineering patterns:
### 1. Model Versioning & Metadata Tracking

- Explicit model version (`v1.2.3`) stamped on every prediction
- Training date and performance metrics tracked
- Feature importance documented and accessible via an MCP resource
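
A minimal sketch of the pattern (names are illustrative, not the actual `scoring.py` API): every prediction object carries the model version and a timestamp, so logged predictions can always be traced back to a specific model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

MODEL_VERSION = "v1.2.3"  # mirrors the value documented in config.py

@dataclass
class Prediction:
    score: float
    tier: str
    model_version: str = MODEL_VERSION
    predicted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```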
### 2. Prediction Logging for Monitoring

- Every prediction logged with full input/output metadata
- Enables audit trails, debugging, and performance analysis
- Foundation for drift detection and model retraining pipelines
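
For example, a structured log record per prediction might look like this (a sketch; the server's actual log schema may differ):

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logger = logging.getLogger("revenue_intelligence.predictions")

def log_prediction(tool: str, inputs: dict, output: dict) -> str:
    """Emit one structured record per prediction for audit and drift analysis."""
    log_id = str(uuid.uuid4())
    logger.info(json.dumps({
        "log_id": log_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,      # e.g. "score_lead"
        "inputs": inputs,  # full feature payload
        "output": output,  # score, tier, attributions
    }))
    return log_id
```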
### 3. Feature Attribution for Explainability

- Each prediction includes feature-level attributions
- Shows which signals drove the score (e.g., "demo requested" contributed 20%)
- Critical for revenue team trust and regulatory compliance
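
With the linear weights documented under Configuration below, attribution can be as simple as reporting each feature's weighted contribution to the final score; a sketch (feature names are illustrative):

```python
# Weights mirror the values documented in config.py (see Configuration below).
FEATURE_WEIGHTS = {
    "company_size": 0.20,
    "engagement": 0.40,
    "industry_fit": 0.20,
    "intent": 0.20,
}

def score_with_attribution(features: dict[str, float]) -> dict:
    """Score a lead (0-100) and report each feature's contribution."""
    contributions = {
        name: FEATURE_WEIGHTS.get(name, 0.0) * value  # value is a 0-100 sub-score
        for name, value in features.items()
    }
    return {
        "score": round(sum(contributions.values()), 1),
        "attributions": contributions,
    }

# A demo request that maxes the intent sub-score (100) contributes
# 0.20 * 100 = 20 points of the final score.
```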
### 4. Drift Detection Framework

- Health check tool monitors prediction volume and distribution
- Alerts when patterns deviate from the training baseline
- Enables proactive model retraining before degradation
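
One simple drift signal is a shift in the mean predicted score relative to the training baseline; a sketch of the idea (baseline and threshold values are illustrative):

```python
TRAINING_BASELINE_MEAN = 52.0  # mean score at training time (illustrative)
DRIFT_THRESHOLD = 10.0         # alert if the live mean moves this far

def drift_status(recent_scores: list[float]) -> str:
    """Compare recent prediction scores against the training baseline."""
    if not recent_scores:
        return "insufficient_data"
    live_mean = sum(recent_scores) / len(recent_scores)
    drifted = abs(live_mean - TRAINING_BASELINE_MEAN) > DRIFT_THRESHOLD
    return "drifting" if drifted else "stable"
```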
### 5. Integration with Business Systems

- Resources expose CRM data (accounts, leads) via standard URIs
- Tools map to revenue team workflows (score a lead, detect churn)
- Prompts provide templates for common analysis tasks
### 6. Health Monitoring and SLOs

- The `check_model_health` tool provides real-time system status
- Tracks uptime, prediction volume, accuracy, and drift status
- Foundation for SLA monitoring and incident response
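
An illustrative shape for the health payload, based on the fields `check_model_health` is described as returning (exact names and values are assumptions):

```json
{
  "model_version": "v1.2.3",
  "status": "healthy",
  "uptime_seconds": 86400,
  "predictions_served": 1523,
  "drift_status": "stable",
  "accuracy": 0.87
}
```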
### 7. Structured Error Handling

- Comprehensive logging with structured context
- Graceful degradation for missing data
- Clear error messages for troubleshooting
## Installation

### Prerequisites

- Python 3.10+
- `pip` or `uv` for package management
### Setup
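
A typical setup, assuming a standard `requirements.txt` layout (the repository URL and directory name below are placeholders):

```bash
git clone <your-repo-url>
cd revenue-intelligence-mcp-server
pip install -r requirements.txt   # or: uv pip install -r requirements.txt
```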
## Usage
### Running the Server
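
Assuming `server.py` is the entry point (see Key Components above), start the server directly:

```bash
python server.py
```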
### With Claude Desktop
Add to your Claude Desktop config (`claude_desktop_config.json`):
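
A minimal entry, assuming the server is launched with `python` (use the absolute path to `server.py` on your machine):

```json
{
  "mcpServers": {
    "revenue-intelligence": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"]
    }
  }
}
```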
Restart Claude Desktop and the server will be available.
### Standalone Testing
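
One way to exercise the server without a full client is the MCP Inspector (assumes Node.js is installed):

```bash
npx @modelcontextprotocol/inspector python server.py
```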
## Available Resources
Access CRM data and model metadata:
- `crm://accounts/{account_id}` - Get account details
  - Example: `crm://accounts/acc_001`
  - Returns: Account data with usage signals, MRR, plan tier
- `crm://accounts/list` - List all accounts
  - Returns: Array of all 20 sample accounts
- `crm://leads/{lead_id}` - Get lead details
  - Example: `crm://leads/lead_001`
  - Returns: Lead data with engagement signals, company info
- `models://lead_scorer/metadata` - Model metadata
  - Returns: Version, training date, performance metrics, feature importance, drift status
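
For illustration, reading `crm://accounts/acc_001` might return a payload shaped like this (field names and values are assumptions based on the descriptions above):

```json
{
  "account_id": "acc_001",
  "plan_tier": "professional",
  "mrr": 1200,
  "usage_signals": {
    "weekly_active_users": 34,
    "logins_last_30_days": 210
  }
}
```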
## Available Tools
Execute ML predictions and monitoring:
### 1. `score_lead`
Score a lead based on company attributes and engagement signals.
Returns: Score (0-100), tier (hot/warm/cold), feature attributions, explanation
### 2. `get_conversion_insights`
Predict trial-to-paid conversion probability.
Returns: Conversion probability, engagement signals, recommended actions
### 3. `detect_churn_risk`
Analyze account health and identify churn risk.
Returns: Risk score, risk tier, declining signals, intervention suggestions
### 4. `check_model_health`
Monitor ML system health and performance.
Returns: Model version, uptime, prediction count, drift status, accuracy
### 5. `log_prediction`
Manually log a prediction for monitoring.
Returns: Log ID, timestamp, success status
## Available Prompts
Pre-built templates for common workflows:
- `analyze-account-expansion` - CS team upsell analysis
  - Argument: `account_id`
  - Use case: Assess account readiness for a tier upgrade
- `weekly-lead-report` - Sales leadership pipeline report
  - Argument: `week_number` (optional)
  - Use case: Weekly lead quality and velocity analysis
- `explain-low-score` - Lead score explanation
  - Argument: `lead_id`
  - Use case: Understand why a lead scored poorly and how to improve it
## Example Prompts to Try
Once connected to Claude Desktop, try these:
### Lead Scoring
"Score this lead for me: Acme Corp, technology industry, 500 employees. They've visited our site 50 times, requested a demo, downloaded 3 whitepapers, have an email engagement score of 90, engaged on LinkedIn, and started a free trial."
### Churn Detection
"Check the churn risk for account acc_006"
### Conversion Analysis
"What's the conversion probability for trial account acc_002? What should we do to increase it?"
### Model Health
"Check the health of the lead scoring model"
### Data Exploration
"Show me all the trial accounts and analyze which ones are most likely to convert"
### Structured Analysis
"Use the analyze-account-expansion prompt for account acc_001"
## Testing
Run the comprehensive test suite:
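
Assuming a standard pytest layout:

```bash
pytest -v
```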
**Test Coverage:**
- ✅ Lead scoring (hot/warm/cold tiers)
- ✅ Churn risk detection
- ✅ Conversion probability calculation
- ✅ Feature attribution generation
- ✅ Prediction logging
- ✅ Data access layer
- ✅ Edge cases (missing data, invalid inputs)
- ✅ Mock data integrity
## Project Structure
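
The tree below is reconstructed from the component list above; the test directory layout is an assumption:

```
revenue-intelligence-mcp-server/
├── server.py        # MCP server: resources, tools, prompts
├── scoring.py       # ML prediction logic with feature attribution
├── data_store.py    # In-memory data access layer
├── config.py        # Model parameters, thresholds, feature weights
├── mock_data.py     # 20 accounts, 30 leads with realistic signals
└── tests/           # pytest suite (assumed layout)
```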
## Configuration

Key configuration in `config.py`:

- **Model Version:** `v1.2.3`
- **Lead Tier Thresholds:** Hot (≥70), Warm (40-70), Cold (<40)
- **Feature Weights:** Company size (20%), Engagement (40%), Industry (20%), Intent (20%)
- **Industry Fit Scores:** Technology (90), SaaS (85), Finance (80), etc.
- **Churn Risk Thresholds:** Critical (≥70), High (50-70), Medium (30-50), Low (<30)
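
A sketch of how these values might be laid out in `config.py` (the structure is an assumption; the numbers come from the list above):

```python
MODEL_VERSION = "v1.2.3"

# Lead tiers: hot >= 70, warm 40-70, cold < 40
LEAD_TIER_THRESHOLDS = {"hot": 70, "warm": 40}

FEATURE_WEIGHTS = {
    "company_size": 0.20,
    "engagement": 0.40,
    "industry_fit": 0.20,
    "intent": 0.20,
}

INDUSTRY_FIT_SCORES = {"technology": 90, "saas": 85, "finance": 80}

# Churn risk: critical >= 70, high 50-70, medium 30-50, low < 30
CHURN_RISK_THRESHOLDS = {"critical": 70, "high": 50, "medium": 30}
```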
## Sample Data

**20 Accounts** across industries:

- 3 trial accounts (exploring the product)
- 3 at-risk accounts (declining usage)
- 14 active accounts (starter, professional, and enterprise tiers)

**30 Leads** with varying quality:

- Hot leads: high engagement, demo requested, enterprise size
- Warm leads: moderate engagement, mid-market
- Cold leads: low engagement, small companies
## Production Deployment Notes
This demo uses in-memory storage. For production deployment:
### Data Layer

Replace `mock_data.py` with connections to:

- Snowflake/BigQuery for historical data and a feature store
- PostgreSQL/MySQL for operational CRM data
- Redis for real-time feature caching
### Model Serving

Deploy the scoring logic as:

- A FastAPI/Flask service exposing a REST API
- AWS Lambda/Cloud Functions for serverless
- SageMaker/Vertex AI for managed ML serving
### Monitoring

Implement production monitoring:

- Datadog/New Relic for application metrics
- MLflow/Weights & Biases for ML experiment tracking
- Grafana/Kibana for prediction drift dashboards
- PagerDuty for alert routing
### MLOps Pipeline

Establish model lifecycle management:

- Feature pipelines (dbt, Airflow) for data freshness
- Training pipelines with version control (Git, DVC)
- An A/B testing framework for model evaluation
- Automated retraining based on drift detection
- Shadow deployments for validation before rollout
### Data Quality

Add comprehensive data validation:

- Great Expectations for input data quality checks
- Schema evolution handling with Pydantic
- Feature drift monitoring against training distributions
### Security & Compliance

Implement security controls:

- Authentication/authorization for API access
- PII handling and data anonymization
- Audit logging for regulatory compliance (GDPR, SOC 2)
- Rate limiting and DDoS protection
## License
MIT License - feel free to use as a template for your own ML systems.
## Contributing

This is a demonstration project. For production use, adapt the patterns to your specific:

- Data infrastructure (warehouse, feature store, CRM)
- ML frameworks (scikit-learn, XGBoost, PyTorch)
- Deployment environment (cloud provider, Kubernetes, serverless)
- Monitoring and observability stack
**Built with:** Python 3.10+ | MCP SDK | Type hints | Structured logging | pytest

**Demonstrates:** Production ML patterns | Business integration | Observability | Explainability