The Optimizely DXP MCP Server enables AI assistants to manage Optimizely DXP deployments and environments through conversational commands. Key capabilities include:
• Project Management: Get project info, list projects, and manage configurations
• Deployment Management: Start, monitor, complete, reset/rollback deployments; upload and analyze deployment packages
• Package Operations: Prepare, split, and optimize large deployment packages; generate SAS URLs for direct uploads
• Database Operations: Export databases (CMS, Commerce) and check export status
• Content & Media: Copy content between environments, list storage containers, generate secure storage links, and sync media files
• Monitoring & Analytics: View deployment monitors, system statistics, usage analytics, API rate limits, and cache status; clear cache and retrieve edge/CDN logs
• Support: Access comprehensive support information and contact details
Provides access to Cloudflare edge/CDN logs for monitoring website performance and traffic analytics through Optimizely DXP's beta integration
Optimizely DXP MCP Server
🎉 What's New in v3.46
Major improvements since September 2024:
⚡ 3-10x Faster Operations: PowerShell fully removed, direct REST API with HMAC-SHA256 authentication
🔷 TypeScript Migration: Complete codebase conversion with strict mode compliance (479 errors fixed)
📊 Streaming Log Analysis: New analyze_logs_streaming tool - 2x faster than download+analyze
🔄 45 Tools: Expanded from 38 tools with unified naming and complete automation support
🤖 24 Tools with structuredContent: Perfect for automation platforms (n8n, Zapier, Make.com)
🔴 Redis Integration: Optional caching with circuit breaker and reconnection logic
📡 MCP Resources: Real-time deployment monitoring via event subscription
🔀 Dual Transport Modes: stdio for Claude Desktop, HTTP for automation platforms (n8n, Zapier, Docker)
🎯 Zero Dependencies: No PowerShell, no Python, no external tools - just npm install
Related MCP server: Context Engineering MCP Platform
🤔 The Problem
You invested in enterprise DXP, but you're only using 10% of its power.
You chose Optimizely to deliver exceptional digital experiences, but:
Your team spends more time on DevOps tasks than building features
Critical issues hide in logs until customers complain
Deployments are scary, manual, multi-step processes
Setting up dev environments takes hours or days
You can't move fast enough to beat competitors embracing AI
Meanwhile, AI is revolutionizing how software gets built and managed. Companies using AI-powered automation are shipping 10x faster, finding issues before customers do, and freeing their teams to focus on innovation.
✨ The Solution
Your infinite DevOps workforce - AI that never sleeps, never breaks, always delivers.
This MCP server transforms your Optimizely DXP into an AI-powered platform that goes far beyond replacing Google searches. Just as Optimizely Opal provides an infinite workforce for marketing, this MCP creates your infinite workforce for DXP operations:
AI Specialists that understand your infrastructure, deployments, and data
Intelligent Agents that handle complex multi-step workflows autonomously
24/7 Operations that scale infinitely without adding headcount
Your team elevated from operators to innovators
Finally get the ROI your DXP investment promised - ship faster, break less, sleep better.
🚀 The Transformation
From DXP operator to digital experience innovator:
Ship 10x faster - What took hours now takes seconds
Zero-downtime deployments - AI handles the complexity
Proactive issue resolution - Fix problems before customers notice
Instant dev environments - Full production replicas in minutes
Competitive advantage - Move at AI speed while others click through portals
Maximum DXP ROI - Finally use all those powerful features you're paying for
🔌 What is MCP?
Model Context Protocol (MCP) is the bridge between AI's intelligence and your DXP's capabilities.
While others use AI just to search documentation or write code snippets, MCP enables something revolutionary: AI that takes action. This isn't about better search results - it's about AI that can:
Execute complex operations autonomously
Orchestrate multi-step workflows across environments
Monitor systems and self-heal issues
Learn from your infrastructure to make smarter decisions
Scale infinitely without human bottlenecks
Think of it as evolving from "AI as advisor" to "AI as workforce" - the difference between asking for directions and having a chauffeur.
🌟 Key Features & Capabilities
Zero Dependencies Architecture
Direct REST API - No External Tools Required
HMAC-SHA256 authentication: Secure, standards-based API access (a signing sketch follows this list)
No PowerShell: Completely removed in v3.44 - never needed again
No Python: All JavaScript/TypeScript with Node.js runtime
Cross-platform: Identical behavior on macOS, Linux, and Windows
Dual transport modes: stdio for Claude Desktop, HTTP for automation platforms
Single install: Just npm install - that's it!
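The HMAC-SHA256 signing mentioned above can be illustrated with a short sketch. The header name, canonical string layout, and credential variable names below are assumptions for illustration only; the server's actual signing format is defined by the Optimizely DXP deployment API.

```typescript
import { createHmac } from "node:crypto";

// Hypothetical credential variables - not the project's actual configuration keys.
const apiKey = process.env.EXAMPLE_API_KEY ?? "my-key";
const apiSecret = process.env.EXAMPLE_API_SECRET ?? "my-secret";

// Build an HMAC-SHA256 signature over the request parts and attach it as a header.
function signRequest(method: string, path: string, body: string): Record<string, string> {
  const timestamp = Date.now().toString();
  // Assumed canonical string: method, path, timestamp, and body joined with newlines.
  const message = [method.toUpperCase(), path, timestamp, body].join("\n");
  const signature = createHmac("sha256", Buffer.from(apiSecret, "utf8"))
    .update(message)
    .digest("base64");
  return {
    "Content-Type": "application/json",
    // Hypothetical header layout, shown only to illustrate HMAC-based auth.
    Authorization: `HMAC ${apiKey}:${timestamp}:${signature}`,
  };
}

// Usage: sign a GET request (the path is illustrative).
const headers = signRequest("GET", "/api/v1.0/projects", "");
console.log(headers.Authorization);
```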
Performance Improvements:
Deployment operations: 3-10x faster vs PowerShell
Database exports: 5x faster
Log downloads: 3x faster
AI-Powered Operations (45 Tools)
Comprehensive DXP Management:
1. Deployment Management
Autonomous deployment with health monitoring
Rollback and reset capabilities
Content sync between environments
Real-time progress tracking with ETAs
MCP Resources subscription for live events
2. Log Analysis & Intelligence
Streaming analysis (2x faster than download+analyze)
Compare logs tool for deployment decisions
AI agent detection and pattern recognition
Performance metrics and error analysis
Structured output for automation workflows
3. Database Operations
Interactive export workflow with smart monitoring
Automated backup downloads
Export status tracking
Background downloads with progress updates
4. Storage Management
Incremental blob downloads (only changed files)
Manifest tracking for efficiency
Pattern-based filtering (*.pdf, *.jpg, etc.)
5x faster with parallel downloads
5. Real-Time Monitoring
MCP Resources subscription for deployment events
Webhook notifications for external automation
Health checks and connection testing
Environment access verification
Enterprise-Ready Architecture
Built for Scale and Reliability:
Redis Integration (Optional)
Circuit breaker pattern for automatic fallback
Reconnection logic with exponential backoff
Caching layer for repeated queries
12 integration tests covering all scenarios
Rate Limiting & Retry
Automatic retry with exponential backoff
HTTP 429 (rate limit) handling
Respects Retry-After headers
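As a rough illustration of the retry behavior described above (automatic retries, HTTP 429 handling, Retry-After support), a generic sketch might look like the following; it is not the server's actual implementation.

```typescript
// Generic retry helper: exponential backoff, with HTTP 429 honoring Retry-After.
async function fetchWithRetry(url: string, init: RequestInit = {}, maxRetries = 5): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const response = await fetch(url, init);

    // Success, or a non-retryable client error: return immediately.
    if (response.ok || (response.status < 500 && response.status !== 429)) {
      return response;
    }
    if (attempt >= maxRetries) return response;

    // Prefer the server-provided Retry-After header (seconds), else exponential backoff.
    const retryAfter = Number(response.headers.get("retry-after"));
    const delayMs = Number.isFinite(retryAfter) && retryAfter > 0
      ? retryAfter * 1000
      : Math.min(30_000, 500 * 2 ** attempt);

    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```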
Event System
MCP Resources for real-time updates
Webhook-ready for external integration
Event streaming without polling
Type Safety
Full TypeScript with strict mode
479 type errors fixed across codebase
Better IDE support and auto-completion
Automation Platform Support
Native Integration with Workflow Tools:
HTTP Transport Mode: Dual-mode operation (stdio for Claude, HTTP for automation)
24 Tools with structuredContent: Native MCP field for structured data
Direct Property Access: No JSON.parse() needed - response.structuredContent.data.deploymentId (see the sketch after this list)
Platform Support: n8n, Zapier, Make.com, custom workflows
Webhook-Ready: Event system for external automation
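For illustration, this is roughly what consuming structuredContent looks like in an automation step. Only structuredContent.data.deploymentId is taken from the documentation above; the surrounding field names are assumptions and vary by tool.

```typescript
// Assumed shape of a tool result that carries structuredContent.
interface ToolResult {
  content?: Array<{ type: string; text?: string }>;
  structuredContent?: {
    data: {
      deploymentId: string;
      status?: string;   // illustrative extra field
    };
  };
}

function extractDeploymentId(result: ToolResult): string | undefined {
  // Direct property access - no JSON.parse() of a text blob required.
  return result.structuredContent?.data.deploymentId;
}
```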
See N8N_INTEGRATION.md for automation platform setup.
🔀 Transport Modes
The MCP server supports two transport modes for different deployment scenarios:
stdio Mode (Default)
Best for: Claude Desktop, local AI clients, single-user development
Characteristics:
Process-to-process communication via stdin/stdout
No network ports required
Automatically started by Claude Desktop
Lowest latency and most secure (no network exposure)
Ideal for local development and desktop AI applications
Setup:
No additional configuration needed - stdio is the default mode.
HTTP Mode
Best for: n8n, Zapier, Make.com, Docker, remote access, multi-tenant platforms
Characteristics:
RESTful HTTP server with JSON-RPC 2.0
MCP endpoint: POST /mcp
Health check: GET /health
Supports concurrent remote connections
Production-ready with graceful shutdown
Setup:
Configuration:
Variable | Default | Description |
DXP_MCP_MODE | stdio | Set to http to enable HTTP mode |
(HTTP port) | 3001 | HTTP server port (1-65535) |
(bind address) | | Bind address (local only vs. Docker/remote access) |
Health Check:
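A hedged sketch of probing the HTTP transport from a script: the GET /health and POST /mcp paths come from the characteristics above; the port, the tools/list JSON-RPC method (a standard MCP method), and the response handling are assumptions.

```typescript
// Assumes the server is running in HTTP mode on port 3001 (see Troubleshooting).
const baseUrl = "http://localhost:3001";

async function probeServer(): Promise<void> {
  // Health check endpoint.
  const health = await fetch(`${baseUrl}/health`);
  console.log("health:", health.status);

  // JSON-RPC 2.0 request to the MCP endpoint.
  const rpc = await fetch(`${baseUrl}/mcp`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list", params: {} }),
  });
  console.log("tools/list:", await rpc.json());
}

probeServer().catch(console.error);
```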
Decision Guide
Scenario | Mode | Why |
Claude Desktop usage | stdio | Default, fastest, most secure |
n8n workflow automation | http | REST API, remote access |
Zapier/Make.com integration | http | Webhook support, structured data |
Docker deployment | http | Network connectivity, multiple clients |
Local development (single user) | stdio | Simplest setup, no ports needed |
Multi-tenant SaaS platform | http | Concurrent connections, load balancing |
Remote server deployment | http | Network accessibility required |
All 45 tools work identically in both modes - only the transport layer changes.
📋 Complete Tool Reference (45 Tools)
Permission & Access Management (4 tools)
test_connection - Validate setup and show capabilities
check_permissions - Detailed environment access breakdown
verify_access - Confirm specific environment access
health_check - System status with structured health data
Deployments & Content Sync (10 tools)
list_deployments - Show deployment history with filters
start_deployment - Initiate code deployment
monitor_deployment - Real-time progress with auto-refresh
complete_deployment - Finish verification state
reset_deployment - Rollback if needed
get_deployment_status - Current status with wait-then-check support
copy_content - Sync content between environments
list_content_copies - Show content copy history
Database Management (4 tools)
export_database - Interactive workflow with smart monitoring
check_export_status - Progress tracking with auto-download flag
download_database_export - Get export file with background progress
list_recent_exports - Export history and monitoring
Log Analysis & Downloads (6 tools)
analyze_logs_streaming - NEW: Stream and analyze in-memory (2x faster)
compare_logs - NEW: Side-by-side comparison for deployment decisions
download_logs - Download with manifest tracking (incremental)
list_log_containers - Show available log containers
discover_logs - Find logs by date range and type
check_download_status - Progress tracking for active downloads
Storage Management (5 tools)
list_storage_containers - Show blob containers with structured data
download_blobs - Incremental downloads (only changed files, 5x faster with parallel)
generate_storage_sas_link - Create temporary access URLs
list_download_history - Show completed downloads with manifests
Multi-Project Management (3 tools)
list_projects - Show all configured projects
switch_project - Change active project context
current_project - Display active project info
Configuration & Utilities (6 tools)
get_ai_guidance - Context-aware best practices
get_version - Version info with update checks
get_download_paths - Show download configuration
set_download_path - Configure paths by type
list_active_downloads - Progress for all background downloads
cancel_download - Stop background download
Advanced Features (7 tools)
get_rate_limit_status - Show API quota and limits
get_cache_status - Redis cache statistics (if enabled)
monitor_project_upgrades - Track DXP CMS version updates
enable_http_logs - Configure HTTP log streaming
disable_http_logs - Disable HTTP log streaming
get_tool_availability - Show which tools work in current context
subscribe_deployment_events - NEW: MCP Resources for real-time updates
Total: 45 tools organized in 8 categories
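To show how these tools are reached programmatically, here is a hedged sketch using the MCP TypeScript SDK over stdio. The import paths and method names follow the SDK's published client API, but treat the exact signatures, the launch command, and the tool arguments as assumptions; consult the API Reference for each tool's parameters.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main(): Promise<void> {
  // Launch the MCP server as a subprocess, the same way a desktop AI client would.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "@jaxon-digital/optimizely-dxp-mcp"],
  });

  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);

  // Invoke one of the 45 tools; empty arguments are an assumption for test_connection.
  const result = await client.callTool({ name: "test_connection", arguments: {} });
  console.log(result);

  await client.close();
}

main().catch(console.error);
```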
🎯 Performance Benchmarks
REST API vs PowerShell (v3.44+)
Operation | PowerShell | REST API | Improvement |
Start Deployment | 8-12s | 1-2s | 5-10x faster |
Database Export | 10-15s | 2-3s | 5x faster |
Log Download | 6-9s | 2-3s | 3x faster |
Environment List | 4-6s | 0.5-1s | 6-8x faster |
Streaming vs Download+Analyze
Operation | Download+Analyze | Streaming | Improvement |
Last Hour Logs | 30-45s | 15-20s | 2x faster |
Memory Usage | High (full download) | Low (streaming) | 4-6x less |
Disk I/O | Heavy (write + read) | None (memory only) | Eliminated |
Automation Ready | Post-processing needed | Structured output | Immediate |
Parallel Downloads
Files | Sequential | Parallel | Improvement |
100 blobs | 250s | 50s | 5x faster |
500 blobs | 1250s | 260s | 5x faster |
Log archives | 180s | 45s | 4x faster |
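The parallel-download speedups above come from keeping a bounded number of transfers in flight at once. A generic concurrency-limiting pattern (not the server's actual implementation) looks roughly like this:

```typescript
// Run async jobs with at most `limit` in flight at a time.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  worker: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;

  async function run(): Promise<void> {
    while (next < items.length) {
      const index = next++;
      results[index] = await worker(items[index]);
    }
  }

  // Start `limit` workers that pull from the shared queue.
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, run));
  return results;
}

// Usage sketch: download blob URLs with 5 concurrent transfers.
// const blobs = await mapWithConcurrency(urls, 5, (url) => fetch(url).then((r) => r.arrayBuffer()));
```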
⚠️ IMPORTANT: No Manual Startup Required
The MCP server does not need to be started by hand - it is launched on demand.
❌ What NOT to Do
DO NOT run the server yourself - the MCP is not a standalone server
DO NOT start it manually - Claude handles execution automatically
DO NOT keep a terminal window open - The MCP runs on-demand
DO NOT look for a running process - It starts and stops as needed
✅ How MCP Actually Works
Claude automatically starts the MCP when you open a conversation
The MCP runs as a subprocess managed entirely by Claude
It starts and stops automatically based on your usage
No manual intervention required - just use Claude normally
🎯 Correct Installation & Usage
Configure the MCP once in Claude's settings, then forget about it. It runs invisibly in the background whenever Claude needs it.
🛠️ System Requirements
Minimal Requirements - Zero External Dependencies:
Node.js 18+ (LTS recommended) - Download
Optimizely DXP Project with API credentials
That's it! No PowerShell, no Python, no external tools
Supported Platforms:
✅ macOS (Intel & Apple Silicon)
✅ Linux (Ubuntu, Debian, RHEL, etc.)
✅ Windows 10/11 (no PowerShell needed!)
Optional Enhancements:
Redis (optional) - For caching and performance boost
Docker (optional) - For containerized deployment with automation platforms
🚀 Quick Start
Installation
Option 1: npx (Recommended - Always Latest)
No installation needed! Configure Claude to use npx:
Option 2: Global Install (Faster Startup)
Then configure Claude:
Configuration
Single Project Setup
Using Environment Variables:
In Claude's config.json:
Multi-Project / Multi-Tenant Setup
For agencies managing multiple clients:
Then use:
See MULTI_PROJECT_CONFIG.md for complete guide.
Advanced Configuration
Redis Integration (Optional):
HTTP Transport for Automation Platforms:
Download Path Configuration (7-level priority):
1. Command parameter: downloadPath=/custom/path
2. Compact field: PROJECT="...;logPath=/path"
3. Project + type: OPTIMIZELY_PROJECT_DOWNLOAD_PATH_LOGS=/path
4. Project-specific: OPTIMIZELY_PROJECT_DOWNLOAD_PATH=/path
5. Type-specific: OPTIMIZELY_DOWNLOAD_PATH_LOGS=/path
6. Global: OPTIMIZELY_DOWNLOAD_PATH=/path
7. Smart OS defaults: ~/Downloads/optimizely-mcp/
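The 7-level priority reads as a first-match-wins lookup. A hedged sketch of that resolution order follows; the environment variable names are taken from the list above, while the handling of the command parameter and the compact PROJECT field is an assumption (its parser is represented by a placeholder callback).

```typescript
import * as os from "node:os";
import * as path from "node:path";

// First-match-wins resolution of the download path, mirroring the 7 levels above.
// parseLogPathFromProjectString stands in for parsing the compact
// PROJECT="...;logPath=/path" field; its exact format is not reproduced here.
function resolveDownloadPath(
  commandParam?: string,
  parseLogPathFromProjectString?: () => string | undefined,
): string {
  return (
    commandParam ??                                            // 1. downloadPath=... parameter
    parseLogPathFromProjectString?.() ??                       // 2. compact PROJECT field
    process.env.OPTIMIZELY_PROJECT_DOWNLOAD_PATH_LOGS ??       // 3. project + type
    process.env.OPTIMIZELY_PROJECT_DOWNLOAD_PATH ??            // 4. project-specific
    process.env.OPTIMIZELY_DOWNLOAD_PATH_LOGS ??               // 5. type-specific
    process.env.OPTIMIZELY_DOWNLOAD_PATH ??                    // 6. global
    path.join(os.homedir(), "Downloads", "optimizely-mcp")     // 7. OS default
  );
}
```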
🛠️ AI-Enabled Solutions
Empower AI to handle your entire DXP lifecycle - from development to production:
1️⃣ Permission & Access Management
2️⃣ Deployments & Content Sync
3️⃣ Real-Time Monitoring & Status
4️⃣ Development Environment Setup
5️⃣ Log Analysis & Downloads
6️⃣ Multi-Project Management
7️⃣ Automation & Integration
🔄 Automation & Integration
HTTP Transport Mode
For n8n, Zapier, Make.com, and custom workflows:
Health Check:
Structured Data Support
24 tools expose a native structuredContent field for structured data.
Direct property access in workflows:
Supported Tools:
All deployment tools (list, start, monitor, complete, reset, status)
Database operations (export, status, download, list)
Log operations (download, status, streaming analysis)
Storage operations (list containers, generate SAS, download blobs)
Download management (status, active downloads, history)
Project management (list, switch, current)
System utilities (test connection, health check, version, rate limits)
Webhook Integration
For external automation:
See N8N_INTEGRATION.md for complete automation setup guide.
📚 Documentation
API Reference - Complete tool documentation with parameters and response formats
Multi-Project Configuration - Agency/multi-tenant setup guide
N8N Integration - Automation platform setup and workflows
Client Application Logging - Configure Application Insights
Telemetry - Privacy-focused usage analytics
Windows Setup - Platform-specific notes (no PowerShell needed!)
Changelog - Version history and release notes
📊 Structured Logging
DXP MCP uses structured JSON logging for production observability. All operations log machine-parseable JSON to stdout.
Log Format
Each log entry is a single-line JSON object:
Standard Fields:
timestamp - ISO 8601 timestamp with milliseconds
level - Log level (debug, info, warn, error)
message - Human-readable message
correlation_id - Links related operations together
Additional metadata fields vary by operation
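Assembling the standard fields above, a single entry might look roughly like the object below. The metadata fields are illustrative examples, not an exact reproduction of the server's output.

```typescript
import { randomUUID } from "node:crypto";

// Illustrative single-line JSON log entry built from the standard fields above.
const entry = {
  timestamp: new Date().toISOString(),          // ISO 8601 with milliseconds
  level: "info",                                // debug | info | warn | error
  message: "Deployment started",                // human-readable message
  correlation_id: randomUUID(),                 // links related operations
  // Additional metadata varies by operation (examples only):
  tool: "start_deployment",
  environment: "Integration",
};

// Structured loggers emit one JSON object per line on stdout.
process.stdout.write(JSON.stringify(entry) + "\n");
```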
Log Levels
debug - API requests, detailed progress, internal operations
info - Significant events (deployment started, export complete)
warn - Recoverable issues (retries, fallbacks)
error - Failures requiring attention
Querying Logs
CloudWatch Logs Insights:
Datadog:
Splunk:
Correlation IDs
All related operations share a correlation ID. Example flow:
start_deployment - correlation_id: 12345-abc
monitor_deployment - correlation_id: 12345-abc (same)
complete_deployment - correlation_id: 12345-abc (same)
Query by correlation ID to see full deployment lifecycle.
Developer Guide
When adding logging to a new tool:
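A hedged sketch of the general pattern - one correlation ID per tool invocation, one JSON line per event. The logger helper and its API are hypothetical, not the project's actual module.

```typescript
import { randomUUID } from "node:crypto";

type Level = "debug" | "info" | "warn" | "error";

// Hypothetical helper: each tool invocation gets its own correlation ID,
// and every entry it emits carries that ID plus arbitrary metadata.
function createToolLogger(tool: string) {
  const correlation_id = randomUUID();
  return (level: Level, message: string, meta: Record<string, unknown> = {}) => {
    process.stdout.write(
      JSON.stringify({ timestamp: new Date().toISOString(), level, message, correlation_id, tool, ...meta }) + "\n",
    );
  };
}

// Usage inside a hypothetical new tool handler:
const log = createToolLogger("my_new_tool");
log("debug", "API request issued", { path: "/api/v1.0/projects" });
log("info", "Operation complete", { duration_ms: 1234 });
```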
Security
Headers are automatically sanitized to remove:
Authorization tokens
API keys
Authentication credentials
Logs are safe to aggregate and store without exposing secrets.
🔍 Audit Trail
DXP MCP maintains an immutable audit trail of all tool invocations for compliance and observability.
What is Audited
Every tool invocation is logged with:
Timestamp - When the operation occurred
Tool name - Which tool was invoked
Parameters - Input arguments (sanitized to remove secrets)
Result - Operation outcome (success/error)
Duration - How long the operation took
Metadata - Additional context (environment, project, etc.)
Example audit entry:
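An entry combining the audited fields listed above might look like this; the field names and values are illustrative, and the real schema may differ.

```typescript
// Illustrative audit entry; one such JSON object is appended per line (JSON Lines).
const auditEntry = {
  timestamp: new Date().toISOString(),   // when the operation occurred
  tool: "start_deployment",              // which tool was invoked
  parameters: {                          // inputs, sanitized of secrets
    sourceEnvironment: "Preproduction",
    targetEnvironment: "Production",
    apiKey: "[REDACTED]",
  },
  result: "success",                     // operation outcome
  duration_ms: 1840,                     // how long it took
  metadata: {                            // additional context
    project: "example-project",
    version: "3.46.0",                   // MCP server version
    correlation_id: "12345-abc",
  },
};

console.log(JSON.stringify(auditEntry));
```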
Storage Location
Audit logs are stored in ./audit-logs/ as JSON Lines files:
Each line is a complete JSON object for easy parsing.
Querying Audit Logs
Via MCP Tool:
Via Command Line:
Retention Policy
Recommended retention periods:
Active logs: Keep 90 days online for queries
Archive: Move logs older than 90 days to cold storage (S3, tape)
Compliance: Retain 7 years for regulated industries (finance, healthcare)
Deletion: After retention period, securely delete per policy
Example archival script:
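A minimal local archival sketch: move JSON Lines files older than 90 days into an archive directory (an S3 upload or tape export would replace the final move). The .jsonl extension and directory layout are assumptions.

```typescript
import { promises as fs } from "node:fs";
import * as path from "node:path";

const AUDIT_DIR = "./audit-logs";
const ARCHIVE_DIR = "./audit-logs/archive";   // stand-in for S3/cold storage
const MAX_AGE_DAYS = 90;

async function archiveOldAuditLogs(): Promise<void> {
  await fs.mkdir(ARCHIVE_DIR, { recursive: true });
  const cutoff = Date.now() - MAX_AGE_DAYS * 24 * 60 * 60 * 1000;

  for (const name of await fs.readdir(AUDIT_DIR)) {
    if (!name.endsWith(".jsonl")) continue;   // assumed file extension
    const filePath = path.join(AUDIT_DIR, name);
    const { mtimeMs } = await fs.stat(filePath);
    if (mtimeMs < cutoff) {
      // Move the file to cold storage; swap in an upload call as needed.
      await fs.rename(filePath, path.join(ARCHIVE_DIR, name));
    }
  }
}

archiveOldAuditLogs().catch(console.error);
```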
GDPR and Compliance
PII Handling:
Audit logs may contain user identifiers (email, username)
Support data subject access requests (query by user_id)
Support right to erasure (delete user's audit entries if required)
Data Sanitization:
Passwords, API keys, tokens automatically redacted
Field names containing "password", "secret", "token" are redacted
Authorization headers removed from API request logs
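The sanitization rules above can be sketched generically: redact values whose key names contain "password", "secret", or "token", and treat authorization and API-key fields the same way. The exact key list and the choice to redact rather than drop Authorization values are assumptions of this sketch.

```typescript
const SENSITIVE = ["password", "secret", "token", "authorization", "apikey"];

// Recursively redact values whose key names look sensitive.
function sanitize(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(sanitize);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([key, val]) =>
        SENSITIVE.some((s) => key.toLowerCase().includes(s))
          ? [key, "[REDACTED]"]
          : [key, sanitize(val)],
      ),
    );
  }
  return value;
}

// Example: the Authorization header and apiKey are redacted, the rest passes through.
console.log(sanitize({ headers: { Authorization: "HMAC abc" }, apiKey: "xyz", environment: "Production" }));
```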
Compliance Features:
Immutable append-only logs (cannot modify/delete individual entries)
Timestamp integrity (ISO 8601 with milliseconds)
Unique correlation IDs for request tracking
Version tracking (MCP server version in each entry)
Configuration
Environment Variables:
Security
Audit logs stored locally (not sent to external services)
File permissions: 600 (owner read/write only)
Directory permissions: 700 (owner access only)
Sensitive data automatically sanitized before logging
Monitoring
Key metrics to track:
Total tool invocations per day
Error rate by tool (errors / total invocations)
Average duration by tool
Failed authentication attempts
Example monitoring query:
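Given the JSON Lines audit files, the metrics above can be computed offline. A hedged sketch follows; it assumes each entry carries tool, result, and duration_ms fields as in the illustrative audit entry earlier, and a .jsonl file extension.

```typescript
import { promises as fs } from "node:fs";
import * as path from "node:path";

interface AuditEntry { tool: string; result: string; duration_ms?: number }

async function summarizeAuditLogs(dir = "./audit-logs"): Promise<void> {
  const stats = new Map<string, { calls: number; errors: number; totalMs: number }>();

  for (const name of await fs.readdir(dir)) {
    if (!name.endsWith(".jsonl")) continue;   // assumed file extension
    const lines = (await fs.readFile(path.join(dir, name), "utf8")).split("\n").filter(Boolean);
    for (const line of lines) {
      const entry = JSON.parse(line) as AuditEntry;
      const s = stats.get(entry.tool) ?? { calls: 0, errors: 0, totalMs: 0 };
      s.calls += 1;
      if (entry.result !== "success") s.errors += 1;
      s.totalMs += entry.duration_ms ?? 0;
      stats.set(entry.tool, s);
    }
  }

  // Error rate and average duration per tool.
  for (const [tool, s] of stats) {
    console.log(`${tool}: ${s.calls} calls, ${(100 * s.errors / s.calls).toFixed(1)}% errors, avg ${(s.totalMs / s.calls).toFixed(0)}ms`);
  }
}

summarizeAuditLogs().catch(console.error);
```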
🔄 Migration from v3.3x
Major changes in v3.44-v3.46:
Breaking Changes
Tool Renames (v3.42):
Database tools: db_export* prefix (was export_database*)
Download tools: download_* prefix (was get_*)
PowerShell Removed (v3.44):
No action needed - automatic migration to REST API
3-10x performance improvement
Identical functionality
Deprecated Tools Removed:
download_media,download_assets→ usedownload_blobsOld database tool names → use
db_export*versions
Migration Steps
If upgrading from v3.3x:
1. Update to the latest version: npm update -g @jaxon-digital/optimizely-dxp-mcp
2. No configuration changes needed - credentials and environment variables work the same
3. Test the connection: "test connection"
4. Update any scripts that reference old tool names (see API Reference)
Benefits:
3-10x faster operations (REST API vs PowerShell)
2x faster log analysis (streaming)
45 tools (up from 38)
24 tools with automation support
Zero dependencies - no PowerShell needed
🤝 Support & Community
Getting Help
Documentation: Start with API Reference
Issues: GitHub Issues
Updates: Follow releases on npm
Troubleshooting
Common Issues:
"Cannot find module"
Run npm run build to generate the dist/ folder
Verify dist/index.js exists
"Connection failed"
Check credentials are correct
Verify project has API access enabled
Run test connection to diagnose
"Rate limited (429)"
Automatic retry with exponential backoff handles this
Check get rate limit status for quota
HTTP mode issues
Verify port 3001 is available
Check that DXP_MCP_MODE=http is set
Test with curl http://localhost:3001/health
See N8N_INTEGRATION.md troubleshooting section for automation platform issues.
🛠️ Development
Building from Source
This project uses TypeScript and requires building before running:
Important: The TypeScript source files in lib/ and src/ are compiled to JavaScript in dist/. After editing any .ts files, you MUST run npm run build before testing changes.
Build Output:
dist/index.js - Main entry point (bundled with esbuild)
dist/lib/**/*.js - Compiled library modules
Build happens automatically on npm install (via the prepare hook)
Build happens automatically before npm publish (via the prepublishOnly hook)
Testing Changes Locally
Development Workflow
1. Make changes to TypeScript files in lib/ or src/
2. Run npm run build to compile
3. Run npm test to verify
4. Create a PR when tests pass
Project Structure
Related Projects
Log Analyzer MCP - AI-powered log analysis and anomaly detection
Optimizely CMS Modernizer - CMS 11 → CMS 12 migration assistant
Model Context Protocol - Official MCP specification
📜 License
MIT License - see LICENSE file for details.
🙏 Acknowledgments
Built with:
Model Context Protocol SDK - MCP protocol implementation
Optimizely DXP API - Deployment REST API
Azure Storage SDK - Blob storage operations
TypeScript - Type-safe development
esbuild - Fast bundling
Made with ❤️ by
Transforming Optimizely DXP from platform to AI-powered workforce