Set up a new Databricks cluster with custom parameters, such as cluster name, Spark version, node type, worker count, and auto-termination settings, using the Databricks MCP Server.
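A minimal sketch of driving such a server from the MCP Python SDK over stdio; the launch command, the `create_cluster` tool name, and the argument keys (modeled on the Databricks Clusters API) are assumptions, not the server's published schema:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch command is a placeholder; substitute the server's real entry point.
    params = StdioServerParameters(command="databricks-mcp-server", args=[])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Tool name and argument keys are illustrative assumptions.
            result = await session.call_tool("create_cluster", {
                "cluster_name": "etl-nightly",
                "spark_version": "15.4.x-scala2.12",
                "node_type_id": "i3.xlarge",
                "num_workers": 4,
                "autotermination_minutes": 30,
            })
            print(result.content)

asyncio.run(main())
```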
Submit a job to a Dataproc cluster by specifying project ID, region, cluster name, job type, and main file, with optional arguments, JAR files, and properties. Supports Spark, PySpark, Hive, Pig, and Hadoop job types.
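A sketch of the argument payload such a submission tool might accept for a PySpark job; the key names are assumptions loosely modeled on the Dataproc Jobs API, not the server's actual schema:

```python
# Hypothetical payload for a Dataproc job-submission tool call.
# Key names are illustrative; check the server's tool schema.
submit_job_args = {
    "project_id": "my-gcp-project",
    "region": "us-central1",
    "cluster_name": "analytics-cluster",
    "job_type": "pyspark",
    "main_file": "gs://my-bucket/jobs/etl.py",       # main Python file
    "args": ["--run-date", "2024-01-01"],            # passed to the job
    "jar_files": ["gs://my-bucket/libs/deps.jar"],   # extra JARs
    "properties": {"spark.executor.memory": "4g"},   # Spark properties
}
print(submit_job_args)
```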
Simulate liquidity pool rebalancing when impermanent loss exceeds your threshold. Get position adjustment recommendations to optimize your DeFi portfolio on Solana.
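The impermanent-loss check at the heart of this simulation can be made concrete. For a 50/50 constant-product pool, the standard loss-versus-holding formula depends only on the price ratio; the 5% threshold below is illustrative, not the server's default:

```python
import math

def impermanent_loss(price_ratio: float) -> float:
    """Impermanent loss of a 50/50 constant-product pool versus holding,
    where price_ratio = new_price / entry_price of one asset."""
    return 2 * math.sqrt(price_ratio) / (1 + price_ratio) - 1

# Example: one asset doubles in price relative to the other.
il = impermanent_loss(2.0)
print(f"IL at a 2x price move: {il:.2%}")  # about -5.72%

# A threshold check of the kind such a tool might apply.
THRESHOLD = -0.05  # illustrative 5% threshold
if il < THRESHOLD:
    print("Loss exceeds threshold: consider rebalancing the position.")
```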
Submit and manage batch jobs on Google Cloud Dataproc. Define the job type (Spark, PySpark, Spark SQL), specify main files, JARs, and arguments, and configure properties for efficient job execution.
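Unlike the cluster-bound submission above, this covers batch workloads; a hypothetical Spark SQL batch payload might look like the following, with field names assumed rather than taken from the server's schema:

```python
# Hypothetical payload for a Dataproc batch-submission tool call.
# Field names are assumptions; a Spark SQL batch runs a query file
# instead of a main application file.
create_batch_args = {
    "project_id": "my-gcp-project",
    "region": "us-central1",
    "batch_type": "spark_sql",
    "query_file": "gs://my-bucket/queries/daily_report.sql",
    "properties": {"spark.sql.shuffle.partitions": "200"},
}
print(create_batch_args)
```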
MCP server that exposes the Alpaca Market Data & Broker API as tools, enabling access to financial data like stock bars, assets, market days, and news through the Model Context Protocol.
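At the wire level, a client reaches these tools with a standard MCP `tools/call` request; the tool name `get_stock_bars` and its argument keys below are assumptions, though the timeframe string follows Alpaca's `1Day` convention:

```python
import json

# A raw MCP tools/call request of the kind a client would send.
# Tool name and argument keys are illustrative assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_stock_bars",
        "arguments": {
            "symbols": ["AAPL"],
            "timeframe": "1Day",   # Alpaca-style timeframe string
            "start": "2024-01-02",
            "end": "2024-01-31",
        },
    },
}
print(json.dumps(request, indent=2))
```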
Enables comprehensive analysis of Apache Spark event logs from S3, HTTP, or local sources, providing performance metrics, resource monitoring, shuffle analysis, and automated optimization recommendations with interactive HTML reports.
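A sketch of what an analysis request could look like; the argument names are assumptions, while the source schemes (s3://, http(s)://, local paths) and report sections follow the description above:

```python
# Hypothetical arguments for a Spark event-log analysis tool call.
analyze_args = {
    "source": "s3://spark-logs/application_1700000000000_0001",
    "output_format": "html",  # interactive HTML report
    "sections": ["performance", "resources", "shuffle", "recommendations"],
}
print(analyze_args)
```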
Enables AI agents to interact with VnExpress news content through RSS feeds, advanced search with filters, and clean article text extraction for Vietnamese news consumption.
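A hypothetical search call illustrating the filters such a server might accept; the tool and key names are illustrative, not the server's actual schema:

```python
# Hypothetical arguments for a filtered VnExpress search tool call.
search_args = {
    "query": "kinh tế",     # "economy" (Vietnamese)
    "category": "business",
    "date_from": "2024-01-01",
    "limit": 10,
    "extract_text": True,   # return clean article text, not raw HTML
}
print(search_args)
```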