Submit data processing jobs to Google Cloud Dataproc clusters for Spark, PySpark, Hive, and Hadoop workloads using specified project, region, and cluster parameters.
Submit a job to a Dataproc cluster by specifying project ID, region, cluster name, job type, main file, and optional arguments, JAR files, and properties. Supports Spark, PySpark, Hive, Pig, and Hadoop job types.
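The parameters above map directly onto the job payload accepted by the Dataproc `jobs.submit` API. A minimal sketch, assembling a PySpark job body as a plain dict — the cluster name, bucket paths, and property values are placeholders, not taken from the original text (submitting it would go through `google.cloud.dataproc_v1.JobControllerClient`, which is not shown here):

```python
# Sketch: build the request body for a Dataproc PySpark job submission.
# All resource names below are hypothetical examples.

def build_pyspark_job(cluster_name, main_python_file,
                      args=None, jar_files=None, properties=None):
    """Assemble the job payload for the Dataproc jobs.submit API."""
    return {
        "placement": {"cluster_name": cluster_name},
        "pyspark_job": {
            "main_python_file_uri": main_python_file,  # entry-point script
            "args": args or [],                        # optional arguments
            "jar_file_uris": jar_files or [],          # optional JAR files
            "properties": properties or {},            # optional properties
        },
    }

job = build_pyspark_job(
    "example-cluster",
    "gs://example-bucket/wordcount.py",
    args=["gs://example-bucket/input.txt"],
    properties={"spark.executor.memory": "4g"},
)
```

Other job types swap the `pyspark_job` key for `spark_job`, `hive_job`, `pig_job`, or `hadoop_job`, with the corresponding main-file field.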
Create dynamic, interactive charts using Apache ECharts by providing customizable configurations. Export visualizations as PNG, SVG, or raw option formats for seamless integration into web applications.
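The "raw option" export is simply the ECharts option object serialized as JSON; PNG and SVG output are rendered from the same configuration. A minimal sketch of such an option for a bar chart, with hypothetical data — the keys (`title`, `xAxis`, `yAxis`, `series`) follow the standard ECharts option schema:

```python
import json

# Sketch: a minimal ECharts option object for a bar chart.
# The chart title and data values are illustrative placeholders.
option = {
    "title": {"text": "Monthly Sales"},
    "xAxis": {"type": "category", "data": ["Jan", "Feb", "Mar"]},
    "yAxis": {"type": "value"},
    "series": [{"type": "bar", "data": [120, 200, 150]}],
}

# The raw-option export format is this JSON document; a renderer
# (e.g. echarts in the browser) turns it into PNG or SVG.
option_json = json.dumps(option)
```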
Retrieve detailed user information, including profile, permissions, and authentication sources, from Hadoop clusters via the Apache Ambari REST API (exposed through the MCP-Ambari-API server), to manage access and monitor user activity.
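Under the hood this is a GET against Ambari's `/api/v1/users/{username}` resource. A minimal sketch that assembles the request URL and headers without sending them — the host, port, and credentials are placeholder values:

```python
from base64 import b64encode

# Sketch: build the Ambari REST call for one user's details.
# Host, admin credentials, and username are hypothetical.
def build_user_request(host, username, admin_user, admin_pass, port=8080):
    # fields=* expands the response to include profile, group
    # membership, and privilege (permission) sub-resources.
    url = f"http://{host}:{port}/api/v1/users/{username}?fields=*"
    token = b64encode(f"{admin_user}:{admin_pass}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",   # Ambari uses HTTP basic auth
        "X-Requested-By": "ambari",          # required by Ambari on write
                                             # requests; harmless on reads
    }
    return url, headers

url, headers = build_user_request(
    "ambari.example.com", "alice", "admin", "secret")
```

The returned URL and headers could then be passed to any HTTP client (e.g. `requests.get(url, headers=headers)`).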