Server Configuration

Describes the environment variables required to run the server.

Name             | Required | Description                                    | Default
-----------------|----------|------------------------------------------------|--------
HUE_HOST         | Yes      | Hue server URL (e.g., https://hue.example.com) |
HUE_PASSWORD     | Yes      | Password for Hue authentication                |
HUE_USERNAME     | Yes      | Username for Hue authentication                |
HUE_VERIFY_SSL   | No       | Verify SSL certificates                        | true
HUE_SSL_WARNINGS | No       | Show SSL warnings                              | false
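
As a sketch of how a client process might validate these settings before launching the server (the `load_hue_config` helper and its return shape are illustrative, not part of the server itself):

```python
import os

def load_hue_config(env=os.environ):
    """Collect Hue settings from environment variables.

    HUE_HOST, HUE_USERNAME, and HUE_PASSWORD are required; the two
    SSL-related flags fall back to the defaults from the table above.
    Illustrative helper only, not the server's actual loader.
    """
    missing = [k for k in ("HUE_HOST", "HUE_USERNAME", "HUE_PASSWORD") if k not in env]
    if missing:
        raise RuntimeError(f"missing required settings: {', '.join(missing)}")
    return {
        "host": env["HUE_HOST"],
        "username": env["HUE_USERNAME"],
        "password": env["HUE_PASSWORD"],
        # String flags: anything other than "true" (case-insensitive) is False.
        "verify_ssl": env.get("HUE_VERIFY_SSL", "true").lower() == "true",
        "ssl_warnings": env.get("HUE_SSL_WARNINGS", "false").lower() == "true",
    }
```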

Capabilities

Server capabilities have not been inspected yet.

Tools

Functions exposed to the LLM so it can take actions

hue_execute_query

Execute a SQL query on Hue and return the results.

This tool executes a SQL statement, waits for completion, and fetches all results. Use it for SELECT queries where you want to retrieve data.

Args:
  statement: The SQL statement to execute (e.g., "SELECT * FROM table LIMIT 100")
  dialect: SQL dialect to use: 'hive', 'sparksql', or 'impala' (default: 'hive')
  timeout: Maximum time to wait for query completion, in seconds (default: 300)
  batch_size: Number of rows to fetch per batch for pagination (default: 1000)

Returns: QueryResult with headers, rows, and row_count
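
Over MCP, a client invokes a tool like this one with a JSON-RPC 2.0 `tools/call` request. A minimal sketch of building that payload (the table name `sales` and the chosen argument values are hypothetical):

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Wrap a tool invocation in the JSON-RPC 2.0 envelope MCP uses."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

request = build_tool_call("hue_execute_query", {
    "statement": "SELECT * FROM sales LIMIT 100",  # hypothetical table
    "dialect": "impala",   # overrides the 'hive' default
    "timeout": 120,
})
print(json.dumps(request, indent=2))
```

Arguments omitted from the call (here, batch_size) take the defaults listed above.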
hue_run_query_to_csv

Execute a SQL query and save results directly to a CSV file.

This is a convenience method that combines query execution with CSV export, ideal for exporting large result sets to files.

Args:
  statement: The SQL statement to execute
  filename: Output CSV filename (default: 'results.csv')
  dialect: SQL dialect: 'hive', 'sparksql', or 'impala' (default: 'hive')
  batch_size: Number of rows to fetch per batch (default: 1000)

Returns: OperationResult indicating success and the output filename
hue_export_and_download

Execute an INSERT OVERWRITE DIRECTORY query and download the results.

This tool is for queries that write output to HDFS (such as INSERT OVERWRITE DIRECTORY); after the query completes, it downloads the resulting files to the local filesystem.

Args:
  statement: SQL statement containing INSERT OVERWRITE DIRECTORY
  hdfs_directory: The HDFS directory where results are written
  local_directory: Local directory to download files to (default: '.')
  dialect: SQL dialect: 'hive', 'sparksql', or 'impala' (default: 'hive')
  file_pattern: Optional regex pattern to filter which files are downloaded
  timeout: Maximum wait time in seconds (default: 300)

Returns: OperationResult with the list of downloaded files in its message
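
A sketch of the arguments such a call might carry. The HDFS path, the query, and the local directory are made up for illustration, and the final check is a client-side sanity check, not something the server is documented to perform:

```python
# Hypothetical arguments for hue_export_and_download.
HDFS_DIR = "/tmp/export_demo"

arguments = {
    "statement": (
        f"INSERT OVERWRITE DIRECTORY '{HDFS_DIR}' "
        "ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' "
        "SELECT id, name FROM demo_table"
    ),
    "hdfs_directory": HDFS_DIR,      # should match the directory in the statement
    "local_directory": "./exports",
    "file_pattern": r"^000000_0.*",  # Hive names its output files 000000_0, 000001_0, ...
    "timeout": 600,
}

# Client-side sanity check: the statement actually writes into hdfs_directory.
assert arguments["hdfs_directory"] in arguments["statement"]
```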
hue_list_directory

List files and directories in an HDFS path.

Use this to browse the contents of HDFS directories.

Args:
  directory_path: The HDFS directory path (e.g., '/user/data', '/tmp')
  page_size: Maximum number of items to return (default: 1000)

Returns: DirectoryListing with path, items, and total_count
hue_check_directory_exists

Check if a directory exists in HDFS.

Args:
  directory_path: The HDFS directory path to check

Returns: True if the directory exists, False otherwise
hue_download_file

Download a single file from HDFS.

Args:
  remote_path: The full path to the file in HDFS
  local_filename: Local filename to save as (optional; defaults to the original name)

Returns: OperationResult with the local filename where the file was saved
hue_download_directory

Download all files from an HDFS directory.

Args:
  directory_path: The HDFS directory path to download from
  local_directory: Local directory to save files to (default: '.')
  file_pattern: Optional regex pattern to filter files (e.g., '.*\.csv')

Returns: OperationResult with the list of downloaded files
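
Since file_pattern is a regular expression, it pays to test the pattern locally first. A sketch of the filtering it implies (whether the server anchors the pattern at the start of the name, as `re.match` does here, is an assumption):

```python
import re

def filter_files(names, file_pattern=None):
    """Keep only names matching the optional regex; None keeps everything.

    Illustrative only: the server's exact matching semantics are an assumption.
    """
    if file_pattern is None:
        return list(names)
    rx = re.compile(file_pattern)
    return [n for n in names if rx.match(n)]

print(filter_files(["part-0001.csv", "part-0002.csv", "_SUCCESS"], r".*\.csv"))
# -> ['part-0001.csv', 'part-0002.csv']
```

Note the backslash in '.*\.csv': an unescaped dot would also match names like 'reportXcsv'.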
hue_upload_file

Upload a local file to HDFS.

Args:
  local_file_path: Path to the local file to upload
  hdfs_destination: Destination directory in HDFS

Returns: OperationResult indicating success

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/SpanishST/hueclientrest-mpc'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.