CockroachDB MCP Server

bulk_import

Import large datasets from cloud or web storage (S3, Azure Blob, Google Cloud Storage, HTTP/HTTPS) into a CockroachDB table in CSV or Avro format. Delimiter and header options let you tailor ingestion to the file layout.

Instructions

Bulk import data into a table from a file (CSV or Avro) stored in cloud or web storage. Supports S3, Azure Blob, Google Cloud Storage, and HTTP/HTTPS URLs.

Args:
- table_name (str): Name of the table to import data into.
- file_url (str): URL to the data file (s3://, azure://, gs://, http://, https://, etc.).
- format (str): File format ('csv' or 'avro').
- delimiter (str): CSV delimiter (default: ',').
- skip_header (bool): Whether to skip the first row as header (default: True).
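
For context, CockroachDB's bulk ingestion is driven by the IMPORT INTO statement, so the arguments above map onto it fairly directly. The sketch below is illustrative only: build_import_statement is a hypothetical helper, not the server's actual implementation, and cloud-storage credentials (normally passed as query parameters on the file URL) are omitted.

# Illustrative only: a plausible mapping from bulk_import arguments to a
# CockroachDB IMPORT INTO statement. build_import_statement is a hypothetical
# helper, not part of the MCP server's code.
def build_import_statement(table_name: str, file_url: str, format: str,
                           delimiter: str = ",", skip_header: bool = True) -> str:
    if format.lower() == "csv":
        options = [f"delimiter = '{delimiter}'"]
        if skip_header:
            options.append("skip = '1'")  # skip one header row
        return (f"IMPORT INTO {table_name} CSV DATA ('{file_url}') "
                f"WITH {', '.join(options)};")
    # Avro files carry their own schema, so no delimiter/header options apply.
    return f"IMPORT INTO {table_name} AVRO DATA ('{file_url}');"

print(build_import_statement("users", "s3://bucket/data.csv", "csv", ";", True))
# IMPORT INTO users CSV DATA ('s3://bucket/data.csv') WITH delimiter = ';', skip = '1';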

Returns: A success message or an error message.

Example: bulk_import(ctx, table_name="users", file_url="s3://bucket/data.csv", format="csv", delimiter=";", skip_header=True)
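
The example above is written as a direct function invocation; from an MCP client the same request goes through a tool call. A minimal sketch, assuming the official MCP Python SDK and a stdio-launched server (the launch command `python -m mcp_cockroachdb` is a placeholder, not the documented entry point):

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Placeholder launch command; substitute however you start the CockroachDB MCP server.
    params = StdioServerParameters(command="python", args=["-m", "mcp_cockroachdb"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "bulk_import",
                arguments={
                    "table_name": "users",
                    "file_url": "s3://bucket/data.csv",
                    "format": "csv",
                    "delimiter": ";",
                    "skip_header": True,
                },
            )
            print(result)  # success or error message returned by the tool

asyncio.run(main())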

Input Schema

Name         Required  Description  Default
delimiter    No                     ,
file_url     Yes
format       Yes
skip_header  No                     true
table_name   Yes

Input Schema (JSON Schema)

{ "properties": { "delimiter": { "default": ",", "title": "Delimiter", "type": "string" }, "file_url": { "title": "File Url", "type": "string" }, "format": { "title": "Format", "type": "string" }, "skip_header": { "default": true, "title": "Skip Header", "type": "boolean" }, "table_name": { "title": "Table Name", "type": "string" } }, "required": [ "table_name", "file_url", "format" ], "title": "bulk_importArguments", "type": "object" }
