Keboola MCP Server

Connect your AI agents, MCP clients (Cursor, Claude, Windsurf, VS Code ...) and other AI assistants to Keboola. Expose data, transformations, SQL queries, and job triggers—no glue code required. Deliver the right data to agents when and where they need it.

Overview

Keboola MCP Server is an open-source bridge between your Keboola project and modern AI tools. It turns Keboola features—like storage access, SQL transformations, and job triggers—into callable tools for Claude, Cursor, CrewAI, LangChain, Amazon Q, and more.

Features

  • Storage: Query tables directly and manage table or bucket descriptions
  • Components: Create, list, and inspect extractors, writers, data apps, and transformation configurations
  • SQL: Create SQL transformations with natural language
  • Jobs: Run components and transformations, and retrieve job execution details
  • Metadata: Search, read, and update project documentation and object metadata using natural language

Preparations

Make sure you have:

  • Python 3.10+ installed
  • Access to a Keboola project with admin rights
  • Your preferred MCP client (Claude, Cursor, etc.)
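A quick way to confirm the Python prerequisite from a terminal (this assumes `python3` is on your PATH):

```shell
# Check that the interpreter on PATH meets the 3.10+ requirement
python3 -c 'import sys; print("Python OK" if sys.version_info >= (3, 10) else "Python too old")'
```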

Note: Make sure you have uv installed. The MCP client will use it to automatically download and run the Keboola MCP Server. Installing uv:

macOS/Linux:

```shell
# If Homebrew is not installed on your machine, install it first:
# /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install using Homebrew
brew install uv
```

Windows:

```shell
# Using the installer script
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

# Or using pip
pip install uv

# Or using winget
winget install --id=astral-sh.uv -e
```

For more installation options, see the official uv documentation.
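To confirm that uv is available before configuring a client, a quick check from the terminal:

```shell
# Print the installed uv version, or a hint if it is missing from PATH
if command -v uv >/dev/null 2>&1; then
  uv --version
else
  echo "uv not found - install it using one of the commands above"
fi
```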

Before setting up the MCP server, you need three key pieces of information:

KBC_STORAGE_TOKEN

This is your authentication token for Keboola:

For instructions on how to create and manage Storage API tokens, refer to the official Keboola documentation.

Note: If you want the MCP server to have limited access, use a custom storage token; if you want it to access everything in your project, use the master token.

KBC_WORKSPACE_SCHEMA

This identifies your workspace in Keboola and is required for SQL queries:

Follow this Keboola guide to get your KBC_WORKSPACE_SCHEMA.

Note: Check the Grant read-only access to all Project data option when creating the workspace.

Keboola Region

Your Keboola API URL depends on your deployment region. You can determine your region by looking at the URL in your browser when logged into your Keboola project:

| Region | API URL |
| --- | --- |
| AWS North America | https://connection.keboola.com |
| AWS Europe | https://connection.eu-central-1.keboola.com |
| Google Cloud EU | https://connection.europe-west3.gcp.keboola.com |
| Google Cloud US | https://connection.us-east4.gcp.keboola.com |
| Azure EU | https://connection.north-europe.azure.keboola.com |
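The URL from the table is the value you later pass as `--api-url`. For example, for a project hosted in AWS Europe:

```shell
# AWS Europe host, taken from the region table
API_URL="https://connection.eu-central-1.keboola.com"
echo "$API_URL"
```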

BigQuery-Specific Setup

If your Keboola project uses a BigQuery backend, you will need to set the GOOGLE_APPLICATION_CREDENTIALS environment variable in addition to KBC_STORAGE_TOKEN and KBC_WORKSPACE_SCHEMA:

  1. Go to your Keboola BigQuery workspace and display its credentials (click the Connect button)
  2. Download the credentials file to your local disk. It is a plain JSON file
  3. Set the full path of the downloaded JSON credentials file in the GOOGLE_APPLICATION_CREDENTIALS environment variable
  4. This gives your MCP server instance permission to access your BigQuery workspace in Google Cloud

Note: KBC_WORKSPACE_SCHEMA is called Dataset Name in the BigQuery workspace; simply click Connect and copy the Dataset Name.
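A minimal sketch of setting the variable, assuming a hypothetical download location for the credentials file:

```shell
# Hypothetical path - replace with wherever you saved the downloaded JSON file
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keboola/bq-credentials.json"

# Sanity check: the file should exist and parse as JSON
if [ -f "$GOOGLE_APPLICATION_CREDENTIALS" ] \
    && python3 -m json.tool "$GOOGLE_APPLICATION_CREDENTIALS" >/dev/null 2>&1; then
  echo "credentials file looks OK"
else
  echo "credentials file missing or not valid JSON"
fi
```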

Running Keboola MCP Server

There are four ways to run the Keboola MCP Server, depending on your needs:

Option A: Integrated Mode (Claude/Cursor)

In this mode, Claude or Cursor automatically starts the MCP server for you. You do not need to run any commands in your terminal.

  1. Configure your MCP client (Claude/Cursor) with the appropriate settings
  2. The client will automatically launch the MCP server when needed
Claude Desktop Configuration
  1. Go to Claude (top left corner of your screen) → Settings → Developer → Edit Config (if you don't see claude_desktop_config.json, create it)
  2. Add the following configuration:

```json
{
  "mcpServers": {
    "keboola": {
      "command": "uvx",
      "args": [
        "keboola_mcp_server",
        "--api-url",
        "https://connection.YOUR_REGION.keboola.com"
      ],
      "env": {
        "KBC_STORAGE_TOKEN": "your_keboola_storage_token",
        "KBC_WORKSPACE_SCHEMA": "your_workspace_schema"
      }
    }
  }
}
```

  3. Restart Claude Desktop for the changes to take effect

Note: For BigQuery users, add the following line into "env": {}: "GOOGLE_APPLICATION_CREDENTIALS": "/full/path/to/credentials.json"

Config file locations:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
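After editing, you can sanity-check that the file parses as JSON (macOS path shown; adjust for Windows; this assumes `python3` is available):

```shell
# macOS config location from the list above
CONFIG="$HOME/Library/Application Support/Claude/claude_desktop_config.json"

if [ ! -f "$CONFIG" ]; then
  echo "config not found at $CONFIG"
elif python3 -m json.tool "$CONFIG" >/dev/null 2>&1; then
  echo "config is valid JSON"
else
  echo "config is NOT valid JSON - check for missing commas or quotes"
fi
```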
Cursor Configuration
  1. Go to Settings → MCP
  2. Click "+ Add new global MCP Server"
  3. Configure with these settings:
```json
{
  "mcpServers": {
    "keboola": {
      "command": "uvx",
      "args": [
        "keboola_mcp_server",
        "--api-url",
        "https://connection.YOUR_REGION.keboola.com"
      ],
      "env": {
        "KBC_STORAGE_TOKEN": "your_keboola_storage_token",
        "KBC_WORKSPACE_SCHEMA": "your_workspace_schema"
      }
    }
  }
}
```

Note: For BigQuery users, add the following line into "env": {}: "GOOGLE_APPLICATION_CREDENTIALS": "/full/path/to/credentials.json"

Cursor Configuration for Windows WSL

When running the MCP server from Windows Subsystem for Linux with Cursor AI, use this configuration:

```json
{
  "mcpServers": {
    "keboola": {
      "command": "wsl.exe",
      "args": [
        "bash",
        "-c",
        "source /wsl_path/to/keboola-mcp-server/.env && /wsl_path/to/keboola-mcp-server/.venv/bin/python -m keboola_mcp_server.cli --transport stdio"
      ]
    }
  }
}
```

Where /wsl_path/to/keboola-mcp-server/.env file contains environment variables:

```shell
export KBC_STORAGE_TOKEN="your_keboola_storage_token"
export KBC_WORKSPACE_SCHEMA="your_workspace_schema"
```

Option B: Local Development Mode

For developers working on the MCP server code itself:

  1. Clone the repository and set up a local environment
  2. Configure Claude/Cursor to use your local Python path:
```json
{
  "mcpServers": {
    "keboola": {
      "command": "/absolute/path/to/.venv/bin/python",
      "args": [
        "-m",
        "keboola_mcp_server.cli",
        "--transport",
        "stdio",
        "--api-url",
        "https://connection.YOUR_REGION.keboola.com"
      ],
      "env": {
        "KBC_STORAGE_TOKEN": "your_keboola_storage_token",
        "KBC_WORKSPACE_SCHEMA": "your_workspace_schema"
      }
    }
  }
}
```

Note: For BigQuery users, add the following line into "env": {}: "GOOGLE_APPLICATION_CREDENTIALS": "/full/path/to/credentials.json"

Option C: Manual CLI Mode (For Testing Only)

You can run the server manually in a terminal for testing or debugging:

```shell
# Set environment variables
export KBC_STORAGE_TOKEN=your_keboola_storage_token
export KBC_WORKSPACE_SCHEMA=your_workspace_schema

# For BigQuery users
# export GOOGLE_APPLICATION_CREDENTIALS=/full/path/to/credentials.json

# Run with uvx (no installation needed)
uvx keboola_mcp_server --api-url https://connection.YOUR_REGION.keboola.com

# OR, if developing locally
python -m keboola_mcp_server.cli --api-url https://connection.YOUR_REGION.keboola.com
```

Note: This mode is primarily for debugging or testing. For normal use with Claude or Cursor, you do not need to manually run the server.

Option D: Using Docker

```shell
docker pull keboola/mcp-server:latest

# For Snowflake users
docker run -it \
  -e KBC_STORAGE_TOKEN="YOUR_KEBOOLA_STORAGE_TOKEN" \
  -e KBC_WORKSPACE_SCHEMA="YOUR_WORKSPACE_SCHEMA" \
  keboola/mcp-server:latest \
  --api-url https://connection.YOUR_REGION.keboola.com

# For BigQuery users (add credentials volume mount)
# docker run -it \
#   -e KBC_STORAGE_TOKEN="YOUR_KEBOOLA_STORAGE_TOKEN" \
#   -e KBC_WORKSPACE_SCHEMA="YOUR_WORKSPACE_SCHEMA" \
#   -e GOOGLE_APPLICATION_CREDENTIALS="/creds/credentials.json" \
#   -v /local/path/to/credentials.json:/creds/credentials.json \
#   keboola/mcp-server:latest \
#   --api-url https://connection.YOUR_REGION.keboola.com
```

Do I Need to Start the Server Myself?

| Scenario | Need to Run Manually? | Use This Setup |
| --- | --- | --- |
| Using Claude/Cursor | No | Configure MCP in app settings |
| Developing MCP locally | No (Claude starts it) | Point the config to your local Python path |
| Testing CLI manually | Yes | Run the server from a terminal |
| Using Docker | Yes | Run the Docker container |

Using MCP Server

Once your MCP client (Claude/Cursor) is configured and running, you can start querying your Keboola data:

Verify Your Setup

You can start with a simple query to confirm everything is working:

What buckets and tables are in my Keboola project?

Examples of What You Can Do

Data Exploration:

  • "What tables contain customer information?"
  • "Run a query to find the top 10 customers by revenue"

Data Analysis:

  • "Analyze my sales data by region for the last quarter"
  • "Find correlations between customer age and purchase frequency"

Data Pipelines:

  • "Create a SQL transformation that joins customer and order tables"
  • "Start the data extraction job for my Salesforce component"

Compatibility

MCP Client Support

| MCP Client | Support Status | Connection Method |
| --- | --- | --- |
| Claude (Desktop & Web) | ✅ Supported, tested | stdio |
| Cursor | ✅ Supported, tested | stdio |
| Windsurf, Zed, Replit | ✅ Supported | stdio |
| Codeium, Sourcegraph | ✅ Supported | HTTP+SSE |
| Custom MCP Clients | ✅ Supported | HTTP+SSE or stdio |

Supported Tools

Note: Keboola MCP is pre-1.0, so some breaking changes might occur. Your AI agents will automatically adjust to new tools.

| Category | Tool | Description |
| --- | --- | --- |
| Storage | retrieve_buckets | Lists all storage buckets in your Keboola project |
| Storage | get_bucket_detail | Retrieves detailed information about a specific bucket |
| Storage | retrieve_bucket_tables | Returns all tables within a specific bucket |
| Storage | get_table_detail | Provides detailed information for a specific table |
| Storage | update_bucket_description | Updates the description of a bucket |
| Storage | update_column_description | Updates the description of a given column in a table |
| Storage | update_table_description | Updates the description of a table |
| SQL | query_table | Executes custom SQL queries against your data |
| SQL | get_sql_dialect | Identifies whether your workspace uses the Snowflake or BigQuery SQL dialect |
| Component | retrieve_components | Lists all available extractors, writers, and applications |
| Component | get_component_details | Retrieves detailed configuration information for a specific component |
| Component | retrieve_transformations | Returns all transformation configurations in your project |
| Component | create_sql_transformation | Creates a new SQL transformation with custom queries |
| Component | update_sql_transformation | Updates an existing SQL transformation's configuration, SQL query, or description, or disables the configuration |
| Job | retrieve_jobs | Lists and filters jobs by status, component, or configuration |
| Job | get_job_detail | Returns comprehensive details about a specific job |
| Job | start_job | Triggers a component or transformation job to run |
| Documentation | docs_query | Searches Keboola documentation based on natural-language queries |

Troubleshooting

Common Issues

| Issue | Solution |
| --- | --- |
| Authentication errors | Verify KBC_STORAGE_TOKEN is valid |
| Workspace issues | Confirm KBC_WORKSPACE_SCHEMA is correct |
| Connection timeout | Check network connectivity |
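For authentication errors, you can test the token outside the MCP server: the Keboola Storage API exposes a token-verification endpoint. The URL and token below are placeholders; substitute your own:

```shell
# Placeholders - substitute your region URL and real token
API_URL="https://connection.keboola.com"
KBC_STORAGE_TOKEN="your_keboola_storage_token"

# A 2xx response with token details means the token is valid:
# curl -s -H "X-StorageApi-Token: ${KBC_STORAGE_TOKEN}" "${API_URL}/v2/storage/tokens/verify"
echo "${API_URL}/v2/storage/tokens/verify"
```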

Development

Installation

Basic setup:

uv sync --extra dev

With the basic setup, you can use uv run tox to run tests and check code style.

Recommended setup:

uv sync --extra dev --extra tests --extra codestyle

With the recommended setup, packages for testing and code-style checking are installed as well, which allows IDEs such as VS Code or Cursor to check the code or run tests during development.

Integration tests

To run integration tests locally, use uv run tox -e integtests. NOTE: You will need to set the following environment variables:

  • INTEGTEST_STORAGE_API_URL
  • INTEGTEST_STORAGE_TOKEN
  • INTEGTEST_WORKSPACE_SCHEMA

In order to get these values, you need a dedicated Keboola project for integration tests.
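For example (hypothetical values; point them at the dedicated test project):

```shell
# Hypothetical values for a dedicated integration-test project
export INTEGTEST_STORAGE_API_URL="https://connection.keboola.com"
export INTEGTEST_STORAGE_TOKEN="your_integtest_storage_token"
export INTEGTEST_WORKSPACE_SCHEMA="your_integtest_workspace_schema"

# Then run the integration test suite:
# uv run tox -e integtests
```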

Updating uv.lock

Update the uv.lock file if you have added or removed dependencies. Also consider updating the lock with newer dependency versions when creating a release (uv lock --upgrade).

Support and Feedback

⭐ The primary way to get help, report bugs, or request features is by opening an issue on GitHub. ⭐

The development team actively monitors issues and will respond as quickly as possible. For general information about Keboola, please use the resources below.

Resources

Connect
