
"SQL Queries for Snowflake Database Administration" matching MCP tools:

  • Get the full intelligence profile for a brand by its URL slug. Args: slug: URL-safe brand identifier (e.g. "pacvue", "hubspot", "snowflake"). Use search_brands to discover slugs if unsure. Returns: Full brand profile including company overview (3 paragraphs), signal summary, structured FAQs, vertical, tier/rank, website, tags, and source URL. Returns an error dict if the brand is not found.
    Connector
  • Execute a SQL query on Baselight and wait for results (up to 1 minute). The query executes and returns the first 100 rows upon completion, or info about a pending query that needs more time. Use DuckDB syntax only, table format "@username.dataset.table" (double-quoted), SELECT queries only (no DDL/DML), no semicolon terminators, use LIMIT not TOP. If query is still PENDING, use `sdk-get-results` to continue polling. If totalResults > returned rows, use `sdk-get-results` with offset to paginate.
    Connector
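
As an illustration of the constraints above, a minimal sketch of a Baselight query; the table path is the tool's literal placeholder format and the column names are hypothetical:

```sql
-- DuckDB syntax; double-quoted "@username.dataset.table" path;
-- LIMIT rather than TOP; no trailing semicolon
SELECT city, COUNT(*) AS venue_count
FROM "@username.dataset.table"
GROUP BY city
ORDER BY venue_count DESC
LIMIT 100
```
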
  • Import data into a Cloud SQL instance. If the file doesn't start with `gs://`, then the assumption is that the file is stored locally. If the file is local, then the file must be uploaded to Cloud Storage before you can make the actual `import_data` call. To upload the file to Cloud Storage, you can use the `gcloud` or `gsutil` commands. Before you upload the file to Cloud Storage, consider whether you want to use an existing bucket or create a new bucket in the provided project. After the file is uploaded to Cloud Storage, the instance service account must have sufficient permissions to read the uploaded file from the Cloud Storage bucket. This can be accomplished as follows: 1. Use the `get_instance` tool to get the email address of the instance service account. From the output of the tool, get the value of the `serviceAccountEmailAddress` field. 2. Grant the instance service account the `storage.objectAdmin` role on the provided Cloud Storage bucket. Use a command like `gcloud storage buckets add-iam-policy-binding` or a request to the Cloud Storage API. It can take from two to seven minutes or more for the role to be granted and the permissions to be propagated to the service account in Cloud Storage. If you encounter a permissions error after updating the IAM policy, then wait a few minutes and try again. After permissions are granted, you can import the data. We recommend that you leave optional parameters empty and use the system defaults. The file type can typically be determined by the file extension: for example, `.sql` for a SQL file or `.csv` for a CSV file. The following is a sample SQL `importContext` for MySQL. ``` { "uri": "gs://sample-gcs-bucket/sample-file.sql", "kind": "sql#importContext", "fileType": "SQL" } ``` There is no `database` parameter present for MySQL since the database name is expected to be present in the SQL file. Specify only one URI. No other fields are required outside of `importContext`. For PostgreSQL, the `database` field is required. The following is a sample PostgreSQL `importContext` with the `database` field specified. ``` { "uri": "gs://sample-gcs-bucket/sample-file.sql", "kind": "sql#importContext", "fileType": "SQL", "database": "sample-db" } ``` The `import_data` tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
    Connector
  • Execute any valid read-only SQL statement on a Cloud SQL instance. To support the `execute_sql_readonly` tool, a Cloud SQL instance must meet the following requirements: * The value of `data_api_access` must be set to `ALLOW_DATA_API`. * For a MySQL instance, the database flag `cloudsql_iam_authentication` must be set to `on`. For a PostgreSQL instance, the database flag `cloudsql.iam_authentication` must be set to `on`. * An IAM user account or IAM service account (`CLOUD_IAM_USER` or `CLOUD_IAM_SERVICE_ACCOUNT`) is required to call the `execute_sql_readonly` tool. The tool executes the SQL statements using the privileges of the database user logged in with IAM database authentication. After you use the `create_instance` tool to create an instance, you can use the `create_user` tool to create an IAM user account for the user currently logged in to the project. The `execute_sql_readonly` tool has the following limitations: * If a SQL statement returns a response larger than 10 MB, then the response will be truncated. * The tool has a default timeout of 30 seconds. If a query runs longer than 30 seconds, then the tool returns a `DEADLINE_EXCEEDED` error. * The tool isn't supported for SQL Server. If you receive errors similar to "IAM authentication is not enabled for the instance", then you can use the `get_instance` tool to check the value of the IAM database authentication flag for the instance. If you receive errors like "The instance doesn't allow using executeSql to access this instance", then you can use the `get_instance` tool to check the `data_api_access` setting. When you receive authentication errors: 1. Check if the currently logged-in user account exists as an IAM user on the instance using the `list_users` tool. 2. If the IAM user account doesn't exist, then use the `create_user` tool to create the IAM user account for the logged-in user. 3. If the currently logged-in user doesn't have the proper database user roles, then you can use the `update_user` tool to grant database roles to the user. For example, the `cloudsqlsuperuser` role can provide an IAM user with many required permissions. 4. Check if the currently logged-in user has the correct IAM permissions assigned for the project. You can use the `gcloud projects get-iam-policy [PROJECT_ID]` command to check if the user has the proper IAM roles or permissions assigned for the project. * The user must have the `cloudsql.instances.login` permission to use automatic IAM database authentication. * The user must have the `cloudsql.instances.executeSql` permission to execute SQL statements using the `execute_sql_readonly` tool or the `executeSql` API. * Common IAM roles that contain the required permissions: Cloud SQL Instance User (`roles/cloudsql.instanceUser`) or Cloud SQL Admin (`roles/cloudsql.admin`). When receiving an `ExecuteSqlResponse`, always check the `message` and `status` fields within the response body. A successful HTTP status code doesn't guarantee full success of all SQL statements. The `message` and `status` fields will indicate if there were any partial errors or warnings during SQL statement execution.
    Connector
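
A minimal sketch of a harmless read-only statement for the tool above, assuming a PostgreSQL instance; it confirms which database user IAM authentication resolved to:

```sql
-- Read-only check (PostgreSQL): who am I, and do I hold superuser-like rights?
SELECT current_user AS connected_as, rolsuper
FROM pg_roles
WHERE rolname = current_user;
```
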
  • REQUIRED before stock_data_query: 18 SQL patterns that prevent timeouts and wrong results. Must be called once per session, immediately after get_database_schema. Contains query patterns for time-series selection, return calculations, screening joins, window functions, backtesting, and performance optimization. Time-series queries will time out or return wrong results without these patterns. After this tool returns, call stock_data_query to execute SQL.
    Connector
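
A hedged sketch of the kind of window-function pattern the tool above documents; the prices table and its columns are hypothetical:

```sql
-- Daily returns via LAG(), partitioned per symbol to avoid cross-symbol leakage
SELECT symbol,
       trade_date,
       close_price / LAG(close_price) OVER (
         PARTITION BY symbol ORDER BY trade_date
       ) - 1 AS daily_return
FROM prices
WHERE symbol IN ('AAPL', 'MSFT')
  AND trade_date >= '2024-01-01'
```
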
  • List all Gmail labels for the authenticated user. Returns both system labels (INBOX, SENT, TRASH, etc.) and user-created labels with message/thread counts. Use this to discover label IDs needed for add_labels, remove_labels, or search_email queries.
    Connector

Matching MCP Servers

  • Enables AI assistants to securely connect to Snowflake data warehouses and execute SQL queries through natural language interactions. Supports multiple authentication methods and provides formatted query results with built-in security controls.
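
To connect this entry to the search topic, typical Snowflake administration statements such a server would relay look like the sketch below; the warehouse and role names are illustrative:

```sql
-- Inspect compute resources
SHOW WAREHOUSES;
-- Create a role and grant it warehouse usage (names are placeholders)
CREATE ROLE IF NOT EXISTS analyst_role;
GRANT USAGE ON WAREHOUSE compute_wh TO ROLE analyst_role;
```
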

Matching MCP Connectors

  • Access comprehensive company data including financial records, ownership structures, and contact information. Search for businesses using domains, registration numbers, or LinkedIn profiles to streamline due diligence and lead generation. Retrieve historical financial performance and complex corporate group structures to support informed business analysis.

  • Give your AI agent a phone. Place outbound calls to US businesses to ask, book, or confirm.

  • Describe a specific table. ⚠️ WORKFLOW: ALWAYS call this before writing queries that reference a table. Understanding the schema is essential for writing correct SQL queries. 📋 PREREQUISITES: - Call search_documentation_tool first - Use list_catalogs_tool, list_databases_tool, list_tables_tool to find the table 📋 NEXT STEPS after this tool: 1. Use generate_spatial_query_tool to create SQL using the schema 2. Use execute_query_tool to test the query This tool retrieves the schema of a specified table, including column names and types. It is used to understand the structure of a table before querying or analysis. Parameters ---------- catalog : str The name of the catalog. database : str The name of the database. table : str The name of the table. ctx : Context FastMCP context (injected automatically) Returns ------- TableDescriptionOutput A structured object containing the table schema information. - 'schema': The schema of the table, which may include column names, types, and other metadata. Example Usage for LLM: - When user asks for the schema of a specific table. - Example User Queries and corresponding Tool Calls: - User: "What is the schema of the 'users' table in the 'default' database of the 'wherobots' catalog?" - Tool Call: describe_table('wherobots', 'default', 'users') - User: "Describe the buildings table structure" - Tool Call: describe_table('wherobots_open_data', 'overture', 'buildings')
    Connector
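
Once the schema is known, a follow-up query against the example buildings table might look like this sketch; the spatial function and column names are assumptions:

```sql
-- Apache Sedona-style spatial SQL; the geometry column name is assumed
SELECT id, ST_Area(geometry) AS footprint_area
FROM wherobots_open_data.overture.buildings
LIMIT 10
```
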
  • Run a read-only SQL query in the project and return the result. Prefer this tool over `execute_sql` if possible. This tool is restricted to only `SELECT` statements. `INSERT`, `UPDATE`, and `DELETE` statements and stored procedures aren't allowed. If the query doesn't include a `SELECT` statement, an error is returned. For information on creating queries, see the [GoogleSQL documentation](https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax). Example Queries: -- Count the number of penguins in each island. SELECT island, COUNT(*) AS population FROM bigquery-public-data.ml_datasets.penguins GROUP BY island -- Evaluate a bigquery ML Model. SELECT * FROM ML.EVALUATE(MODEL `my_dataset.my_model`) -- Evaluate BigQuery ML model on custom data SELECT * FROM ML.EVALUATE(MODEL `my_dataset.my_model`, (SELECT * FROM `my_dataset.my_table`)) -- Predict using BigQuery ML model: SELECT * FROM ML.PREDICT(MODEL `my_dataset.my_model`, (SELECT * FROM `my_dataset.my_table`)) -- Forecast data using AI.FORECAST SELECT * FROM AI.FORECAST(TABLE `project.dataset.my_table`, data_col => 'num_trips', timestamp_col => 'date', id_cols => ['usertype'], horizon => 30) Queries executed using the `execute_sql_readonly` tool will have the job label `goog-mcp-server: true` automatically set. Queries are charged to the project specified in the `project_id` field.
    Connector
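
For the administration angle, a read-only metadata query in GoogleSQL also fits the tool above; this sketch lists tables in the public dataset used in its examples:

```sql
-- INFORMATION_SCHEMA is read-only, so it passes the SELECT-only restriction
SELECT table_name, table_type, creation_time
FROM `bigquery-public-data.ml_datasets`.INFORMATION_SCHEMA.TABLES
ORDER BY creation_time DESC
```
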
  • Search BGPT's database of scientific papers by keyword. Args: query: Search terms (e.g. "CRISPR gene editing efficiency") Short, concise queries are best. English language only. Don't include years or filters — use the days_back and num_results params instead. num_results: Number of results to return (1-100, default 16). First 50 results are free, then billed at $0.01/result for paid users. days_back: Only return papers published within the last N days. api_key: Optional: Your Stripe subscription ID for paid access. Get one at https://bgpt.pro/mcp Returns: Papers with title, DOI, Raw Data, methods, results, quality scores, and 25+ metadata fields.
    Connector
  • Create a database user for a Cloud SQL instance. * This tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes. * When you use the `create_user` tool, specify the type of user: `CLOUD_IAM_USER`, `CLOUD_IAM_SERVICE_ACCOUNT`, or `BUILT_IN`. * By default, the newly created user is assigned the `cloudsqlsuperuser` role, unless you specify other database roles explicitly in the request. * You can use a newly created user with the `execute_sql` tool if the user is a currently logged-in IAM user. The `execute_sql` tool executes the SQL statements using the privileges of the database user logged in using IAM database authentication. The `create_user` tool has the following limitations: * To create a built-in user with a password, use the `password_secret_version` field to provide the password using Google Cloud Secret Manager. The value of `password_secret_version` should be the resource name of the secret version, like `projects/12345/locations/us-central1/secrets/my-password-secret/versions/1` or `projects/12345/locations/us-central1/secrets/my-password-secret/versions/latest`. The caller needs to have the `secretmanager.secretVersions.access` permission on the secret version. * The `create_user` tool doesn't support creating a user for SQL Server. To create an IAM user in PostgreSQL: * The database username must be the IAM user's email address, all lowercase. For example, to create a user for the PostgreSQL IAM user `example-user@example.com`, you can use the following request: ``` { "name": "example-user@example.com", "type": "CLOUD_IAM_USER", "instance": "test-instance", "project": "test-project" } ``` The created database username for the IAM user is `example-user@example.com`. To create an IAM service account in PostgreSQL: * The database username must be created without the `.gserviceaccount.com` suffix even though the full email address for the account is `service-account-name@project-id.iam.gserviceaccount.com`. For example, to create an IAM service account for PostgreSQL, you can use the following request format: ``` { "name": "test@test-project.iam", "type": "CLOUD_IAM_SERVICE_ACCOUNT", "instance": "test-instance", "project": "test-project" } ``` The created database username for the IAM service account is `test@test-project.iam`. To create an IAM user or IAM service account in MySQL: * When Cloud SQL for MySQL stores a username, it truncates the @ and the domain name from the user or service account's email address. For example, `example-user@example.com` becomes `example-user`. * For this reason, you can't add two IAM users or service accounts with the same username but different domain names to the same Cloud SQL instance. * For example, to create a user for the MySQL IAM user `example-user@example.com`, use the following request: ``` { "name": "example-user@example.com", "type": "CLOUD_IAM_USER", "instance": "test-instance", "project": "test-project" } ``` The created database username for the IAM user is `example-user`. * For example, to create the MySQL IAM service account `service-account-name@project-id.iam.gserviceaccount.com`, use the following request: ``` { "name": "service-account-name@project-id.iam.gserviceaccount.com", "type": "CLOUD_IAM_SERVICE_ACCOUNT", "instance": "test-instance", "project": "test-project" } ``` The created database username for the IAM service account is `service-account-name`.
    Connector
  • Execute raw, client-provided SQL queries against an ephemeral database initialized with the provided schema. Returns query results in a simple JSON format with column headers and row data as a 2D array. The database type (SQLite or Postgres) is specified via the databaseType parameter: - SQLITE: In-memory, lightweight, uses standard SQLite syntax - POSTGRES: Temporary isolated schema with dedicated user, uses PostgreSQL syntax and features WHEN TO USE: When you need to run your own hand-written SQL queries to test database behavior or compare the output with ExoQuery results from validateAndRunExoquery. This lets you verify that ExoQuery-generated SQL produces the same results as your expected SQL. INPUT REQUIREMENTS: - query: A valid SQL query (SELECT, INSERT, UPDATE, DELETE, etc.) - schema: SQL schema with CREATE TABLE and INSERT statements to initialize the test database - databaseType: Either "SQLITE" or "POSTGRES" (defaults to SQLITE if not specified) OUTPUT FORMAT: On success, returns JSON with the SQL query and a 2D array of results: {"sql":"SELECT * FROM users ORDER BY id","output":[["id","name","age"],["1","Alice","30"],["2","Bob","25"],["3","Charlie","35"]]} Output format details: - First array element contains column headers - Subsequent array elements contain row data - All values are returned as strings On error, returns JSON with error message and the attempted query (if available): {"error":"Query execution failed: no such table: USERS","sql":"SELECT * FROM USERS"} Or if schema initialization fails: {"error":"Database initialization failed due to: near \"CREAT\": syntax error\\nWhen executing the following statement:\\n--------\\nCREAT TABLE users ...\\n--------","sql":"CREAT TABLE users ..."} EXAMPLE INPUT: Query: SELECT * FROM users ORDER BY id Schema: CREATE TABLE users ( id INTEGER PRIMARY KEY, name TEXT NOT NULL, age INTEGER ); INSERT INTO users (id, name, age) VALUES (1, 'Alice', 30); INSERT INTO users (id, name, age) VALUES (2, 'Bob', 25); INSERT INTO users (id, name, age) VALUES (3, 'Charlie', 35); EXAMPLE SUCCESS OUTPUT: {"sql":"SELECT * FROM users ORDER BY id","output":[["id","name","age"],["1","Alice","30"],["2","Bob","25"],["3","Charlie","35"]]} EXAMPLE ERROR OUTPUT (bad table name): {"error":"Query execution failed: no such table: invalid_table","sql":"SELECT * FROM invalid_table"} EXAMPLE ERROR OUTPUT (bad schema): {"error":"Database initialization failed due to: near \"CREAT\": syntax error\\nWhen executing the following statement:\\n--------\\nCREAT TABLE users (id INTEGER)\\n--------\\nCheck that the initialization SQL is valid and compatible with SQLite.","sql":"CREAT TABLE users (id INTEGER)"} COMMON QUERY EXAMPLES: Select all rows: SELECT * FROM users Select specific columns with filtering: SELECT name, age FROM users WHERE age > 25 Aggregate functions: SELECT COUNT(*) as total FROM users Join queries: SELECT u.name, o.total FROM users u JOIN orders o ON u.id = o.user_id Insert data: INSERT INTO users (name, age) VALUES ('David', 40) Update data: UPDATE users SET age = 31 WHERE name = 'Alice' Delete data: DELETE FROM users WHERE age < 25 Count with grouping: SELECT age, COUNT(*) as count FROM users GROUP BY age SCHEMA RULES: - Use standard SQLite syntax - Table names are case-sensitive (use lowercase for simplicity or quote names) - Include INSERT statements to populate test data for meaningful results - Supported data types: INTEGER, TEXT, REAL, BLOB, NULL - Use INTEGER PRIMARY KEY for auto-increment columns - Schema SQL is split on semicolons (;), 
so each statement after a ';' is executed separately - Avoid semicolons in comments as they will cause statement parsing issues COMPARISON WITH EXOQUERY: This tool is designed to work alongside validateAndRunExoquery for comparison purposes: 1. Use validateAndRunExoquery to run ExoQuery Kotlin code and see the generated SQL + results 2. Use runRawSql with your own hand-written SQL to verify you get the same output 3. Compare the outputs to ensure ExoQuery generates the SQL you expect 4. Test edge cases with plain SQL before writing equivalent ExoQuery code
    Connector
  • Execute any valid SQL statement, including data definition language (DDL), data control language (DCL), data query language (DQL), or data manipulation language (DML) statements, on a Cloud SQL instance. To support the `execute_sql` tool, a Cloud SQL instance must meet the following requirements: * The value of `data_api_access` must be set to `ALLOW_DATA_API`. * For `BUILT_IN` users, `password_secret_version` must be set. * Otherwise, for IAM users: for a MySQL instance, the database flag `cloudsql_iam_authentication` must be set to `on`; for a PostgreSQL instance, the database flag `cloudsql.iam_authentication` must be set to `on`. * After you use the `create_instance` tool to create an instance, you can use the `create_user` tool to create an IAM user account for the user currently logged in to the project. The `execute_sql` tool has the following limitations: * If a SQL statement returns a response larger than 10 MB, then the response will be truncated. * The `execute_sql` tool has a default timeout of 30 seconds. If a query runs longer than 30 seconds, then the tool returns a `DEADLINE_EXCEEDED` error. * The `execute_sql` tool isn't supported for SQL Server. If you receive errors similar to "IAM authentication is not enabled for the instance", then you can use the `get_instance` tool to check the value of the IAM database authentication flag for the instance. If you receive errors like "The instance doesn't allow using executeSql to access this instance", then you can use the `get_instance` tool to check the `data_api_access` setting. When you receive authentication errors: 1. Check if the currently logged-in user account exists as an IAM user on the instance using the `list_users` tool. 2. If the IAM user account doesn't exist, then use the `create_user` tool to create the IAM user account for the logged-in user. 3. If the currently logged-in user doesn't have the proper database user roles, then you can use the `update_user` tool to grant database roles to the user. For example, the `cloudsqlsuperuser` role can provide an IAM user with many required permissions. 4. Check if the currently logged-in user has the correct IAM permissions assigned for the project. You can use the `gcloud projects get-iam-policy [PROJECT_ID]` command to check if the user has the proper IAM roles or permissions assigned for the project. * The user must have the `cloudsql.instances.login` permission to use automatic IAM database authentication. * The user must have the `cloudsql.instances.executeSql` permission to execute SQL statements using the `execute_sql` tool or the `executeSql` API. * Common IAM roles that contain the required permissions: Cloud SQL Instance User (`roles/cloudsql.instanceUser`) or Cloud SQL Admin (`roles/cloudsql.admin`). When receiving an `ExecuteSqlResponse`, always check the `message` and `status` fields within the response body. A successful HTTP status code doesn't guarantee full success of all SQL statements. The `message` and `status` fields will indicate if there were any partial errors or warnings during SQL statement execution.
    Connector
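
Since the tool above accepts DDL, DML, and DCL, a session sketch might look like the following (PostgreSQL dialect; the table and the grantee are hypothetical):

```sql
-- DDL: create a table if missing
CREATE TABLE IF NOT EXISTS audit_log (
  id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  action TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT now()
);
-- DML: record an event
INSERT INTO audit_log (action) VALUES ('instance provisioned');
-- DCL: grant read access to an IAM database user (the username is the email address)
GRANT SELECT ON audit_log TO "example-user@example.com";
```
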
  • Run a query on VirtualFlyBrain using a VFB ID and query type. Supports batch requests — pass an array of IDs to run the same query_type on all of them, or use the queries array for mixed ID/query_type combinations. When multiple queries are provided, results are returned as a JSON object keyed by "ID::query_type". IMPORTANT: Do NOT pass tool names (like "get_term_info" or "search_terms") as query_type — those are separate tools. Valid query_types are returned by get_term_info in the Queries array for each entity. Common query_types include: PaintedDomains, AllAlignedImages, AlignedDatasets, AllDatasets (for templates); SimilarMorphologyTo, NeuronInputsTo, NeuronNeuronConnectivityQuery (for neurons); ListAllAvailableImages, SubclassesOf, PartsOf, NeuronsPartHere, NeuronsSynaptic, ExpressionOverlapsHere (for classes). Available query_types vary by entity type — ALWAYS call get_term_info FIRST to see which queries are available for a given ID, as attempting invalid query types will result in an error message directing you to use get_term_info.
    Connector
  • Compile ExoQuery Kotlin code and EXECUTE it against a SQLite database with the provided schema. ExoQuery is a compile-time SQL query builder that translates Kotlin DSL expressions into SQL. WHEN TO USE: When you need to verify ExoQuery produces correct results against actual data. INPUT REQUIREMENTS: - Complete Kotlin code (same requirements as validateExoquery) - SQL schema with CREATE TABLE and INSERT statements for test data - Data classes MUST exactly match the schema table structure - Column names in data classes must match the schema (use @SerialName for snake_case columns) - Must include one or more .runSample() calls in main() to trigger SQL generation and execution (note that .runSample() is NOT for real production use; use .runOn(database) instead) OUTPUT FORMAT: Returns one or more JSON objects, each on its own line. Each object can be: 1. SQL with output (query executed successfully): {"sql": "SELECT u.name FROM \"User\" u", "output": "[(name=Alice), (name=Bob)]"} 2. Output only (e.g., print statements, intermediate results): {"output": "Before: [(id=1, title=Ion Blend Beans)]"} 3. Error output (runtime errors, exceptions): {"outputErr": "java.sql.SQLException: Table \"USERS\" not found"} Multiple results appear when code has multiple queries or print statements: {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans, unit_price=32.00, in_stock=25)]"} {"output": "Before:"} {"sql": "INSERT INTO \"InventoryItem\" (title, unit_price, in_stock) VALUES (?, ?, ?)", "output": "Rows affected: 1"} {"output": "After:"} {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans, unit_price=32.00, in_stock=25), (id=2, title=Luna Fuel Flask, unit_price=89.50, in_stock=6)]"} Compilation errors return the same format as validateExoquery: { "errors": { "File.kt": [ { "interval": {"start": {"line": 12, "ch": 10}, "end": {"line": 12, "ch": 15}}, "message": "Type mismatch: inferred type is String but Int was expected", "severity": "ERROR", "className": "ERROR" } ] } } Runtime errors can have the following format: { "errors" : { "File.kt" : [ ] }, "exception" : { "message" : "[SQLITE_ERROR] SQL error or missing database (no such table: User)", "fullName" : "org.sqlite.SQLiteException", "stackTrace" : [ { "className" : "org.sqlite.core.DB", "methodName" : "newSQLException", "fileName" : "DB.java", "lineNumber" : 1179 }, ...] }, "text" : "<outStream><outputObject>\n{\"sql\": \"SELECT x.id, x.name, x.age FROM User x\"}\n</outputObject>\n</outStream>" } If a SQL query was generated before the error, it will appear in the "text" field output stream.
EXAMPLE INPUT CODE: ```kotlin import io.exoquery.* import kotlinx.serialization.Serializable import kotlinx.serialization.SerialName @Serializable data class User(val id: Int, val name: String, val age: Int) @Serializable data class Order(val id: Int, @SerialName("user_id") val userId: Int, val total: Int) val userOrders = sql.select { val u = from(Table<User>()) val o = join(Table<Order>()) { o -> o.userId == u.id } Triple(u.name, o.total, u.age) } fun main() = userOrders.buildPrettyFor.Sqlite().runSample() ``` EXAMPLE INPUT SCHEMA: ```sql CREATE TABLE "User" (id INT, name VARCHAR(100), age INT); CREATE TABLE "Order" (id INT, user_id INT, total INT); INSERT INTO "User" (id, name, age) VALUES (1, 'Alice', 30), (2, 'Bob', 25); INSERT INTO "Order" (id, user_id, total) VALUES (1, 1, 100), (2, 1, 200), (3, 2, 150); ``` EXAMPLE SUCCESS OUTPUT: {"sql": "SELECT u.name AS first, o.total AS second, u.age AS third FROM \"User\" u INNER JOIN \"Order\" o ON o.user_id = u.id", "output": "[(first=Alice, second=100, third=30), (first=Alice, second=200, third=30), (first=Bob, second=150, third=25)]"} EXAMPLE WITH MULTIPLE OPERATIONS (insert with before/after check): {"output": "Before:"} {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans)]"} {"sql": "INSERT INTO \"InventoryItem\" (title, unit_price, in_stock) VALUES (?, ?, ?)", "output": ""} {"output": "After:"} {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans), (id=2, title=Luna Fuel Flask)]"} EXAMPLE RUNTIME ERROR (if a user divided by zero): {"outputErr": "Exception in thread "main" java.lang.ArithmeticException: / by zero"} KEY PATTERNS: (See validateExoquery for complete pattern reference) Summary of most common patterns: - Filter: sql { Table<T>().filter { x -> x.field == value } } - Select: sql.select { val x = from(Table<T>()); where { ... }; x } - Join: sql.select { val a = from(Table<A>()); val b = join(Table<B>()) { b -> b.aId == a.id }; Pair(a, b) } - Left join: joinLeft(Table<T>()) { ... } returns nullable - Insert: sql { insert<T> { setParams(obj).excluding(id) } } - Update: sql { update<T>().set { it.field to value }.where { it.id == x } } - Delete: sql { delete<T>().where { it.id == x } } SCHEMA RULES: - Table names should match data class names (case-sensitive, use quotes for exact match) - Column names must match @SerialName values or property names - Include realistic test data to verify query logic - Sqlite database syntax (mostly compatible with standard SQL) COMMON PATTERNS: - JSON columns: Use VARCHAR for storage, @SqlJsonValue on the nested data class - Auto-increment IDs: Use INTEGER PRIMARY KEY - Nullable columns: Use Type? in Kotlin, allow NULL in schema
    Connector
  • Build a Tableau dashboard from a Microsoft SQL Server table (end-to-end). Pipeline: MSSQL → schema inference → chart suggestion → workbook creation → live MSSQL connection → .twb output. Requires pyodbc for schema inference and ODBC Driver 17 for SQL Server. IMPORTANT FOR AI AGENTS: see ``csv_to_dashboard`` — auto-charts come from rules, not natural-language requests. Use ``required_charts`` to guarantee specific charts, ``reference_image`` for image-based styling, and cite the returned manifest dict when describing results. Args: server_host: MSSQL server hostname. dbname: Database name. table_name: Table to visualize. username: Database username (ignored if trusted_connection=True). password: Database password (used for schema inference only). port: Server port (default 1433). trusted_connection: Use Windows Authentication instead of SQL auth. output_path: Output .twb path (defaults to <table>_dashboard.twb). dashboard_title: Dashboard title. max_charts: Maximum charts (0 = use rules default). template_path: TWB template path. theme: Theme preset name. rules_yaml: Optional YAML string with dashboard rules overrides. required_charts: See ``csv_to_dashboard.required_charts``. reference_image: See ``csv_to_dashboard.reference_image``. Returns: Structured manifest dict describing what was actually built.
    Connector
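
A sketch of the kind of T-SQL metadata lookup the schema-inference step above depends on; the table name 'Sales' is hypothetical:

```sql
SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Sales'
ORDER BY ORDINAL_POSITION;
```
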
  • Configures a Marketing Mix Modeling (MMM) study for a project. **What is MMM?** Marketing Mix Modeling measures the real contribution of each marketing channel (Google Ads, Meta, etc.) on a KPI (leads, revenue, conversions), accounting for external factors (seasonality, holidays, promotions). **Recommended workflow:** 1. Use get_schema_context to discover the project's tables/columns 2. Generate input SQL queries (KPI, channels, exogenous variables) 3. **Validate each query before calling setup_mmm:** Use execute_query to run a COUNT(*) wrapper on each input query (e.g., SELECT COUNT(*) FROM (<query>)). If any query returns 0 rows, do NOT include it in setup_mmm — warn the user that the data source is empty and ask whether to proceed without it or fix the query. 4. Call setup_mmm with the validated SQL queries — the study is automatically launched after setup 5. Do NOT call run_mmm after setup_mmm: the first run is triggered automatically **Important:** run_mmm is only needed to RE-RUN an existing study later, not after initial setup. **Input queries format:** Each query must return a "time" column (DATE) and the requested metrics. - role="kpi": a "kpi" column (the target KPI) - role="channel": "spend" and "impressions" columns + channel_name - role="exogenous": columns named after the exogenous variables + columns[] **Granularity**: "weekly" is recommended (MMM standard). SQL should aggregate by week. **Important**: Adapt the SQL dialect to the project's data warehouse type (BigQuery, Snowflake, Redshift).
    Connector
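
A sketch of steps 2 and 3 of the workflow above, in BigQuery dialect; the project, table, and column names are hypothetical:

```sql
-- Step 2: KPI input query, aggregated weekly, returning the required
-- "time" (DATE) and "kpi" columns
SELECT DATE_TRUNC(event_date, WEEK) AS time,
       COUNT(*) AS kpi
FROM `my_project.analytics.leads`
GROUP BY time
ORDER BY time;

-- Step 3: validation wrapper; if this returns 0, do not pass the query to setup_mmm
SELECT COUNT(*)
FROM (
  SELECT DATE_TRUNC(event_date, WEEK) AS time, COUNT(*) AS kpi
  FROM `my_project.analytics.leads`
  GROUP BY time
);
```
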
  • Execute a read-only SQL query against the target connection. ONLY SELECT / WITH / EXPLAIN permitted. Write dialect-appropriate SQL for the connection's engine — use PostgreSQL syntax for postgres connections (`SELECT NOW()`, `LIMIT`, `ILIKE`), T-SQL for mssql (`SELECT GETDATE()`, `TOP N`, `LIKE`), MySQL for mysql (`SELECT NOW()`, `LIMIT`). Response meta includes `connection` + `dialect` so you know which syntax worked; reuse that dialect in follow-up calls. Default LIMIT 100 unless the user asks for all rows.
    Connector
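
The same intent written for each dialect the tool above supports, as a sketch; the orders table is hypothetical:

```sql
-- PostgreSQL: ILIKE and LIMIT
SELECT id, customer_name FROM orders
WHERE customer_name ILIKE '%acme%' ORDER BY created_at DESC LIMIT 100;

-- T-SQL (mssql): TOP and LIKE
SELECT TOP 100 id, customer_name FROM orders
WHERE customer_name LIKE '%acme%' ORDER BY created_at DESC;

-- MySQL: LIKE and LIMIT
SELECT id, customer_name FROM orders
WHERE customer_name LIKE '%acme%' ORDER BY created_at DESC LIMIT 100;
```
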
  • Execute a SQL query on a site's database. Supports SELECT, INSERT, UPDATE, DELETE, and DDL statements. Results are limited to 1000 rows for SELECT queries. Requires: API key with write scope. Args: slug: Site identifier database: Database name query: SQL query string Returns: {"columns": ["id", "title"], "rows": [[1, "Hello"], ...], "affected_rows": 0, "query_time_ms": 12}
    Connector
  • WHEN: developer needs correct X++ select or T-SQL for D365 tables with proper joins. Triggers: 'X++ select', 'generate a query', 'SQL for', 'join with', 'how to query', 'générer une requête', 'write a select statement', 'select from', 'X++ query for', 'requête X++', 'écrire une select'. Generate both X++ select statements and equivalent T-SQL queries for D365 F&O tables. Uses real field names, relations, and indexes from the knowledge base to produce correct joins. Supports: field selection, multi-table joins (auto-detects relations), WHERE filters, ORDER BY, TOP/firstonly, cross-company. Also accepts natural language descriptions like 'find all open sales orders for customer 1001 with CustTable join'. [!] For multi-table joins, call find_related_objects (or get_relation_graph if the relation index is loaded) FIRST to get the correct FK relations -- this tool will then produce accurate join conditions. [!] The generated X++ is a template -- adapt it to your custom code context before using in production. Returns side-by-side X++ and SQL with explanations.
    Connector
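
A sketch of the T-SQL half of such a pairing, for the "open sales orders for customer 1001" example mentioned above; the field names and status value are illustrative, not verified against the knowledge base:

```sql
SELECT TOP 50 s.SALESID, s.CUSTACCOUNT, s.SALESSTATUS
FROM SALESTABLE s
JOIN CUSTTABLE c ON c.ACCOUNTNUM = s.CUSTACCOUNT
WHERE s.CUSTACCOUNT = '1001'
  AND s.SALESSTATUS = 1; -- assumed enum value for "open"
```
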
  • List all databases in a given catalog. ⚠️ WORKFLOW: Call this after list_catalogs_tool to explore a specific catalog. 📋 PREREQUISITES: - Call search_documentation_tool first to understand what you're looking for - Call list_catalogs_tool to discover available catalogs 📋 NEXT STEPS after this tool: 1. Use list_tables_tool to find tables in a database 2. Use describe_table_tool to get table schemas before writing queries This tool retrieves all databases within a specified catalog. Parameters ---------- catalog : str The name of the catalog. ctx : Context FastMCP context (injected automatically) Returns ------- DatabaseListOutput A structured object containing database information. - 'catalog': The catalog name. - 'databases': List of database names. - 'count': Number of databases found. Example Usage for LLM: - When user asks for a specific catalog's databases. - Example User Queries and corresponding Tool Calls: - User: "List all databases in the 'wherobots' catalog." - Tool Call: list_databases('wherobots') - User: "What databases are in the foursquare catalog?" - Tool Call: list_databases('foursquare')
    Connector