Glama
114,411 tools. Last updated 2026-04-21 09:37
  • Describe a specific table. ⚠️ WORKFLOW: ALWAYS call this before writing queries that reference a table. Understanding the schema is essential for writing correct SQL queries. 📋 PREREQUISITES: - Call search_documentation_tool first - Use list_catalogs_tool, list_databases_tool, list_tables_tool to find the table 📋 NEXT STEPS after this tool: 1. Use generate_spatial_query_tool to create SQL using the schema 2. Use execute_query_tool to test the query This tool retrieves the schema of a specified table, including column names and types. It is used to understand the structure of a table before querying or analysis. Parameters ---------- catalog : str The name of the catalog. database : str The name of the database. table : str The name of the table. ctx : Context FastMCP context (injected automatically) Returns ------- TableDescriptionOutput A structured object containing the table schema information. - 'schema': The schema of the table, which may include column names, types, and other metadata. Example Usage for LLM: - When user asks for the schema of a specific table. - Example User Queries and corresponding Tool Calls: - User: "What is the schema of the 'users' table in the 'default' database of the 'wherobots' catalog?" - Tool Call: describe_table('wherobots', 'default', 'users') - User: "Describe the buildings table structure" - Tool Call: describe_table('wherobots_open_data', 'overture', 'buildings')
    Connector
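As an illustration only: the schema lookup this tool performs can be approximated locally with SQLite's `PRAGMA table_info`. The `describe_table` helper below is hypothetical and merely sketches the shape of a `TableDescriptionOutput`-style result; it is not the connector's actual implementation.

```python
import sqlite3

def describe_table(conn, table):
    """Return column name/type metadata, loosely analogous to TableDescriptionOutput.
    PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk)."""
    rows = conn.execute(f'PRAGMA table_info("{table}")').fetchall()
    return [{"name": r[1], "type": r[2], "nullable": not r[3]} for r in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
print(describe_table(conn, "users"))
# → [{'name': 'id', 'type': 'INTEGER', 'nullable': True},
#    {'name': 'name', 'type': 'TEXT', 'nullable': False}]
```

This is the step the workflow note insists on: inspect the columns before writing any query that references the table.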
  • Builds the schema context for generating BigQuery SQL queries. Returns relevant tables with their fields and semantic definitions. Call this function with the user's question before writing SQL. **IMPORTANT for SQL queries**: Use ONLY the `dataset.table` format (e.g., `prod_google_ads_v2.campaign_stats`). NEVER add a project_id before table names. The `full_name` field of each table already contains the complete name to use in your queries.
    Connector
  • Execute a SQL query on a site's database. Supports SELECT, INSERT, UPDATE, DELETE, and DDL statements. Results are limited to 1000 rows for SELECT queries. Requires: API key with write scope. Args: slug: Site identifier database: Database name query: SQL query string Returns: {"columns": ["id", "title"], "rows": [[1, "Hello"], ...], "affected_rows": 0, "query_time_ms": 12}
    Connector
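A minimal sketch of the response shape described above (`columns`, `rows`, `affected_rows`), using SQLite in place of the site database. The `run_query` helper and the 1000-row cap are illustrative assumptions based on the description, not the connector's code.

```python
import sqlite3

def run_query(conn, query, params=()):
    """Shape results like the tool's documented response."""
    cur = conn.execute(query, params)
    if cur.description:  # SELECT-style statement: return columns + rows
        return {"columns": [c[0] for c in cur.description],
                "rows": [list(r) for r in cur.fetchall()][:1000],  # documented 1000-row limit
                "affected_rows": 0}
    # DML statement: report affected rows only
    return {"columns": [], "rows": [], "affected_rows": cur.rowcount}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
print(run_query(conn, "INSERT INTO posts (title) VALUES (?)", ("Hello",)))
print(run_query(conn, "SELECT id, title FROM posts"))
# → {'columns': ['id', 'title'], 'rows': [[1, 'Hello']], 'affected_rows': 0}
```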
  • Execute raw, client-provided SQL queries against an ephemeral database initialized with the provided schema. Returns query results in a simple JSON format with column headers and row data as a 2D array. The database type (SQLite or Postgres) is specified via the databaseType parameter: - SQLITE: In-memory, lightweight, uses standard SQLite syntax - POSTGRES: Temporary isolated schema with dedicated user, uses PostgreSQL syntax and features WHEN TO USE: When you need to run your own hand-written SQL queries to test database behavior or compare the output with ExoQuery results from validateAndRunExoquery. This lets you verify that ExoQuery-generated SQL produces the same results as your expected SQL. INPUT REQUIREMENTS: - query: A valid SQL query (SELECT, INSERT, UPDATE, DELETE, etc.) - schema: SQL schema with CREATE TABLE and INSERT statements to initialize the test database - databaseType: Either "SQLITE" or "POSTGRES" (defaults to SQLITE if not specified) OUTPUT FORMAT: On success, returns JSON with the SQL query and a 2D array of results: {"sql":"SELECT * FROM users ORDER BY id","output":[["id","name","age"],["1","Alice","30"],["2","Bob","25"],["3","Charlie","35"]]} Output format details: - First array element contains column headers - Subsequent array elements contain row data - All values are returned as strings On error, returns JSON with error message and the attempted query (if available): {"error":"Query execution failed: no such table: USERS","sql":"SELECT * FROM USERS"} Or if schema initialization fails: {"error":"Database initialization failed due to: near \"CREAT\": syntax error\\nWhen executing the following statement:\\n--------\\nCREAT TABLE users ...\\n--------","sql":"CREAT TABLE users ..."} EXAMPLE INPUT: Query: SELECT * FROM users ORDER BY id Schema: CREATE TABLE users ( id INTEGER PRIMARY KEY, name TEXT NOT NULL, age INTEGER ); INSERT INTO users (id, name, age) VALUES (1, 'Alice', 30); INSERT INTO users (id, name, age) VALUES (2, 'Bob', 25); 
INSERT INTO users (id, name, age) VALUES (3, 'Charlie', 35); EXAMPLE SUCCESS OUTPUT: {"sql":"SELECT * FROM users ORDER BY id","output":[["id","name","age"],["1","Alice","30"],["2","Bob","25"],["3","Charlie","35"]]} EXAMPLE ERROR OUTPUT (bad table name): {"error":"Query execution failed: no such table: invalid_table","sql":"SELECT * FROM invalid_table"} EXAMPLE ERROR OUTPUT (bad schema): {"error":"Database initialization failed due to: near \"CREAT\": syntax error\\nWhen executing the following statement:\\n--------\\nCREAT TABLE users (id INTEGER)\\n--------\\nCheck that the initialization SQL is valid and compatible with SQLite.","sql":"CREAT TABLE users (id INTEGER)"} COMMON QUERY EXAMPLES: Select all rows: SELECT * FROM users Select specific columns with filtering: SELECT name, age FROM users WHERE age > 25 Aggregate functions: SELECT COUNT(*) as total FROM users Join queries: SELECT u.name, o.total FROM users u JOIN orders o ON u.id = o.user_id Insert data: INSERT INTO users (name, age) VALUES ('David', 40) Update data: UPDATE users SET age = 31 WHERE name = 'Alice' Delete data: DELETE FROM users WHERE age < 25 Count with grouping: SELECT age, COUNT(*) as count FROM users GROUP BY age SCHEMA RULES: - Use standard SQLite syntax - Table names are case-sensitive (use lowercase for simplicity or quote names) - Include INSERT statements to populate test data for meaningful results - Supported data types: INTEGER, TEXT, REAL, BLOB, NULL - Use INTEGER PRIMARY KEY for auto-increment columns - Schema SQL is split on semicolons (;), so each statement after a ';' is executed separately - Avoid semicolons in comments as they will cause statement parsing issues COMPARISON WITH EXOQUERY: This tool is designed to work alongside validateAndRunExoquery for comparison purposes: 1. Use validateAndRunExoquery to run ExoQuery Kotlin code and see the generated SQL + results 2. Use runRawSql with your own hand-written SQL to verify you get the same output 3. 
Compare the outputs to ensure ExoQuery generates the SQL you expect 4. Test edge cases with plain SQL before writing equivalent ExoQuery code
    Connector
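The documented behavior — initialize an ephemeral database from the schema, run the query, return headers plus all-string row data — can be sketched for the SQLITE case like this. `run_raw_sql` is a hypothetical local stand-in, not the tool itself.

```python
import sqlite3

def run_raw_sql(query, schema):
    """Init an in-memory DB from the schema, run the query, and return
    column headers followed by rows, with every value stringified."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(schema)  # statements are split on ';'
        cur = conn.execute(query)
        headers = [c[0] for c in cur.description]
        rows = [[str(v) for v in row] for row in cur.fetchall()]
        return {"sql": query, "output": [headers] + rows}
    except sqlite3.Error as e:
        return {"error": str(e), "sql": query}
    finally:
        conn.close()

schema = """
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER);
INSERT INTO users VALUES (1, 'Alice', 30), (2, 'Bob', 25);
"""
print(run_raw_sql("SELECT * FROM users ORDER BY id", schema))
# → {'sql': ..., 'output': [['id', 'name', 'age'], ['1', 'Alice', '30'], ['2', 'Bob', '25']]}
```

Note how both the success shape (`output` 2D array, headers first, strings throughout) and the error shape (`error` plus attempted `sql`) mirror the formats documented above.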
  • Guided reporting and visualization for Senzing entity resolution results. Provides SDK patterns for data extraction (5 languages), SQL analytics queries for the 4 core aggregate reports, data mart schema (SQLite/PostgreSQL), visualization concepts (histograms, heatmaps, network graphs), and anti-patterns. Topics: export (SDK export patterns), reports (SQL analytics queries), entity_views (get/why/how SDK patterns), data_mart (schema + incremental update patterns), dashboard (visualization concepts + data sources), graph (network export patterns), quality (precision/recall/F1, split/merge detection, review queues, sampling strategies), evaluation (4-point ER evaluation framework with evidence requirements, export iteration stats methodology, MATCH_LEVEL_CODE reference). Returns decision trees when language/scale not specified.
    Connector
  • List all tables in a given database. ⚠️ WORKFLOW: Call this after list_databases_tool to find tables in a database. 📋 PREREQUISITES: - Call search_documentation_tool first - Call list_catalogs_tool and list_databases_tool to navigate to the database 📋 NEXT STEPS after this tool: 1. Use describe_table_tool to get the schema of tables you want to query 2. Use generate_spatial_query_tool to create SQL using the schema 3. Use execute_query_tool to test the query This tool retrieves all tables within a specified database in a catalog. It is used to explore the final level of the data hierarchy before accessing table schemas. Parameters ---------- catalog : str The name of the catalog. database : str The name of the database. ctx : Context FastMCP context (injected automatically) Returns ------- TableListOutput A structured object containing table information. - 'catalog': The catalog name. - 'database': The database name. - 'tables': List of table names. - 'count': Number of tables found. Example Usage for LLM: - When user asks for a specific database's tables. - Example User Queries and corresponding Tool Calls: - User: "List all tables in the 'default' database of the 'wherobots' catalog." - Tool Call: list_tables('wherobots', 'default') - User: "What tables are in the overture database?" - Tool Call: list_tables('wherobots_open_data', 'overture')
    Connector
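For intuition only: the table-enumeration step maps onto SQLite's `sqlite_master` catalog. This `list_tables` helper is a hypothetical analog of the `TableListOutput` shape (minus the catalog/database fields, which SQLite does not have).

```python
import sqlite3

def list_tables(conn):
    """Return table names plus a count, in the spirit of TableListOutput."""
    names = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
    return {"tables": names, "count": len(names)}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE buildings (id INTEGER)")
conn.execute("CREATE TABLE roads (id INTEGER)")
print(list_tables(conn))  # → {'tables': ['buildings', 'roads'], 'count': 2}
```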

Matching MCP Servers

Matching MCP Connectors

  • Access comprehensive company data including financial records, ownership structures, and contact information. Search for businesses using domains, registration numbers, or LinkedIn profiles to streamline due diligence and lead generation. Retrieve historical financial performance and complex corporate group structures to support informed business analysis.

  • MCP server for the RPG-Schema.org definition, supporting the use of RPG-Schemas in TTRPG manuals

  • Run a SQL query in the project and return the result. Prefer the `execute_sql_readonly` tool if possible. This tool can execute any query that BigQuery supports including: * SQL Queries (SELECT, INSERT, UPDATE, DELETE, CREATE, etc.) * AI/ML functions like AI.FORECAST, ML.EVALUATE, ML.PREDICT * Any other query that BigQuery supports. Example Queries: -- Insert data into a table. INSERT INTO `my_project.my_dataset`.my_table (name, age) VALUES ('Alice', 30); -- Create a table. CREATE TABLE `my_project.my_dataset`.my_table ( name STRING, age INT64); -- DELETE data from a table. DELETE FROM `my_project.my_dataset`.my_table WHERE name = 'Alice'; -- Create Dataset CREATE SCHEMA `my_project.my_dataset` OPTIONS (location = 'US'); -- Drop table DROP TABLE `my_project.my_dataset`.my_table; -- Drop dataset DROP SCHEMA `my_project.my_dataset`; -- Create Model CREATE OR REPLACE MODEL `my_project.my_dataset.my_model` OPTIONS ( model_type = 'LINEAR_REG', LS_INIT_LEARN_RATE=0.15, L1_REG=1, MAX_ITERATIONS=5, DATA_SPLIT_METHOD='SEQ', DATA_SPLIT_EVAL_FRACTION=0.3, DATA_SPLIT_COL='timestamp') AS SELECT col1, col2, timestamp, label FROM `my_project.my_dataset.my_table`; Queries executed using the `execute_sql` tool will have the job label `goog-mcp-server: true` automatically set. Queries are charged to the project specified in the `project_id` field.
    Connector
  • Lookup a stream object by its source object identifier. **Parameters:** * The 'parent' parameter is the name of the stream in the form: 'projects/{project name}/locations/{location}/streams/{stream name}', for example: 'projects/my-project/locations/us-central1/streams/my-stream'. * The 'source_object_identifier' parameter is the source database object identifier. Different source databases have different identifier formats. Examples: * For Oracle, PostgreSQL, SQL Server, and Spanner databases, the identifier is 'schema' and 'table'. * For MySQL databases, the identifier is 'database' and 'table'.
    Connector
  • Rollback a project to a previous version. ⚠️ WARNING: This reverts schema AND code to the specified commit. Database data is NOT rolled back. Use get_version_history to find the commit SHA of the version you want to rollback to. After rollback, use get_job_status to monitor the redeployment. Rollback is useful when a schema change breaks deployment.
    Connector
  • Import data into a Cloud SQL instance. If the file doesn't start with `gs://`, then the assumption is that the file is stored locally. If the file is local, then the file must be uploaded to Cloud Storage before you can make the actual `import_data` call. To upload the file to Cloud Storage, you can use the `gcloud` or `gsutil` commands. Before you upload the file to Cloud Storage, consider whether you want to use an existing bucket or create a new bucket in the provided project. After the file is uploaded to Cloud Storage, the instance service account must have sufficient permissions to read the uploaded file from the Cloud Storage bucket. This can be accomplished as follows: 1. Use the `get_instance` tool to get the email address of the instance service account. From the output of the tool, get the value of the `serviceAccountEmailAddress` field. 2. Grant the instance service account the `storage.objectAdmin` role on the provided Cloud Storage bucket. Use a command like `gcloud storage buckets add-iam-policy-binding` or a request to the Cloud Storage API. It can take from two to seven minutes or more for the role to be granted and the permissions to be propagated to the service account in Cloud Storage. If you encounter a permissions error after updating the IAM policy, then wait a few minutes and try again. After permissions are granted, you can import the data. We recommend that you leave optional parameters empty and use the system defaults. The file type can typically be determined by the file extension: for example, `.sql` for a SQL file or `.csv` for a CSV file. The following is a sample SQL `importContext` for MySQL. ``` { "uri": "gs://sample-gcs-bucket/sample-file.sql", "kind": "sql#importContext", "fileType": "SQL" } ``` There is no `database` parameter present for MySQL since the database name is expected to be present in the SQL file. Specify only one URI. No other fields are required outside of `importContext`. 
For PostgreSQL, the `database` field is required. The following is a sample PostgreSQL `importContext` with the `database` field specified. ``` { "uri": "gs://sample-gcs-bucket/sample-file.sql", "kind": "sql#importContext", "fileType": "SQL", "database": "sample-db" } ``` The `import_data` tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
    Connector
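The two `importContext` samples above differ only in the required `database` field for PostgreSQL. A small helper (hypothetical; `build_import_context` is not part of the tool's API) can assemble either variant and enforce the `gs://` precondition the description calls out:

```python
def build_import_context(uri, file_type="SQL", database=None):
    """Assemble an importContext dict; `database` is required for PostgreSQL
    SQL imports and omitted for MySQL."""
    if not uri.startswith("gs://"):
        # Local files must be uploaded to Cloud Storage first
        raise ValueError("Upload the local file to Cloud Storage first (gs:// URI required)")
    ctx = {"uri": uri, "kind": "sql#importContext", "fileType": file_type}
    if database is not None:
        ctx["database"] = database
    return ctx

print(build_import_context("gs://sample-gcs-bucket/sample-file.sql"))                     # MySQL shape
print(build_import_context("gs://sample-gcs-bucket/sample-file.sql", database="sample-db"))  # PostgreSQL shape
```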
  • Deploy a project to the staging environment. This triggers: (1) Schema validation, (2) Docker image build, (3) GitHub commit, (4) Kubernetes deployment, (5) Database migrations. The operation is ASYNCHRONOUS - it returns immediately with a job_id. Use get_job_status with the job_id to monitor progress. Deployment typically takes 2-5 minutes depending on schema complexity. If deployment fails, check: (1) Schema format is FLAT (no 'fields' nesting), (2) Every field has a 'type' property, (3) Foreign keys reference existing tables, (4) No PostgreSQL reserved words in table/field names. Use get_project_info to see if the deployment succeeded.
    Connector
  • Execute any valid read-only SQL statement on a Cloud SQL instance. To support the `execute_sql_readonly` tool, a Cloud SQL instance must meet the following requirements: * The value of `data_api_access` must be set to `ALLOW_DATA_API`. * For a MySQL instance, the database flag `cloudsql_iam_authentication` must be set to `on`. For a PostgreSQL instance, the database flag `cloudsql.iam_authentication` must be set to `on`. * An IAM user account or IAM service account (`CLOUD_IAM_USER` or `CLOUD_IAM_SERVICE_ACCOUNT`) is required to call the `execute_sql_readonly` tool. The tool executes the SQL statements using the privileges of the database user logged in with IAM database authentication. After you use the `create_instance` tool to create an instance, you can use the `create_user` tool to create an IAM user account for the user currently logged in to the project. The `execute_sql_readonly` tool has the following limitations: * If a SQL statement returns a response larger than 10 MB, then the response will be truncated. * The tool has a default timeout of 30 seconds. If a query runs longer than 30 seconds, then the tool returns a `DEADLINE_EXCEEDED` error. * The tool isn't supported for SQL Server. If you receive errors similar to "IAM authentication is not enabled for the instance", then you can use the `get_instance` tool to check the value of the IAM database authentication flag for the instance. If you receive errors like "The instance doesn't allow using executeSql to access this instance", then you can use the `get_instance` tool to check the `data_api_access` setting. When you receive authentication errors: 1. Check if the currently logged-in user account exists as an IAM user on the instance using the `list_users` tool. 2. If the IAM user account doesn't exist, then use the `create_user` tool to create the IAM user account for the logged-in user. 3. 
If the currently logged in user doesn't have the proper database user roles, then you can use `update_user` tool to grant database roles to the user. For example, `cloudsqlsuperuser` role can provide an IAM user with many required permissions. 4. Check if the currently logged in user has the correct IAM permissions assigned for the project. You can use `gcloud projects get-iam-policy [PROJECT_ID]` command to check if the user has the proper IAM roles or permissions assigned for the project. * The user must have `cloudsql.instance.login` permission to do automatic IAM database authentication. * The user must have `cloudsql.instances.executeSql` permission to execute SQL statements using the `execute_sql` tool or `executeSql` API. * Common IAM roles that contain the required permissions: Cloud SQL Instance User (`roles/cloudsql.instanceUser`) or Cloud SQL Admin (`roles/cloudsql.admin`) When receiving an `ExecuteSqlResponse`, always check the `message` and `status` fields within the response body. A successful HTTP status code doesn't guarantee full success of all SQL statements. The `message` and `status` fields will indicate if there were any partial errors or warnings during SQL statement execution.
    Connector
  • Run a read-only SQL query in the project and return the result. Prefer this tool over `execute_sql` if possible. This tool is restricted to only `SELECT` statements. `INSERT`, `UPDATE`, and `DELETE` statements and stored procedures aren't allowed. If the query doesn't include a `SELECT` statement, an error is returned. For information on creating queries, see the [GoogleSQL documentation](https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax). Example Queries: -- Count the number of penguins in each island. SELECT island, COUNT(*) AS population FROM bigquery-public-data.ml_datasets.penguins GROUP BY island -- Evaluate a bigquery ML Model. SELECT * FROM ML.EVALUATE(MODEL `my_dataset.my_model`) -- Evaluate BigQuery ML model on custom data SELECT * FROM ML.EVALUATE(MODEL `my_dataset.my_model`, (SELECT * FROM `my_dataset.my_table`)) -- Predict using BigQuery ML model: SELECT * FROM ML.PREDICT(MODEL `my_dataset.my_model`, (SELECT * FROM `my_dataset.my_table`)) -- Forecast data using AI.FORECAST SELECT * FROM AI.FORECAST(TABLE `project.dataset.my_table`, data_col => 'num_trips', timestamp_col => 'date', id_cols => ['usertype'], horizon => 30) Queries executed using the `execute_sql_readonly` tool will have the job label `goog-mcp-server: true` automatically set. Queries are charged to the project specified in the `project_id` field.
    Connector
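The SELECT-only restriction described above can be sketched as a naive client-side guard; the real enforcement happens server-side and is far more robust than a prefix check (e.g. this sketch would not catch every dialect corner case). SQLite stands in for BigQuery here purely for the demonstration.

```python
import sqlite3

def execute_readonly(conn, query):
    """Reject anything that does not start as a SELECT (or a WITH-prefixed
    CTE SELECT) before executing. Deliberately naive: illustration only."""
    head = query.lstrip().upper()
    if not (head.startswith("SELECT") or head.startswith("WITH")):
        raise ValueError("Only SELECT statements are allowed")
    return conn.execute(query).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE penguins (island TEXT)")
conn.executemany("INSERT INTO penguins VALUES (?)", [("Biscoe",), ("Dream",), ("Biscoe",)])
print(execute_readonly(conn, "SELECT island, COUNT(*) FROM penguins GROUP BY island"))
# → [('Biscoe', 2), ('Dream', 1)]
```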
  • Full-text search across recall reasons and product descriptions using PostgreSQL text search. Finds recalls mentioning specific terms (e.g. 'salmonella contamination', 'mislabeled', 'sterility'). Supports multi-word queries ranked by relevance. Filter by classification, product_type, or date range. Related: fda_search_enforcement (search by company name, classification, status), fda_recall_facility_trace (trace a recall to its manufacturing facility).
    Connector
  • Get the JSON schema definition of a project in FLAT format. Returns the schema structure where each table name maps directly to field definitions. This is the same format required for create_project and update_schema. USE CASES: Review current schema before making updates, copy schema as template for new projects, verify schema structure after deployment, learn the correct schema format by example. The returned schema will be in FLAT format: {table_name: {field_name: {type, properties}}}
    Connector
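The FLAT format `{table_name: {field_name: {type, properties}}}` can be validated with a short check. `is_flat_schema` is a hypothetical helper based on the rules stated above and in the deploy tool's checklist (no `fields` nesting, every field has a `type`); a field literally named `fields` would trip this naive version.

```python
def is_flat_schema(schema):
    """True if the schema is FLAT: each table maps field names directly to
    dicts that carry a 'type' key, with no intermediate 'fields' wrapper."""
    for table, fields in schema.items():
        if "fields" in fields:  # nested format, not FLAT
            return False
        for field, props in fields.items():
            if not isinstance(props, dict) or "type" not in props:
                return False
    return True

flat = {"users": {"id": {"type": "integer", "primary_key": True},
                  "name": {"type": "string"}}}
nested = {"users": {"fields": {"id": {"type": "integer"}}}}
print(is_flat_schema(flat), is_flat_schema(nested))  # → True False
```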
  • Execute any valid SQL statement, including data definition language (DDL), data control language (DCL), data query language (DQL), or data manipulation language (DML) statements, on a Cloud SQL instance. To support the `execute_sql` tool, a Cloud SQL instance must meet the following requirements: * The value of `data_api_access` must be set to `ALLOW_DATA_API`. * For built-in users, `password_secret_version` must be set. * Otherwise, for IAM users: for a MySQL instance, the database flag `cloudsql_iam_authentication` must be set to `on`; for a PostgreSQL instance, the database flag `cloudsql.iam_authentication` must be set to `on`. * After you use the `create_instance` tool to create an instance, you can use the `create_user` tool to create an IAM user account for the user currently logged in to the project. The `execute_sql` tool has the following limitations: * If a SQL statement returns a response larger than 10 MB, then the response will be truncated. * The `execute_sql` tool has a default timeout of 30 seconds. If a query runs longer than 30 seconds, then the tool returns a `DEADLINE_EXCEEDED` error. * The `execute_sql` tool isn't supported for SQL Server. If you receive errors similar to "IAM authentication is not enabled for the instance", then you can use the `get_instance` tool to check the value of the IAM database authentication flag for the instance. If you receive errors like "The instance doesn't allow using executeSql to access this instance", then you can use the `get_instance` tool to check the `data_api_access` setting. When you receive authentication errors: 1. Check if the currently logged-in user account exists as an IAM user on the instance using the `list_users` tool. 2. If the IAM user account doesn't exist, then use the `create_user` tool to create the IAM user account for the logged-in user. 3. If the currently logged-in user doesn't have the proper database user roles, then you can use the `update_user` tool to grant database roles to the user. 
For example, `cloudsqlsuperuser` role can provide an IAM user with many required permissions. 4. Check if the currently logged in user has the correct IAM permissions assigned for the project. You can use `gcloud projects get-iam-policy [PROJECT_ID]` command to check if the user has the proper IAM roles or permissions assigned for the project. * The user must have `cloudsql.instance.login` permission to do automatic IAM database authentication. * The user must have `cloudsql.instances.executeSql` permission to execute SQL statements using the `execute_sql` tool or `executeSql` API. * Common IAM roles that contain the required permissions: Cloud SQL Instance User (`roles/cloudsql.instanceUser`) or Cloud SQL Admin (`roles/cloudsql.admin`) When receiving an `ExecuteSqlResponse`, always check the `message` and `status` fields within the response body. A successful HTTP status code doesn't guarantee full success of all SQL statements. The `message` and `status` fields will indicate if there were any partial errors or warnings during SQL statement execution.
    Connector
  • Create a database user for a Cloud SQL instance. * This tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes. * When you use the `create_user` tool, specify the type of user: `CLOUD_IAM_USER` or `CLOUD_IAM_SERVICE_ACCOUNT`. * By default the newly created user is assigned the `cloudsqlsuperuser` role, unless you specify other database roles explicitly in the request. * You can use a newly created user with the `execute_sql` tool if the user is a currently logged in IAM user. The `execute_sql` tool executes the SQL statements using the privileges of the database user logged in using IAM database authentication. The `create_user` tool has the following limitations: * To create a built-in user with password, use the `password_secret_version` field to provide password using the Google Cloud Secret Manager. The value of `password_secret_version` should be the resource name of the secret version, like `projects/12345/locations/us-central1/secrets/my-password-secret/versions/1` or `projects/12345/locations/us-central1/secrets/my-password-secret/versions/latest`. The caller needs to have `secretmanager.secretVersions.access` permission on the secret version. This feature is available only to projects on an allowlist. * The `create_user` tool doesn't support creating a user for SQL Server. To create an IAM user in PostgreSQL: * The database username must be the IAM user's email address and all lowercase. For example, to create user for PostgreSQL IAM user `example-user@example.com`, you can use the following request: ``` { "name": "example-user@example.com", "type": "CLOUD_IAM_USER", "instance":"test-instance", "project": "test-project" } ``` The created database username for the IAM user is `example-user@example.com`. 
To create an IAM service account in PostgreSQL: * The database username must be created without the `.gserviceaccount.com` suffix even though the full email address for the account is `service-account-name@project-id.iam.gserviceaccount.com`. For example, to create an IAM service account for PostgreSQL you can use the following request format: ``` { "name": "test@test-project.iam", "type": "CLOUD_IAM_SERVICE_ACCOUNT", "instance": "test-instance", "project": "test-project" } ``` The created database username for the IAM service account is `test@test-project.iam`. To create an IAM user or IAM service account in MySQL: * When Cloud SQL for MySQL stores a username, it truncates the @ and the domain name from the user or service account's email address. For example, `example-user@example.com` becomes `example-user`. * For this reason, you can't add two IAM users or service accounts with the same username but different domain names to the same Cloud SQL instance. * For example, to create a user for the MySQL IAM user `example-user@example.com`, use the following request: ``` { "name": "example-user@example.com", "type": "CLOUD_IAM_USER", "instance": "test-instance", "project": "test-project" } ``` The created database username for the IAM user is `example-user`. * For example, to create the MySQL IAM service account `service-account-name@project-id.iam.gserviceaccount.com`, use the following request: ``` { "name": "service-account-name@project-id.iam.gserviceaccount.com", "type": "CLOUD_IAM_SERVICE_ACCOUNT", "instance": "test-instance", "project": "test-project" } ``` The created database username for the IAM service account is `service-account-name`.
    Connector
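The username-derivation rules above (MySQL truncates at `@`; PostgreSQL keeps the lowercased address but drops the `.gserviceaccount.com` suffix for service accounts) can be expressed in a few lines. These helper names are illustrative, not part of the `create_user` API:

```python
def mysql_iam_username(email):
    """Cloud SQL for MySQL truncates the '@' and domain:
    example-user@example.com -> example-user."""
    return email.split("@", 1)[0]

def postgres_iam_username(email):
    """PostgreSQL keeps the lowercased address; service accounts drop
    the '.gserviceaccount.com' suffix."""
    email = email.lower()
    suffix = ".gserviceaccount.com"
    return email[:-len(suffix)] if email.endswith(suffix) else email

print(mysql_iam_username("example-user@example.com"))                      # → example-user
print(postgres_iam_username("test@test-project.iam.gserviceaccount.com"))  # → test@test-project.iam
```

The MySQL rule also explains the documented collision: two accounts with the same local part but different domains map to the same database username.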
  • WHEN: developer needs correct X++ select or T-SQL for D365 tables with proper joins. Triggers: 'X++ select', 'generate a query', 'SQL for', 'join with', 'how to query', 'générer une requête', 'write a select statement', 'select from', 'X++ query for', 'requête X++', 'écrire une select'. Generate both X++ select statements and equivalent T-SQL queries for D365 F&O tables. Uses real field names, relations, and indexes from the knowledge base to produce correct joins. Supports: field selection, multi-table joins (auto-detects relations), WHERE filters, ORDER BY, TOP/firstonly, cross-company. Also accepts natural language descriptions like 'find all open sales orders for customer 1001 with CustTable join'. [!] For multi-table joins, call find_related_objects (or get_relation_graph if the relation index is loaded) FIRST to get the correct FK relations -- this tool will then produce accurate join conditions. [!] The generated X++ is a template -- adapt it to your custom code context before using in production. Returns side-by-side X++ and SQL with explanations.
    Connector
  • Compile ExoQuery Kotlin code and EXECUTE it against a SQLite database with provided schema. ExoQuery is a compile-time SQL query builder that translates Kotlin DSL expressions into SQL. WHEN TO USE: When you need to verify ExoQuery produces correct results against actual data. INPUT REQUIREMENTS: - Complete Kotlin code (same requirements as validateExoquery) - SQL schema with CREATE TABLE and INSERT statements for test data - Data classes MUST exactly match the schema table structure - Column names in data classes must match schema (use @SerialName for snake_case columns) - Must include one or more .runSample() calls in main() to trigger SQL generation and execution (note that .runSample() is NOT for real production use, use .runOn(database) instead) OUTPUT FORMAT: Returns one or more JSON objects, each on its own line. Each object can be: 1. SQL with output (query executed successfully): {"sql": "SELECT u.name FROM \"User\" u", "output": "[(name=Alice), (name=Bob)]"} 2. Output only (e.g., print statements, intermediate results): {"output": "Before: [(id=1, title=Ion Blend Beans)]"} 3. 
  • Scan code for injection vulnerabilities: SQL injection, command injection, and path traversal. Detects unsafe patterns like string concatenation in SQL queries, unsanitized input in shell commands, and user input in file paths. Returns: total (count), by_severity (CRITICAL/HIGH/MEDIUM/LOW), findings array. Read-only, code is not stored. For hardcoded secrets, use check_secrets instead.
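As a rough illustration of the concatenation patterns such a scanner flags — not this tool's actual implementation; the regex, function name, and line-number output are assumptions of this sketch:

```python
import re

# Illustrative only: flags string concatenation or f-strings feeding an
# execute() call, the classic SQL-injection smell. A real scanner uses far
# more patterns (shell commands, path traversal) and AST analysis.
UNSAFE_SQL = re.compile(r"""execute\(\s*(?:f["']|["'][^"']*["']\s*\+)""")

def find_sql_concat(source: str) -> list[int]:
    """Return 1-based line numbers containing a suspicious execute() call."""
    return [
        i
        for i, line in enumerate(source.splitlines(), start=1)
        if UNSAFE_SQL.search(line)
    ]

code = '''
cur.execute("SELECT * FROM users WHERE id = " + user_id)      # unsafe concat
cur.execute(f"DELETE FROM logs WHERE day < {cutoff}")         # unsafe f-string
cur.execute("SELECT * FROM users WHERE id = %s", (user_id,))  # parameterized
'''
print(find_sql_concat(code))  # → [2, 3]
```

The parameterized call on the last line is not flagged — passing values separately from the SQL text is the fix the finding would recommend.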
    Connector
  • Fetch the full column schema for a CDC dataset — names, data types, descriptions, row count, and last-updated timestamp. Essential before writing SoQL queries against unfamiliar datasets.
    Connector
  • Run the SQL query on a Wherobots catalog. ⚠️ WORKFLOW: ALWAYS test queries with this tool before generating code. Use limit=10 or limit=100 for initial testing to validate query logic. 📋 PREREQUISITES (required for reliable results): - search_documentation_tool: Understand correct SQL syntax - describe_table_tool: Verify table schema and column names - generate_spatial_query_tool: Generate the SQL (recommended) 📋 NEXT STEPS after this tool: - If query SUCCEEDS: Increase limit or remove it for full results - If query FAILS: Use sql_debug_workflow prompt, check schema, retry - Only generate application code AFTER SQL is validated and working This tool allows users to execute SQL queries against the Wherobots catalog. It is typically used for data retrieval and analysis. Supports pagination for large result sets. *** Remember: - Ensure the output can be serialized to JSON. - If query execution fails, analyze the error and provide a meaningful message to the user. - Use limit and offset parameters for large result sets to avoid timeouts. - Queries have a maximum execution time (configurable via settings or x-query-timeout header). - The runtime ID can be customized via the x-runtime-id header (default: tiny). - The runtime region can be customized via the x-runtime-region header (default: aws-us-west-2). - The server sends periodic heartbeat messages during long-running queries to keep the connection alive. - If the client disconnects, the query will be cancelled to avoid wasting resources. *** Parameters ---------- query : str The SQL query to be executed. ctx : Context FastMCP context (injected automatically) limit : int | None Optional maximum number of rows to return (default: 1000, max: 10000) offset : int | None Optional number of rows to skip for pagination (default: 0) Returns ------- QueryExecutionOutput A structured object containing the results of the query execution. 
- 'success': Whether the query executed successfully - 'row_count': Number of rows returned - 'data': Query results as a list of dictionaries - 'query': The executed SQL query - 'error': Whether an error occurred - 'error_type': Type of error (if any) - 'error_message': Error message (if any) - 'execution_status': Status of execution Example Usage for LLM: - When user asks to run a specific SQL query. - Example User Queries and corresponding Tool Calls: - User: "Run the following SQL query: SELECT * FROM buildings WHERE state = 'California';" - Tool Call: execute_query("SELECT * FROM buildings WHERE state = 'California';") - User: "Execute this query with only the first 100 results" - Tool Call: execute_query('SELECT * FROM large_table;', limit=100) - User: "Get the next page of results" - Tool Call: execute_query('SELECT * FROM large_table;', limit=100, offset=100)
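The limit/offset pagination above can be sketched as follows. `execute_query` here is a local stub standing in for the MCP tool (same `limit`/`offset` parameters); the stop-on-short-page heuristic is an assumption of this sketch, not documented tool behavior:

```python
# Pretend result set of 250 rows standing in for a catalog table.
DATASET = list(range(250))

def execute_query(query: str, limit: int = 1000, offset: int = 0) -> dict:
    """Stub mimicking the tool's QueryExecutionOutput shape."""
    rows = DATASET[offset : offset + limit]
    return {"success": True, "row_count": len(rows), "data": rows}

def fetch_all(query: str, page_size: int = 100) -> list:
    """Page through results until a short page signals the end."""
    rows, offset = [], 0
    while True:
        page = execute_query(query, limit=page_size, offset=offset)
        rows.extend(page["data"])
        if page["row_count"] < page_size:
            return rows
        offset += page_size

all_rows = fetch_all("SELECT * FROM large_table")
print(len(all_rows))  # → 250
```

Keeping `page_size` small on the first call matches the tool's advice to validate query logic with `limit=10` or `limit=100` before pulling full results.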
    Connector
  • Delete a table. The request requires the 'name' field to be set in the format 'projects/{project}/instances/{instance}/tables/{table}'. Example: { "name": "projects/my-project/instances/my-instance/tables/my-table" } The table must exist. You can use `list_tables` to verify. Before executing the deletion, you MUST confirm the action with the user by stating the full table name and asking for "yes/no" confirmation.
    Connector
  • Checks if a Cloud SQL PostgreSQL instance is ready for a major version upgrade to the specified target version. The `target_database_version` MUST be provided in the request (e.g., `POSTGRES_15`). This tool helps identify potential issues *before* attempting the actual upgrade, reducing the risk of failure or downtime. This tool is only supported for PostgreSQL instances and does not run on read replicas. The precheck typically evaluates: - Database schema compatibility with the target version. - Cloud SQL limitations and unsupported features. - Instance resource constraints (e.g., number of relations). - Compatibility of current database settings and extensions. - Overall instance health and readiness. This tool returns a long-running operation. Use the `get_operation` tool with the operation name returned by this call to poll its status. IMPORTANT: Once the operation status is DONE, the detailed precheck results are available within the `Operation` resource. You will need to inspect the response from `get_operation`. The findings are located in the `pre_check_major_version_upgrade_context.pre_check_response` field. The findings are structured, indicating: - INFO: General information. - WARNING: Potential issues that don't block the upgrade but should be reviewed. - ERROR: Critical issues that MUST be resolved before attempting the upgrade. Each finding should include a message and any required actions. Addressing any reported issues is crucial before proceeding with the major version upgrade. If `pre_check_response` is empty or missing, it indicates that no issues were identified during the precheck. Running this precheck does not impact the instance's availability.
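A hedged sketch of the poll-then-inspect workflow described above. `get_operation` is stubbed locally (the real tool returns the Operation resource); field names follow the description, and sleep/backoff between polls is omitted for brevity:

```python
import itertools

# Simulate an operation that reports RUNNING twice, then DONE with findings.
_states = itertools.chain(["RUNNING", "RUNNING"], itertools.repeat("DONE"))

def get_operation(name: str) -> dict:
    """Stub for the get_operation tool; returns an Operation-like dict."""
    status = next(_states)
    op = {"name": name, "status": status}
    if status == "DONE":
        op["pre_check_major_version_upgrade_context"] = {
            "pre_check_response": [
                {"type": "WARNING", "message": "Extension pg_foo needs update"}
            ]
        }
    return op

def wait_for_precheck(name: str) -> list[dict]:
    """Poll until DONE; return findings (empty list means no issues found)."""
    while True:
        op = get_operation(name)
        if op["status"] == "DONE":
            ctx = op.get("pre_check_major_version_upgrade_context", {})
            return ctx.get("pre_check_response", [])
        # A real client would sleep with exponential backoff here.

findings = wait_for_precheck("operations/upgrade-precheck-123")
errors = [f for f in findings if f["type"] == "ERROR"]
print(len(findings), len(errors))  # → 1 0
```

Per the description, any ERROR finding must be resolved before attempting the upgrade; WARNING findings should be reviewed but do not block it.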
    Connector
  • Compile ExoQuery Kotlin code and EXECUTE it against a SQLite database with provided schema. ExoQuery is a compile-time SQL query builder that translates Kotlin DSL expressions into SQL. WHEN TO USE: When you need to verify ExoQuery produces correct results against actual data. INPUT REQUIREMENTS: - Complete Kotlin code (same requirements as validateExoquery) - SQL schema with CREATE TABLE and INSERT statements for test data - Data classes MUST exactly match the schema table structure - Column names in data classes must match schema (use @SerialName for snake_case columns) - Must include one or more .runSample() calls in main() to trigger SQL generation and execution (note that .runSample() is NOT for real production use, use .runOn(database) instead) OUTPUT FORMAT: Returns one or more JSON objects, each on its own line. Each object can be: 1. SQL with output (query executed successfully): {"sql": "SELECT u.name FROM \"User\" u", "output": "[(name=Alice), (name=Bob)]"} 2. Output only (e.g., print statements, intermediate results): {"output": "Before: [(id=1, title=Ion Blend Beans)]"} 3. 
Error output (runtime errors, exceptions): {"outputErr": "java.sql.SQLException: Table \"USERS\" not found"} Multiple results appear when code has multiple queries or print statements: {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans, unit_price=32.00, in_stock=25)]"} {"output": "Before:"} {"sql": "INSERT INTO \"InventoryItem\" (title, unit_price, in_stock) VALUES (?, ?, ?)", "output": "Rows affected: 1"} {"output": "After:"} {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans, unit_price=32.00, in_stock=25), (id=2, title=Luna Fuel Flask, unit_price=89.50, in_stock=6)]"} Compilation errors return the same format as validateExoquery: { "errors": { "File.kt": [ { "interval": {"start": {"line": 12, "ch": 10}, "end": {"line": 12, "ch": 15}}, "message": "Type mismatch: inferred type is String but Int was expected", "severity": "ERROR", "className": "ERROR" } ] } } Runtime Errors can have the following format: { "errors" : { "File.kt" : [ ] }, "exception" : { "message" : "[SQLITE_ERROR] SQL error or missing database (no such table: User)", "fullName" : "org.sqlite.SQLiteException", "stackTrace" : [ { "className" : "org.sqlite.core.DB", "methodName" : "newSQLException", "fileName" : "DB.java", "lineNumber" : 1179 }, ...] }, "text" : "<outStream><outputObject>\n{\"sql\": \"SELECT x.id, x.name, x.age FROM User x\"}\n</outputObject>\n</outStream>" } If there was a SQL query generated before the error, it will appear in the "text" field output stream. 
EXAMPLE INPUT CODE: ```kotlin import io.exoquery.* import kotlinx.serialization.Serializable import kotlinx.serialization.SerialName @Serializable data class User(val id: Int, val name: String, val age: Int) @Serializable data class Order(val id: Int, @SerialName("user_id") val userId: Int, val total: Int) val userOrders = sql.select { val u = from(Table<User>()) val o = join(Table<Order>()) { o -> o.userId == u.id } Triple(u.name, o.total, u.age) } fun main() = userOrders.buildPrettyFor.Sqlite().runSample() ``` EXAMPLE INPUT SCHEMA: ```sql CREATE TABLE "User" (id INT, name VARCHAR(100), age INT); CREATE TABLE "Order" (id INT, user_id INT, total INT); INSERT INTO "User" (id, name, age) VALUES (1, 'Alice', 30), (2, 'Bob', 25); INSERT INTO "Order" (id, user_id, total) VALUES (1, 1, 100), (2, 1, 200), (3, 2, 150); ``` EXAMPLE SUCCESS OUTPUT: {"sql": "SELECT u.name AS first, o.total AS second, u.age AS third FROM \"User\" u INNER JOIN \"Order\" o ON o.user_id = u.id", "output": "[(first=Alice, second=100, third=30), (first=Alice, second=200, third=30), (first=Bob, second=150, third=25)]"} EXAMPLE WITH MULTIPLE OPERATIONS (insert with before/after check): {"output": "Before:"} {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans)]"} {"sql": "INSERT INTO \"InventoryItem\" (title, unit_price, in_stock) VALUES (?, ?, ?)", "output": ""} {"output": "After:"} {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans), (id=2, title=Luna Fuel Flask)]"} EXAMPLE RUNTIME ERROR (if a user divided by zero): {"outputErr": "Exception in thread \"main\" java.lang.ArithmeticException: / by zero"} KEY PATTERNS: (See validateExoquery for complete pattern reference) Summary of most common patterns: - Filter: sql { Table<T>().filter { x -> x.field == value } } - Select: sql.select { val x = from(Table<T>()); where { ... 
}; x } - Join: sql.select { val a = from(Table<A>()); val b = join(Table<B>()) { b -> b.aId == a.id }; Pair(a, b) } - Left join: joinLeft(Table<T>()) { ... } returns nullable - Insert: sql { insert<T> { setParams(obj).excluding(id) } } - Update: sql { update<T>().set { it.field to value }.where { it.id == x } } - Delete: sql { delete<T>().where { it.id == x } } SCHEMA RULES: - Table names should match data class names (case-sensitive, use quotes for exact match) - Column names must match @SerialName values or property names - Include realistic test data to verify query logic - Sqlite database syntax (mostly compatible with standard SQL) COMMON PATTERNS: - JSON columns: Use VARCHAR for storage, @SqlJsonValue on the nested data class - Auto-increment IDs: Use INTEGER PRIMARY KEY - Nullable columns: Use Type? in Kotlin, allow NULL in schema
    Connector
  • Format and beautify SQL queries with proper indentation and keyword casing. Use when cleaning up inline SQL for code reviews, documentation, or debugging.
    Connector
  • Search the Wherobots documentation. ⚠️ WORKFLOW: This should be your FIRST tool call for any query task. Before writing queries, ALWAYS search documentation to understand: - Available spatial functions and their syntax - Best practices and common patterns - Example queries for similar use cases 📋 NEXT STEPS after this tool: 1. Use catalog tools (list_catalogs, list_databases, list_tables) to find data 2. Use describe_table_tool to understand table schemas 3. Use generate_spatial_query_tool to create SQL queries This tool searches the official Wherobots documentation to find relevant information about spatial functions, data formats, best practices, and more. It's useful when users need help understanding Wherobots features or syntax. Parameters ---------- query : str The search query string ctx : Context FastMCP context (injected automatically) page_size : int | None Optional number of results to return (default: 10) Returns ------- DocumentationSearchOutput A structured object containing documentation search results. - 'results': List of DocumentResult objects, each containing: - 'content': The documentation content snippet - 'path': The URL path to the documentation page - 'metadata': Additional metadata about the result Example Usage for LLM: - When user asks about Wherobots features, functions, or syntax - When generating queries and need context about spatial functions - Example User Queries and corresponding Tool Calls: - User: "How do I use ST_INTERSECTS in Wherobots?" - Tool Call: search_documentation("ST_INTERSECTS spatial function") - User: "What spatial functions are available for distance calculations?" - Tool Call: search_documentation("spatial distance functions") - User: "How do I connect to Wherobots from Python?" 
- Tool Call: search_documentation("Python connection API") For the best way to understand how to use spatial SQL in Wherobots, refer to the following Wherobots Documentation: - Writing Effective Spatial Queries in Wherobots: https://docs.wherobots.com/develop/write-spatial-queries - Writing Effective Spatial Queries in Wherobots complete example sub-section: https://docs.wherobots.com/develop/write-spatial-queries#complete-example:-counting-baseball-stadiums-in-the-us The complete example demonstrates: - Starting the `SedonaContext` - Using `wkls` where applicable - Loading spatial data from Wherobots tables - Using the correct schema from wherobots_open_data spatial catalogs - Writing spatial SQL queries effectively
    Connector
  • Execute a tool by name with the provided input parameters. If executing a given tool for the first time, it is recommended to call describe_tool_input first to understand the expected input schema.
    Connector
  • Search documentation using semantic or keyword search. Supports Tiger Cloud (TimescaleDB), PostgreSQL, and PostGIS.
    Connector
  • Access comprehensive ExoQuery documentation organized by topic and category. ExoQuery is a Language Integrated Query library for Kotlin Multiplatform that translates Kotlin DSL expressions into SQL at compile time. This resource provides access to the complete documentation covering all aspects of the library. AVAILABLE DOCUMENTATION CATEGORIES: 1. **Getting Started** - Introduction: What ExoQuery is and why it exists - Installation: Project setup and dependencies - Quick Start: First query in minutes 2. **Core Concepts** - SQL Blocks: The sql { } construct and query building - Parameters: Safe runtime data handling - Composing Queries: Functional query composition 3. **Query Operations** - Basic Operations: Map, filter, and transformations - Joins: Inner, left, and implicit joins - Grouping: GROUP BY and HAVING clauses - Sorting: ORDER BY operations - Subqueries: Correlated and nested queries - Window Functions: Advanced analytics 4. **Actions** - Insert: INSERT with returning and conflict handling - Update: UPDATE operations with setParams - Delete: DELETE with returning - Batch Operations: Bulk inserts and updates 5. **Advanced Features** - SQL Fragment Functions: Reusable SQL components with @SqlFragment - Dynamic Queries: Runtime query generation with @SqlDynamic - Free Blocks: Custom SQL and user-defined functions - Transactions: Transaction support patterns - Polymorphic Queries: Interfaces, sealed classes, higher-order functions - Local Variables: Variables within SQL blocks 6. **Data Handling** - Serialization: kotlinx.serialization integration - Custom Type Encoding: Custom encoders and decoders - JSON Columns: JSON and JSONB support (PostgreSQL) - Column Naming: @SerialName and @ExoEntity annotations - Nested Datatypes: Complex data structures - Kotlinx Integration: JSON and other serialization formats 7. 
**Schema-First Development** - Entity Generation: Compile-time code generation from database schema - AI-Enhanced Entities: Using LLMs to generate cleaner entity code 8. **Reference** - SQL Functions: Available string, math, and date functions - API Reference: Core types and function signatures HOW TO USE THIS RESOURCE: The resource URI follows the pattern: exoquery://docs/{file-path} Where {file-path} is the relative path from the docs root, e.g.: - exoquery://docs/01-getting-started/01-introduction.md - exoquery://docs/03-query-operations/02-joins.md - exoquery://docs/05-advanced-features/01-sql-fragments.md To discover available documents, use the MCP resources/list endpoint which will return all available documentation files with their titles, descriptions, and categories. Each document includes: - Title and description - Category classification - Complete markdown content with code examples - Cross-references to related topics WHEN TO USE: - User asks about ExoQuery syntax, features, or capabilities - User needs examples of specific query patterns - User encounters errors and needs to verify correct usage - User wants to understand advanced features or best practices
    Connector
  • Get the graph schema definition of a project. Returns the hierarchical schema with nodes (entities) and relationships. Graph schemas define entity hierarchies and typed relationships — a different format than relational flat-table schemas.
    Connector
  • Analyze Keycloak deployment configuration and identify potential issues or provide recommendations. Input deployment_info should include: - keycloak_version: version string - deployment_type: (standalone, clustered) - database: database type (postgresql, mysql, h2) - protocol: (http, https) - cache_type: (local, infinispan) - auth_providers: list of configured identity providers - any warnings or alerts Returns structured analysis and recommendations based on the deployment state.
    Connector
  • List all database users for a Cloud SQL instance.
    Connector
  • Retrieve detailed skills for TimescaleDB operations and best practices. ## Available Skills <available_skills> 8 skills {name, description}: design-postgis-tables Comprehensive PostGIS spatial table design reference covering geometry types, coordinate systems, spatial indexing, and performance patterns for location-based applications design-postgres-tables "Use this skill for general PostgreSQL table design.\n\n**Trigger when user asks to:**\n- Design PostgreSQL tables, schemas, or data models when creating new tables and when modifying existing ones.\n- Choose data types, constraints, or indexes for PostgreSQL\n- Create user tables, order tables, reference tables, or JSONB schemas\n- Understand PostgreSQL best practices for normalization, constraints, or indexing\n- Design update-heavy, upsert-heavy, or OLTP-style tables\n\n\n**Keywords:** PostgreSQL schema, table design, data types, PRIMARY KEY, FOREIGN KEY, indexes, B-tree, GIN, JSONB, constraints, normalization, identity columns, partitioning, row-level security\n\nComprehensive reference covering data types, indexing strategies, constraints, JSONB patterns, partitioning, and PostgreSQL-specific best practices.\n" find-hypertable-candidates "Use this skill to analyze an existing PostgreSQL database and identify which tables should be converted to Timescale/TimescaleDB hypertables.\n\n**Trigger when user asks to:**\n- Analyze database tables for hypertable conversion potential\n- Identify time-series or event tables in an existing schema\n- Evaluate if a table would benefit from Timescale/TimescaleDB\n- Audit PostgreSQL tables for migration to Timescale/TimescaleDB/TigerData\n- Score or rank tables for hypertable candidacy\n\n\n**Keywords:** hypertable candidate, table analysis, migration assessment, Timescale, TimescaleDB, time-series detection, insert-heavy tables, event logs, audit tables\n\nProvides SQL queries to analyze table statistics, index patterns, and query patterns. 
Includes scoring criteria (8+ points = good candidate) and pattern recognition for IoT, events, transactions, and sequential data.\n" migrate-postgres-tables-to-hypertables "Use this skill to migrate identified PostgreSQL tables to Timescale/TimescaleDB hypertables with optimal configuration and validation.\n\n**Trigger when user asks to:**\n- Migrate or convert PostgreSQL tables to hypertables\n- Execute hypertable migration with minimal downtime\n- Plan blue-green migration for large tables\n- Validate hypertable migration success\n- Configure compression after migration\n\n**Prerequisites:** Tables already identified as candidates (use find-hypertable-candidates first if needed)\n\n**Keywords:** migrate to hypertable, convert table, Timescale, TimescaleDB, blue-green migration, in-place conversion, create_hypertable, migration validation, compression setup\n\nStep-by-step migration planning including: partition column selection, chunk interval calculation, PK/constraint handling, migration execution (in-place vs blue-green), and performance validation queries.\n" pgvector-semantic-search "Use this skill for setting up vector similarity search with pgvector for AI/ML embeddings, RAG applications, or semantic search.\n\n**Trigger when user asks to:**\n- Store or search vector embeddings in PostgreSQL\n- Set up semantic search, similarity search, or nearest neighbor search\n- Create HNSW or IVFFlat indexes for vectors\n- Implement RAG (Retrieval Augmented Generation) with PostgreSQL\n- Optimize pgvector performance, recall, or memory usage\n- Use binary quantization for large vector datasets\n\n**Keywords:** pgvector, embeddings, semantic search, vector similarity, HNSW, IVFFlat, halfvec, cosine distance, nearest neighbor, RAG, LLM, AI search\n\nCovers: halfvec storage, HNSW index configuration (m, ef_construction, ef_search), quantization strategies, filtered search, bulk loading, and performance tuning.\n" postgres "Use this skill for any PostgreSQL database work 
— table design, indexing, data types, constraints, extensions (pgvector, PostGIS, TimescaleDB), search, and migrations.\n\n**Trigger when user asks to:**\n- Design or modify PostgreSQL tables, schemas, or data models\n- Choose data types, constraints, indexes, or partitioning strategies\n- Work with pgvector embeddings, semantic search, or RAG\n- Set up full-text search, hybrid search, or BM25 ranking\n- Use PostGIS for spatial/geographic data\n- Set up TimescaleDB hypertables for time-series data\n- Migrate tables to hypertables or evaluate migration candidates\n\n**Keywords:** PostgreSQL, Postgres, SQL, schema, table design, indexes, constraints, pgvector, PostGIS, TimescaleDB, hypertable, semantic search, hybrid search, BM25, time-series\n" postgres-hybrid-text-search "Use this skill to implement hybrid search combining BM25 keyword search with semantic vector search using Reciprocal Rank Fusion (RRF).\n\n**Trigger when user asks to:**\n- Combine keyword and semantic search\n- Implement hybrid search or multi-modal retrieval\n- Use BM25/pg_textsearch with pgvector together\n- Implement RRF (Reciprocal Rank Fusion) for search\n- Build search that handles both exact terms and meaning\n\n\n**Keywords:** hybrid search, BM25, pg_textsearch, RRF, reciprocal rank fusion, keyword search, full-text search, reranking, cross-encoder\n\nCovers: pg_textsearch BM25 index setup, parallel query patterns, client-side RRF fusion (Python/TypeScript), weighting strategies, and optional ML reranking.\n" setup-timescaledb-hypertables "Use this skill when creating database schemas or tables for Timescale, TimescaleDB, TigerData, or Tiger Cloud, especially for time-series, IoT, metrics, events, or log data. 
Use this to improve the performance of any insert-heavy table.\n\n**Trigger when user asks to:**\n- Create or design SQL schemas/tables AND Timescale/TimescaleDB/TigerData/Tiger Cloud is available\n- Set up hypertables, compression, retention policies, or continuous aggregates\n- Configure partition columns, segment_by, order_by, or chunk intervals\n- Optimize time-series database performance or storage\n- Create tables for sensors, metrics, telemetry, events, or transaction logs\n\n**Keywords:** CREATE TABLE, hypertable, Timescale, TimescaleDB, time-series, IoT, metrics, sensor data, compression policy, continuous aggregates, columnstore, retention policy, chunk interval, segment_by, order_by\n\nStep-by-step instructions for hypertable creation, column selection, compression policies, retention, continuous aggregates, and indexes.\n" </available_skills>
    Connector
  • Returns the current Strale wallet balance. Call this before executing paid capabilities to verify sufficient funds, or after a series of calls to reconcile spend. Returns balance in EUR cents (integer) and formatted EUR string. Requires an API key — returns an auth instruction if none is configured.
    Connector
  • Search the regulatory corpus using keyword / trigram matching. Uses PostgreSQL trigram similarity on document titles and summaries. Returns documents ranked by relevance with summaries and classification tags. Prefer list_documents with filters (regulation, entity_type, source) first. Only use this for free-text keyword search when structured filters aren't sufficient. Args: query: Search terms (e.g. 'strong customer authentication', 'ICT risk', 'AML reporting'). per_page: Number of results (default 20, max 100).
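For intuition, trigram matching in the style of PostgreSQL's pg_trgm can be approximated as set overlap of 3-character shingles. This sketch mirrors the idea only — it is not pg_trgm's exact padding or normalization:

```python
def trigrams(text: str) -> set[str]:
    """Lowercase, pad, and extract 3-character shingles (pg_trgm-style)."""
    padded = f"  {text.lower()} "
    return {padded[i : i + 3] for i in range(len(padded) - 2)}

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of trigram sets: shared / total distinct trigrams."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

docs = [
    "Strong customer authentication",
    "ICT risk management",
    "AML reporting duties",
]
query = "customer authentication"
ranked = sorted(docs, key=lambda d: similarity(query, d), reverse=True)
print(ranked[0])  # → Strong customer authentication
```

This is why trigram search tolerates typos and partial phrases: a one-character edit disturbs only a few trigrams, so relevant titles still rank highly.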
    Connector
  • Update a project's schema (saves to database, does NOT deploy). ⚠️ CRITICAL: Follow ALL rules from create_project: • FLAT format (no 'fields' nesting) • string: MUST have max_length • decimal: MUST have precision + scale • Use "datetime" NOT "timestamp" • DON'T define: id, created_at, updated_at • NEVER create users/customers/employees tables (use app_users) ⚠️ MIGRATION RULES: • New fields MUST be "required": false OR have "default" value • Cannot add required field without default to existing tables • Safe: {new_field: {type: "string", max_length: 100, required: false}} WORKFLOW: 1. Use get_schema to see current schema 2. Modify following ALL rules 3. Call update_schema (saves only) 4. Call deploy_staging to apply changes 5. Monitor with get_job_status NOTE: This only saves the schema. You MUST call deploy_staging afterwards to apply changes.
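The migration rules above can be pre-checked client-side before calling the tool. A minimal sketch assuming the flat schema format, with `current` standing in for a get_schema result; the function and error strings are illustrative, not part of the tool:

```python
def migration_errors(current: dict, proposed: dict) -> list[str]:
    """Flag fields added to existing tables that are required but lack a default."""
    errors = []
    for table, fields in proposed.items():
        if table not in current:
            continue  # brand-new tables may freely have required fields
        for name, spec in fields.items():
            if name in current[table]:
                continue  # existing field, not an addition
            if spec.get("required", False) and "default" not in spec:
                errors.append(f"{table}.{name}: required new field needs a default")
    return errors

current = {"orders": {"total": {"type": "decimal", "precision": 10, "scale": 2}}}
proposed = {
    "orders": {
        "total": {"type": "decimal", "precision": 10, "scale": 2},
        "note": {"type": "string", "max_length": 100, "required": False},   # safe
        "status": {"type": "string", "max_length": 20, "required": True},   # unsafe
    }
}
print(migration_errors(current, proposed))
```

Running this before update_schema catches the one migration the tool would reject: adding a required field without a default to a table that already exists.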
    Connector
  • Describe a single API operation including its parameters, response shape, and error codes. WHEN TO USE: - Inspecting an endpoint's full contract before calling it. - Discovering which error codes an endpoint can return and how to recover. RETURNS: - operation: Full discovery record for the endpoint. - parameters: Raw OpenAPI parameter definitions. - request_body: Body schema (when applicable). - responses: Map of status code → description/schema. - linked_error_codes: Error catalog entries the endpoint can emit. EXAMPLE: Agent: "How do I call the screen audience endpoint?" describe_endpoint({ path: "/v1/data/screens/{screenId}/audience", method: "GET" })
    Connector
  • Search Cochrane systematic reviews via PubMed. Finds Cochrane Database of Systematic Reviews articles matching your query. Returns PubMed IDs, titles, and publication dates. Use get_review_detail with a PMID to get the full abstract. Args: query: Search terms for finding reviews (e.g. 'diabetes exercise', 'hypertension treatment', 'childhood vaccination safety'). limit: Maximum number of results to return (default 20, max 100).
    Connector
  • Get pre-built graph template schemas for common use cases. ⭐ USE THIS FIRST when creating a new graph project! Templates show the CORRECT graph schema format with: proper node definitions (description, flat_labels, schema with flat field definitions), relationship configurations (from, to, cardinality, data_schema), and hierarchical entity nesting. Available templates: Social Network (users, posts, follows), Knowledge Graph (topics, articles, authors), Product Catalog (products, categories, suppliers). You can use these templates directly with create_graph_project or modify them for your needs. TIP: Study these templates to understand the correct graph schema format before creating custom schemas.
    Connector
  • Creates a materialized view or stored procedure in the project's BigQuery data warehouse for data pre-aggregation. **When to use this tool:** - When the user needs to pre-aggregate data from multiple connectors (e.g., cross-channel marketing report) - When a query is too slow to run on-demand and benefits from materialization - When the user asks to "create a view", "save this as a table", "materialize this query" **Naming rules (enforced):** - Target dataset MUST be 'quanti_agg' (created automatically if it doesn't exist) - Object name MUST start with 'llm_' prefix (e.g., llm_weekly_spend) - Format: CREATE MATERIALIZED VIEW quanti_agg.llm_name AS SELECT ... **SQL format:** - CREATE MATERIALIZED VIEW: for pre-computed aggregation tables - CREATE OR REPLACE MATERIALIZED VIEW: to update an existing view - CREATE PROCEDURE: for complex multi-step transformations **Example:** CREATE MATERIALIZED VIEW quanti_agg.llm_weekly_channel_spend AS SELECT DATE_TRUNC(date, WEEK) as week, channel, SUM(spend) as total_spend FROM prod_google_ads_v2.campaign_stats GROUP BY 1, 2 **Limits:** Maximum 20 active aggregation views per project.
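The enforced naming rules can be validated before submitting the DDL. A minimal client-side sketch — the regex and function are assumptions of this example, not the tool's own check:

```python
import re

# Matches CREATE [OR REPLACE] MATERIALIZED VIEW / PROCEDURE dataset.name
_TARGET = re.compile(
    r"CREATE\s+(?:OR\s+REPLACE\s+)?(?:MATERIALIZED\s+VIEW|PROCEDURE)\s+(\w+)\.(\w+)",
    re.IGNORECASE,
)

def check_ddl(sql: str) -> list[str]:
    """Return rule violations: dataset must be quanti_agg, name must start llm_."""
    m = _TARGET.search(sql)
    if not m:
        return ["not a CREATE MATERIALIZED VIEW / CREATE PROCEDURE statement"]
    dataset, name = m.groups()
    problems = []
    if dataset != "quanti_agg":
        problems.append(f"dataset must be quanti_agg, got {dataset}")
    if not name.startswith("llm_"):
        problems.append(f"object name must start with llm_, got {name}")
    return problems

ok = "CREATE MATERIALIZED VIEW quanti_agg.llm_weekly_spend AS SELECT 1"
bad = "CREATE MATERIALIZED VIEW analytics.weekly_spend AS SELECT 1"
print(check_ddl(ok), check_ddl(bad))
```

An empty list means the statement satisfies both naming rules and can be submitted.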
    Connector
  • Lists pre-configured reports (prebuilds) available for a connector. **What is a prebuild?** A prebuild is a standardized report maintained by Quanti for a given connector (e.g., Campaign Stats for Google Ads). It defines the BigQuery table structure (columns, types, metrics) and the associated API query. **When to use this tool:** - When the user asks "what reports are available for [connector]?" - When the user doesn't know which data or metrics exist for a connector - BEFORE get_schema_context, to explore available reports for a connector - To understand the data structure before writing SQL **Difference with get_schema_context:** - list_prebuilds → discover which reports/tables EXIST for a connector (catalog) - get_schema_context → get the actual BigQuery schema for the client project (effective data) **Response format:** Returns a JSON with for each prebuild: its ID, name, description, BigQuery table name, and the list of fields (name, type, description, is_metric). Fields marked is_metric=true are aggregatable metrics (impressions, clicks, cost...), others are dimensions (date, campaign_name...). **SKU examples**: googleads, meta, tiktok, tiktok-organic, amazon-ads, amazon-dsp, piano, shopify-v2, microsoftads, prestashop-api, mailchimp, kwanko
    Connector
  • Audit a technology stack for exploitable vulnerabilities. Accepts a comma-separated list of technologies (max 5) and searches for critical/ high severity CVEs with public exploits for each one, sorted by EPSS exploitation probability. Use this when a user describes their infrastructure and wants to know what to patch first. Example: technologies='nginx, postgresql, node.js' returns a risk-sorted list of exploitable CVEs grouped by technology. Rate-limit cost: each technology requires up to 2 API calls; 5 technologies counts as up to 10 calls toward your rate limit.
    Connector
  • List all Gmail labels for the authenticated user. Returns both system labels (INBOX, SENT, TRASH, etc.) and user-created labels with message/thread counts. Use this to discover label IDs needed for add_labels, remove_labels, or search_email queries.
    Connector
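A sketch of the discovery step this tool enables, modeled loosely on the Gmail API's Label resource (where `type` is `"system"` or `"user"`). The concrete label values are invented; only the system/user distinction and the use of label IDs with add_labels/remove_labels come from the description.

```python
# Illustrative labels listing: system labels (INBOX, TRASH, ...) plus
# user-created labels. User label IDs are opaque strings, so they must
# be discovered here before calling add_labels / remove_labels.
labels = [
    {"id": "INBOX", "name": "INBOX", "type": "system"},
    {"id": "TRASH", "name": "TRASH", "type": "system"},
    {"id": "Label_7", "name": "Receipts", "type": "user"},
]

# Map human-readable user label names to the IDs the other tools expect.
user_labels = {l["name"]: l["id"] for l in labels if l["type"] == "user"}
print(user_labels["Receipts"])  # Label_7
```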
  • Browse and compare Licium's agents and tools. Use this when you want to SEE what's available before executing.
    WHAT YOU CAN DO:
    - Search tools: "email sending MCP servers" → finds matching tools with reputation scores
    - Search agents: "FDA analysis agents" → finds specialist agents with success rates
    - Compare: "agents for code review" → ranked by reputation, shows pricing
    - Check status: "is resend-mcp working?" → health check on a specific tool/agent
    - Find alternatives: "alternatives to X that failed" → backup options
    WHEN TO USE: When you want to browse, compare, or check before executing. If you just want results, use licium instead.
    Connector
  • Get an overview of the Velvoite regulatory corpus. Returns document counts by source, regulation family, entity type, urgency distribution, obligation summary, and date range. Call this FIRST to orient yourself before running queries. No parameters needed.
    Connector
  • Create a new RationalBloks project from a JSON schema.
    ⚠️ CRITICAL RULES - READ BEFORE CREATING SCHEMA:
    1. FLAT FORMAT (REQUIRED):
       ✅ CORRECT: {users: {email: {type: "string", max_length: 255}}}
       ❌ WRONG: {users: {fields: {email: {type: "string"}}}}
       DO NOT nest under a 'fields' key!
    2. FIELD TYPE REQUIREMENTS:
       • string: MUST have "max_length" (e.g., max_length: 255)
       • decimal: MUST have "precision" and "scale" (e.g., precision: 10, scale: 2)
       • datetime: Use "datetime" NOT "timestamp"
       • ALL fields: MUST have a "type" property
    3. AUTOMATIC FIELDS (DON'T define):
       • id (uuid, primary key)
       • created_at (datetime)
       • updated_at (datetime)
    4. USER AUTHENTICATION:
       ❌ NEVER create "users", "customers", "employees" tables with email/password
       ✅ USE the built-in app_users table
       Example: { "employee_profiles": { "user_id": {type: "uuid", foreign_key: "app_users.id", required: true}, "department": {type: "string", max_length: 100} } }
    5. AUTHORIZATION: Add user_id → app_users.id to enable "only see your own data"
       Example: { "orders": { "user_id": {type: "uuid", foreign_key: "app_users.id"}, "total": {type: "decimal", precision: 10, scale: 2} } }
    6. FIELD OPTIONS:
       • required: true/false
       • unique: true/false
       • default: any value
       • enum: ["val1", "val2"]
       • foreign_key: "table.id"
    AVAILABLE TYPES: string, text, integer, decimal, boolean, uuid, date, datetime, json, uuid_array, integer_array, text_array, float_array
    Array types store PostgreSQL native arrays with automatic GIN indexing:
    • uuid_array: UUID[] — for sets of references (e.g., tensor coordinates)
    • integer_array: BIGINT[] — for dimension indices, integer sets
    • text_array: TEXT[] — for tags, categories, label sets
    • float_array: DOUBLE PRECISION[] — for weight vectors, scores
    GIN-indexed operators: @> (contains), <@ (contained_by), && (overlaps)
    BACKEND ENGINE:
    • python (default): FastAPI backend — mature, full-featured
    • rust: Axum backend — faster cold starts, lower memory, high performance
    WORKFLOW:
    1. Use get_template_schemas FIRST to see valid examples
    2. Create the schema following ALL rules above
    3. Call this tool (optionally choose backend_type: "python" or "rust")
    4. Monitor with get_job_status (2-5 min deployment)
    After creation, use get_job_status with the returned job_id to monitor deployment.
    Connector
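The schema rules above lend themselves to a pre-flight check before calling the tool. The validator below is a sketch under the stated rules (flat format, every field needs a type, strings need max_length, decimals need precision and scale); it is not part of the RationalBloks API, and its error messages are invented.

```python
# Minimal pre-flight validator for the flat schema format described above.
def validate_schema(schema: dict) -> list[str]:
    errors = []
    for table, fields in schema.items():
        # Rule 1: columns must sit directly under the table, not under "fields".
        if "fields" in fields:
            errors.append(f"{table}: do not nest columns under a 'fields' key")
            continue
        for name, spec in fields.items():
            # Rule 2: every field needs a type; strings and decimals
            # carry extra required properties.
            if "type" not in spec:
                errors.append(f"{table}.{name}: missing 'type'")
            elif spec["type"] == "string" and "max_length" not in spec:
                errors.append(f"{table}.{name}: string requires 'max_length'")
            elif spec["type"] == "decimal" and not {"precision", "scale"} <= spec.keys():
                errors.append(f"{table}.{name}: decimal requires 'precision' and 'scale'")
    return errors


# The authorization example from rule 5 passes the check...
good = {"orders": {"user_id": {"type": "uuid", "foreign_key": "app_users.id"},
                   "total": {"type": "decimal", "precision": 10, "scale": 2}}}
# ...while the nested-"fields" anti-pattern from rule 1 fails it.
bad = {"users": {"fields": {"email": {"type": "string"}}}}

print(validate_schema(good))  # []
print(validate_schema(bad))   # ["users: do not nest columns under a 'fields' key"]
```

Running such a check locally catches the most common rejections (missing max_length, decimal without precision/scale) before spending a 2-5 minute deployment cycle on an invalid schema.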