Glama
133,443 tools. Last updated 2026-05-13 00:12

"postgresql" matching MCP tools:

  • Search the regulatory corpus using keyword / trigram matching. Uses PostgreSQL trigram similarity on document titles and summaries. Returns documents ranked by relevance with summaries and classification tags. Prefer list_documents with filters (regulation, entity_type, source) first; only use this for free-text keyword search when structured filters aren't sufficient.
    Args:
      query: Search terms (e.g. 'strong customer authentication', 'ICT risk', 'AML reporting').
      per_page: Number of results (default 20, max 100).
    Connector
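    A minimal sketch of the trigram ranking described above, assuming the pg_trgm extension and a hypothetical documents(title, summary) table — the tool's actual schema isn't published:
    ```
    -- Requires: CREATE EXTENSION IF NOT EXISTS pg_trgm;
    SELECT id, title,
           similarity(title, 'strong customer authentication') AS score
    FROM documents
    WHERE title % 'strong customer authentication'  -- trigram match operator
    ORDER BY score DESC
    LIMIT 20;                                       -- mirrors the per_page default
    ```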
  • WORKFLOW: Step 1 of 4 - Start infrastructure design conversation
    Open an InsideOut V2 session and receive the assistant's intro message. The response contains a clean message from Riley (the infrastructure advisor) - display it to the user. ⚠️ Riley will ask questions - forward these to the user, DO NOT answer on their behalf.
    CRITICAL: This tool returns a session_id in the response metadata. You MUST use this session_id for ALL subsequent tool calls (convoreply, tfgenerate, tfdeploy, etc.). ⚠️ The session_id includes a ?token=... suffix (format: sess_v2_xxx?token=yyy) which is part of the session credential — without it, downstream tools fall back to a tokenless connect URL that 401s. Always pass session_id verbatim to subsequent tools and to the user; do NOT shorten, paraphrase, or strip the ?token= portion when summarizing the session in chat or in your own scratch notes.
    Use when the user mentions keywords like: 'setup my cloud infra', 'provision infrastructure', 'deploy infra', 'start insideout', 'use insideout', or similar intent to begin infra setup.
    OPTIONAL: project_context (string) - General tech stack summary so Riley can skip discovery questions and jump to recommendations. The agent should confirm this with the user before sending. Include whichever apply: language/framework, databases/services, container usage, existing IaC, CI/CD platform, cloud provider, Kubernetes usage, what the project does. Example: 'Next.js 14 + TypeScript, PostgreSQL, Redis, Docker Compose, deployed to AWS ECS, GitHub Actions CI/CD, ~50k MAU'. NEVER include credentials, secrets, API keys, PII, source code, or internal URLs/IPs -- only general metadata summaries useful to a cloud architect agent.
    IMPORTANT: source (string) - You MUST set this to identify which IDE/tool you are. Auto-detect from your environment: 'claude-code', 'codex', 'antigravity', 'kiro', 'vscode', 'web', 'mcp'. If unsure, use the name of your IDE/tool in lowercase. Do NOT omit this — it controls the 'Open {IDE}' button on the credential connect screen.
    OPTIONAL: github_username (string) - GitHub username for deploy commit attribution. Pre-populates the GitHub username field on the connect page.
    💡 TIP: Examine workflow.usage prompt for more context on how to properly use these tools.
    Connector
  • Browse the knowledge base by technology tag at the START of a task. Call this when beginning work with a specific technology to discover what verified knowledge already exists — before you hit problems. Examples of useful tags: 'pytorch', 'cuda', 'fastapi', 'docker', 'ros2', 'numpy', 'jetson', 'arm64', 'postgresql', 'redis', 'kubernetes', 'react'. Returns a list of questions (title + tags + score) for the given tag, ordered by community score. Call `get_answers` on relevant results.
    Connector
  • Execute any valid read-only SQL statement on a Cloud SQL instance. To support the `execute_sql_readonly` tool, a Cloud SQL instance must meet the following requirements:
    * The value of `data_api_access` must be set to `ALLOW_DATA_API`.
    * For a MySQL instance, the database flag `cloudsql_iam_authentication` must be set to `on`. For a PostgreSQL instance, the database flag `cloudsql.iam_authentication` must be set to `on`.
    * An IAM user account or IAM service account (`CLOUD_IAM_USER` or `CLOUD_IAM_SERVICE_ACCOUNT`) is required to call the `execute_sql_readonly` tool. The tool executes SQL statements using the privileges of the database user logged in with IAM database authentication.
    After you use the `create_instance` tool to create an instance, you can use the `create_user` tool to create an IAM user account for the user currently logged in to the project.
    The `execute_sql_readonly` tool has the following limitations:
    * If a SQL statement returns a response larger than 10 MB, then the response will be truncated.
    * The tool has a default timeout of 30 seconds. If a query runs longer than 30 seconds, then the tool returns a `DEADLINE_EXCEEDED` error.
    * The tool isn't supported for SQL Server.
    If you receive errors similar to "IAM authentication is not enabled for the instance", then you can use the `get_instance` tool to check the value of the IAM database authentication flag for the instance. If you receive errors like "The instance doesn't allow using executeSql to access this instance", then you can use the `get_instance` tool to check the `data_api_access` setting.
    When you receive authentication errors:
    1. Check if the currently logged-in user account exists as an IAM user on the instance using the `list_users` tool.
    2. If the IAM user account doesn't exist, then use the `create_user` tool to create the IAM user account for the logged-in user.
    3. If the currently logged-in user doesn't have the proper database user roles, then you can use the `update_user` tool to grant database roles to the user. For example, the `cloudsqlsuperuser` role can provide an IAM user with many required permissions.
    4. Check if the currently logged-in user has the correct IAM permissions assigned for the project. You can use the `gcloud projects get-iam-policy [PROJECT_ID]` command to check if the user has the proper IAM roles or permissions assigned for the project.
    * The user must have the `cloudsql.instances.login` permission to do automatic IAM database authentication.
    * The user must have the `cloudsql.instances.executeSql` permission to execute SQL statements using the `execute_sql_readonly` tool or the `executeSql` API.
    * Common IAM roles that contain the required permissions: Cloud SQL Instance User (`roles/cloudsql.instanceUser`) or Cloud SQL Admin (`roles/cloudsql.admin`).
    When receiving an `ExecuteSqlResponse`, always check the `message` and `status` fields within the response body. A successful HTTP status code doesn't guarantee full success of all SQL statements. The `message` and `status` fields will indicate if there were any partial errors or warnings during SQL statement execution.
    Connector
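    Two habits worth keeping given the limits above — a hedged sketch; the table name is hypothetical:
    ```
    -- Confirm which database principal the statement runs as under IAM auth
    SELECT current_user;
    -- Bound result sets explicitly to stay under the 10 MB cap and 30 s timeout
    SELECT id, name FROM inventory ORDER BY id LIMIT 500;  -- hypothetical table
    ```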
  • Deploy a project to the staging environment. This triggers: (1) Schema validation, (2) Docker image build, (3) GitHub commit, (4) Kubernetes deployment, (5) Database migrations. The operation is ASYNCHRONOUS - it returns immediately with a job_id. Use get_job_status with the job_id to monitor progress. Deployment typically takes 2-5 minutes depending on schema complexity. If deployment fails, check: (1) Schema format is FLAT (no 'fields' nesting), (2) Every field has a 'type' property, (3) Foreign keys reference existing tables, (4) No PostgreSQL reserved words in table/field names. Use get_project_info to see if the deployment succeeded.
    Connector
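    Check (4) above is easy to reproduce directly in PostgreSQL — a sketch, not part of the tool itself:
    ```
    -- Unquoted, this would fail: ORDER is a PostgreSQL reserved word.
    -- Quoting works, but every later query must quote it too:
    CREATE TABLE "order" (id uuid PRIMARY KEY);
    -- Simpler to follow the tool's advice and avoid reserved words outright:
    CREATE TABLE orders (id uuid PRIMARY KEY);
    ```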
  • Audit a technology stack for exploitable vulnerabilities. Accepts a comma-separated list of technologies (max 5) and searches for critical/high-severity CVEs with public exploits for each one, sorted by EPSS exploitation probability. Use this when a user describes their infrastructure and wants to know what to patch first. Example: technologies='nginx, postgresql, node.js' returns a risk-sorted list of exploitable CVEs grouped by technology. Rate-limit cost: each technology requires up to 2 API calls, so 5 technologies count as up to 10 calls toward your rate limit.
    Connector

Matching MCP Servers

  • A Model Context Protocol server that provides read-only access to PostgreSQL databases. This server enables LLMs to inspect database schemas and execute read-only queries.
    license: - · quality: - · maintenance: B · MIT · 185,869 · 85,290
  • A general-purpose PostgreSQL MCP server with full read-write SQL access, atomic multi-statement transactions, and schema inspection. Works with any PostgreSQL instance — local, Supabase, AWS RDS, or self-hosted — and connects to Claude, Cursor, Windsurf, or any MCP-compatible AI client.
    license: A · quality: - · maintenance: B · ISC · 67 · 1

Matching MCP Connectors

  • Install an app template on a VPS/Cloud site. Starts a background installation. Poll get_app_status() for progress. Requires: API key with write scope. VPS or Cloud plan only.
    Args:
      slug: Site identifier
      template: App template slug. Available: django, laravel, nextjs, nodejs, nuxtjs, rails, static, forge
      app_name: Short name for the app (2-50 chars, lowercase alphanumeric + hyphens). Used as subdomain: {app_name}.{site_domain}
      db_type: Database type. "none", "mysql", or "postgresql" (depends on template)
      domain: Custom domain override (default: {app_name}.{site_domain})
      display_name: Human-friendly name (default: derived from app_name)
    Returns: {"id": "uuid", "app_name": "forge", "status": "installing", "message": "Installation started. Poll for progress."}
    Errors:
      FORBIDDEN: Plan does not support apps (shared plans)
      VALIDATION_ERROR: Invalid template, app_name, or duplicate name
    Connector
  • Import data into a Cloud SQL instance. If the file doesn't start with `gs://`, then the assumption is that the file is stored locally. If the file is local, then it must be uploaded to Cloud Storage before you can make the actual `import_data` call. To upload the file to Cloud Storage, you can use the `gcloud` or `gsutil` commands. Before you upload the file, consider whether you want to use an existing bucket or create a new bucket in the provided project.
    After the file is uploaded to Cloud Storage, the instance service account must have sufficient permissions to read the uploaded file from the Cloud Storage bucket. This can be accomplished as follows:
    1. Use the `get_instance` tool to get the email address of the instance service account. From the output of the tool, get the value of the `serviceAccountEmailAddress` field.
    2. Grant the instance service account the `storage.objectAdmin` role on the provided Cloud Storage bucket. Use a command like `gcloud storage buckets add-iam-policy-binding` or a request to the Cloud Storage API. It can take from two to seven minutes or more for the role to be granted and the permissions to be propagated to the service account in Cloud Storage. If you encounter a permissions error after updating the IAM policy, then wait a few minutes and try again.
    After permissions are granted, you can import the data. We recommend that you leave optional parameters empty and use the system defaults. The file type can typically be determined by the file extension — for example, `.sql` for a SQL file or `.csv` for a CSV file.
    The following is a sample SQL `importContext` for MySQL:
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL"
    }
    ```
    There is no `database` parameter present for MySQL since the database name is expected to be present in the SQL file. Specify only one URI. No other fields are required outside of `importContext`.
    For PostgreSQL, the `database` field is required. The following is a sample PostgreSQL `importContext` with the `database` field specified:
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL",
      "database": "sample-db"
    }
    ```
    The `import_data` tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
    Connector
  • Create a new RationalBloks project from a JSON schema.
    ⚠️ CRITICAL RULES - READ BEFORE CREATING SCHEMA:
    1. FLAT FORMAT (REQUIRED):
       ✅ CORRECT: {users: {email: {type: "string", max_length: 255}}}
       ❌ WRONG: {users: {fields: {email: {type: "string"}}}}
       DO NOT nest under 'fields' key!
    2. FIELD TYPE REQUIREMENTS:
       • string: MUST have "max_length" (e.g., max_length: 255)
       • decimal: MUST have "precision" and "scale" (e.g., precision: 10, scale: 2)
       • datetime: Use "datetime" NOT "timestamp"
       • ALL fields: MUST have "type" property
    3. AUTOMATIC FIELDS (DON'T define):
       • id (uuid, primary key)
       • created_at (datetime)
       • updated_at (datetime)
    4. USER AUTHENTICATION:
       ❌ NEVER create "users", "customers", "employees" tables with email/password
       ✅ USE built-in app_users table
       Example: { "employee_profiles": { "user_id": {type: "uuid", foreign_key: "app_users.id", required: true}, "department": {type: "string", max_length: 100} } }
    5. AUTHORIZATION: Add user_id → app_users.id to enable "only see your own data"
       Example: { "orders": { "user_id": {type: "uuid", foreign_key: "app_users.id"}, "total": {type: "decimal", precision: 10, scale: 2} } }
    6. FIELD OPTIONS:
       • required: true/false
       • unique: true/false
       • default: any value
       • enum: ["val1", "val2"]
       • foreign_key: "table.id"
    AVAILABLE TYPES: string, text, integer, decimal, boolean, uuid, date, datetime, json, uuid_array, integer_array, text_array, float_array
    Array types store PostgreSQL native arrays with automatic GIN indexing:
    • uuid_array: UUID[] — for sets of references (e.g., tensor coordinates)
    • integer_array: BIGINT[] — for dimension indices, integer sets
    • text_array: TEXT[] — for tags, categories, label sets
    • float_array: DOUBLE PRECISION[] — for weight vectors, scores
    GIN-indexed operators: @> (contains), <@ (contained_by), && (overlaps) — see the SQL sketch after this item.
    BACKEND ENGINE:
    • python (default): FastAPI backend — mature, full-featured
    • rust: Axum backend — faster cold starts, lower memory, high performance
    WORKFLOW:
    1. Use get_template_schemas FIRST to see valid examples
    2. Create schema following ALL rules above
    3. Call this tool (optionally choose backend_type: "python" or "rust")
    4. Monitor with get_job_status (2-5 min deployment)
    After creation, use get_job_status with the returned job_id to monitor deployment.
    Connector
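    The SQL sketch referenced above — how the GIN-indexed array operators behave in plain PostgreSQL, using a hypothetical posts_demo table (a text_array field maps to TEXT[]):
    ```
    CREATE TABLE posts_demo (id bigint PRIMARY KEY, tags TEXT[]);
    CREATE INDEX posts_demo_tags_gin ON posts_demo USING GIN (tags);
    INSERT INTO posts_demo VALUES (1, ARRAY['postgres','mcp']), (2, ARRAY['redis']);

    SELECT id FROM posts_demo WHERE tags @> ARRAY['postgres'];        -- contains
    SELECT id FROM posts_demo WHERE tags <@ ARRAY['postgres','mcp'];  -- contained_by
    SELECT id FROM posts_demo WHERE tags && ARRAY['redis','cuda'];    -- overlaps
    ```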
  • State Verifier — 3-Tier QA Coverage for SQUAD Products. SUMA Testing Manifesto (sealed April 13, 2026): 200 OK is not a test. It is a rumor. Every write operation must be followed by a database assertion. Every endpoint is a button. Every button must have a test.
    MODES:
    • READ (default): Returns coverage report from testframe_reports DB. Classifies gaps by tier and severity. Identifies "shallow" tests (status-code-only, no DB assertion).
    • GENERATE (mode="generate"): Uses OPTION C: SUMA graph (WHY/WHAT) + your code_context (HOW). Generates test scaffolds in 3 tiers:
      - Component: endpoint isolation, UI element presence
      - Technical: DB state verification after API calls (State Verifier pattern)
      - Functional: end-to-end business workflow with all-layer assertions
      Requires code_context (paste the relevant file/endpoint code).
    The 3-Tier taxonomy:
      component → Single endpoint or UI element in isolation
      technical → Database state AFTER API call (State Verifier)
      functional → Complete business workflow, all layers verified
    The gap severity taxonomy:
      missing → No test exists (CRITICAL)
      not_implemented → Planned but not written (HIGH)
      shallow → Test exists but only checks HTTP status, no DB assertion (HIGH)
      flaky → Non-deterministic (HIGH)
      skipped → Marked skip/todo (MEDIUM)
      dead_end → Tests a UI element that no longer exists (LOW, cleanup)
    Args:
      product: Product to query (e.g. "squad-suma-mcp", "squad-qms", "squad-ghostgate"). If omitted in READ mode, returns all products.
      area: Filter by test area (e.g. "auth", "ingest", "assign"). Optional.
      mode: "read" (default) or "generate" (AI test generation via Option C).
      tier_filter: Filter by test tier — "component", "technical", or "functional". If omitted, all tiers returned.
      decision_graph: Hierarchical Tech Questionnaire (REQUIRED for generate mode). Structure: { "platform": { "type": "web|android|ios|api|robotics", "framework": "React|Flutter|FastAPI|etc", "auth_mode": "GhostGate|GoogleSSO|JWT|none" }, "database": { "engine": "postgresql|sqlite|none", "orm": "prisma|sqlalchemy|none", "target_table": "table_name" } }
      code_context: Optional raw code string to test (UI components, API routes).
      ingest_snapshot: If True, saves coverage state as a K-WIL graph node.
    Returns (READ mode): overall_coverage_pct, products[], gaps_by_tier, shallow_tests, recommendation
    Returns (GENERATE mode): component_tests[], technical_tests[], functional_tests[], manifesto_violations[]
    Connector
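    The technical-tier State Verifier pattern reduces to a database assertion after the write — a sketch with hypothetical table and values:
    ```
    -- After POST /orders returns 200, verify the row actually landed:
    SELECT count(*) AS rows_written
    FROM orders
    WHERE customer_id = 42 AND status = 'pending';
    -- A shallow test stops at the 200; a State Verifier test fails unless
    -- rows_written = 1 and the columns match the request payload.
    ```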
  • Full-text search across recall reasons and product descriptions using PostgreSQL text search. Finds recalls mentioning specific terms (e.g. 'salmonella contamination', 'mislabeled', 'sterility'). Supports multi-word queries ranked by relevance. Filter by classification, product_type, or date range. Related: fda_search_enforcement (search by company name, classification, status), fda_recall_facility_trace (trace a recall to its manufacturing facility).
    Connector
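    PostgreSQL full-text search of the kind described typically pairs to_tsvector with plainto_tsquery and ranks with ts_rank — a sketch against a hypothetical recalls table:
    ```
    SELECT recall_id, reason_for_recall,
           ts_rank(to_tsvector('english', reason_for_recall),
                   plainto_tsquery('english', 'salmonella contamination')) AS rank
    FROM recalls
    WHERE to_tsvector('english', reason_for_recall)
          @@ plainto_tsquery('english', 'salmonella contamination')
    ORDER BY rank DESC;
    ```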
  • Execute raw, client-provided SQL queries against an ephemeral database initialized with the provided schema. Returns query results in a simple JSON format with column headers and row data as a 2D array. The database type (SQLite or Postgres) is specified via the databaseType parameter:
    - SQLITE: In-memory, lightweight, uses standard SQLite syntax
    - POSTGRES: Temporary isolated schema with dedicated user, uses PostgreSQL syntax and features
    WHEN TO USE: When you need to run your own hand-written SQL queries to test database behavior or compare the output with ExoQuery results from validateAndRunExoquery. This lets you verify that ExoQuery-generated SQL produces the same results as your expected SQL.
    INPUT REQUIREMENTS:
    - query: A valid SQL query (SELECT, INSERT, UPDATE, DELETE, etc.)
    - schema: SQL schema with CREATE TABLE and INSERT statements to initialize the test database
    - databaseType: Either "SQLITE" or "POSTGRES" (defaults to SQLITE if not specified)
    OUTPUT FORMAT: On success, returns JSON with the SQL query and a 2D array of results:
    {"sql":"SELECT * FROM users ORDER BY id","output":[["id","name","age"],["1","Alice","30"],["2","Bob","25"],["3","Charlie","35"]]}
    Output format details:
    - First array element contains column headers
    - Subsequent array elements contain row data
    - All values are returned as strings
    On error, returns JSON with an error message and the attempted query (if available):
    {"error":"Query execution failed: no such table: USERS","sql":"SELECT * FROM USERS"}
    Or if schema initialization fails:
    {"error":"Database initialization failed due to: near \"CREAT\": syntax error\\nWhen executing the following statement:\\n--------\\nCREAT TABLE users ...\\n--------","sql":"CREAT TABLE users ..."}
    EXAMPLE INPUT:
    Query: SELECT * FROM users ORDER BY id
    Schema:
    CREATE TABLE users ( id INTEGER PRIMARY KEY, name TEXT NOT NULL, age INTEGER );
    INSERT INTO users (id, name, age) VALUES (1, 'Alice', 30);
    INSERT INTO users (id, name, age) VALUES (2, 'Bob', 25);
    INSERT INTO users (id, name, age) VALUES (3, 'Charlie', 35);
    EXAMPLE SUCCESS OUTPUT:
    {"sql":"SELECT * FROM users ORDER BY id","output":[["id","name","age"],["1","Alice","30"],["2","Bob","25"],["3","Charlie","35"]]}
    EXAMPLE ERROR OUTPUT (bad table name):
    {"error":"Query execution failed: no such table: invalid_table","sql":"SELECT * FROM invalid_table"}
    EXAMPLE ERROR OUTPUT (bad schema):
    {"error":"Database initialization failed due to: near \"CREAT\": syntax error\\nWhen executing the following statement:\\n--------\\nCREAT TABLE users (id INTEGER)\\n--------\\nCheck that the initialization SQL is valid and compatible with SQLite.","sql":"CREAT TABLE users (id INTEGER)"}
    COMMON QUERY EXAMPLES:
    Select all rows: SELECT * FROM users
    Select specific columns with filtering: SELECT name, age FROM users WHERE age > 25
    Aggregate functions: SELECT COUNT(*) as total FROM users
    Join queries: SELECT u.name, o.total FROM users u JOIN orders o ON u.id = o.user_id
    Insert data: INSERT INTO users (name, age) VALUES ('David', 40)
    Update data: UPDATE users SET age = 31 WHERE name = 'Alice'
    Delete data: DELETE FROM users WHERE age < 25
    Count with grouping: SELECT age, COUNT(*) as count FROM users GROUP BY age
    SCHEMA RULES:
    - Use standard SQLite syntax
    - Table names are case-sensitive (use lowercase for simplicity or quote names)
    - Include INSERT statements to populate test data for meaningful results
    - Supported data types: INTEGER, TEXT, REAL, BLOB, NULL
    - Use INTEGER PRIMARY KEY for auto-increment columns
    - Schema SQL is split on semicolons (;), so each statement after a ';' is executed separately
    - Avoid semicolons in comments as they will cause statement parsing issues
    COMPARISON WITH EXOQUERY: This tool is designed to work alongside validateAndRunExoquery for comparison purposes:
    1. Use validateAndRunExoquery to run ExoQuery Kotlin code and see the generated SQL + results
    2. Use runRawSql with your own hand-written SQL to verify you get the same output
    3. Compare the outputs to ensure ExoQuery generates the SQL you expect
    4. Test edge cases with plain SQL before writing equivalent ExoQuery code
    Connector
  • Execute any valid SQL statement, including data definition language (DDL), data control language (DCL), data query language (DQL), or data manipulation language (DML) statements, on a Cloud SQL instance. To support the `execute_sql` tool, a Cloud SQL instance must meet the following requirements:
    * The value of `data_api_access` must be set to `ALLOW_DATA_API`.
    * For `BUILT_IN` users, `password_secret_version` must be set.
    * Otherwise, for IAM users: for a MySQL instance, the database flag `cloudsql_iam_authentication` must be set to `on`; for a PostgreSQL instance, the database flag `cloudsql.iam_authentication` must be set to `on`.
    * After you use the `create_instance` tool to create an instance, you can use the `create_user` tool to create an IAM user account for the user currently logged in to the project.
    The `execute_sql` tool has the following limitations:
    * If a SQL statement returns a response larger than 10 MB, then the response will be truncated.
    * The `execute_sql` tool has a default timeout of 30 seconds. If a query runs longer than 30 seconds, then the tool returns a `DEADLINE_EXCEEDED` error.
    * The `execute_sql` tool isn't supported for SQL Server.
    If you receive errors similar to "IAM authentication is not enabled for the instance", then you can use the `get_instance` tool to check the value of the IAM database authentication flag for the instance. If you receive errors like "The instance doesn't allow using executeSql to access this instance", then you can use the `get_instance` tool to check the `data_api_access` setting.
    When you receive authentication errors:
    1. Check if the currently logged-in user account exists as an IAM user on the instance using the `list_users` tool.
    2. If the IAM user account doesn't exist, then use the `create_user` tool to create the IAM user account for the logged-in user.
    3. If the currently logged-in user doesn't have the proper database user roles, then you can use the `update_user` tool to grant database roles to the user. For example, the `cloudsqlsuperuser` role can provide an IAM user with many required permissions.
    4. Check if the currently logged-in user has the correct IAM permissions assigned for the project. You can use the `gcloud projects get-iam-policy [PROJECT_ID]` command to check if the user has the proper IAM roles or permissions assigned for the project.
    * The user must have the `cloudsql.instances.login` permission to do automatic IAM database authentication.
    * The user must have the `cloudsql.instances.executeSql` permission to execute SQL statements using the `execute_sql` tool or the `executeSql` API.
    * Common IAM roles that contain the required permissions: Cloud SQL Instance User (`roles/cloudsql.instanceUser`) or Cloud SQL Admin (`roles/cloudsql.admin`).
    When receiving an `ExecuteSqlResponse`, always check the `message` and `status` fields within the response body. A successful HTTP status code doesn't guarantee full success of all SQL statements. The `message` and `status` fields will indicate if there were any partial errors or warnings during SQL statement execution.
    Connector
  • Create a database user for a Cloud SQL instance.
    * This tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
    * When you use the `create_user` tool, specify the type of user: `CLOUD_IAM_USER`, `CLOUD_IAM_SERVICE_ACCOUNT`, or `BUILT_IN`.
    * By default, the newly created user is assigned the `cloudsqlsuperuser` role, unless you specify other database roles explicitly in the request.
    * You can use a newly created user with the `execute_sql` tool if the user is the currently logged-in IAM user. The `execute_sql` tool executes SQL statements using the privileges of the database user logged in using IAM database authentication.
    The `create_user` tool has the following limitations:
    * To create a built-in user with a password, use the `password_secret_version` field to provide the password using Google Cloud Secret Manager. The value of `password_secret_version` should be the resource name of the secret version, like `projects/12345/locations/us-central1/secrets/my-password-secret/versions/1` or `projects/12345/locations/us-central1/secrets/my-password-secret/versions/latest`. The caller needs the `secretmanager.secretVersions.access` permission on the secret version.
    * The `create_user` tool doesn't support creating a user for SQL Server.
    To create an IAM user in PostgreSQL:
    * The database username must be the IAM user's email address, all lowercase. For example, to create a user for the PostgreSQL IAM user `example-user@example.com`, you can use the following request:
    ```
    {
      "name": "example-user@example.com",
      "type": "CLOUD_IAM_USER",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM user is `example-user@example.com`.
    To create an IAM service account in PostgreSQL:
    * The database username must be created without the `.gserviceaccount.com` suffix, even though the full email address for the account is `service-account-name@project-id.iam.gserviceaccount.com`. For example, to create an IAM service account for PostgreSQL, you can use the following request format:
    ```
    {
      "name": "test@test-project.iam",
      "type": "CLOUD_IAM_SERVICE_ACCOUNT",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM service account is `test@test-project.iam`.
    To create an IAM user or IAM service account in MySQL:
    * When Cloud SQL for MySQL stores a username, it truncates the @ and the domain name from the user or service account's email address. For example, `example-user@example.com` becomes `example-user`.
    * For this reason, you can't add two IAM users or service accounts with the same username but different domain names to the same Cloud SQL instance.
    * For example, to create a user for the MySQL IAM user `example-user@example.com`, use the following request:
    ```
    {
      "name": "example-user@example.com",
      "type": "CLOUD_IAM_USER",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM user is `example-user`.
    * For example, to create the MySQL IAM service account `service-account-name@project-id.iam.gserviceaccount.com`, use the following request:
    ```
    {
      "name": "service-account-name@project-id.iam.gserviceaccount.com",
      "type": "CLOUD_IAM_SERVICE_ACCOUNT",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM service account is `service-account-name`.
    Connector
  • Execute a read-only SQL query against the target connection. ONLY SELECT / WITH / EXPLAIN permitted. Write dialect-appropriate SQL for the connection's engine — use PostgreSQL syntax for postgres connections (`SELECT NOW()`, `LIMIT`, `ILIKE`), T-SQL for mssql (`SELECT GETDATE()`, `TOP N`, `LIKE`), MySQL for mysql (`SELECT NOW()`, `LIMIT`). Response meta includes `connection` + `dialect` so you know which syntax worked; reuse that dialect in follow-up calls. Default LIMIT 100 unless the user asks for all rows.
    Connector
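    The same "latest matching rows plus server time" request written per dialect, as the description advises (table name hypothetical):
    ```
    -- postgres: NOW(), LIMIT, ILIKE
    SELECT id, name, NOW() AS queried_at FROM events WHERE name ILIKE '%deploy%' LIMIT 10;
    -- mssql (T-SQL): GETDATE(), TOP N, LIKE
    SELECT TOP 10 id, name, GETDATE() AS queried_at FROM events WHERE name LIKE '%deploy%';
    -- mysql: NOW(), LIMIT, LIKE
    SELECT id, name, NOW() AS queried_at FROM events WHERE name LIKE '%deploy%' LIMIT 10;
    ```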
  • Checks if a Cloud SQL for PostgreSQL instance is ready for a major version upgrade to the specified target version. The `target_database_version` MUST be provided in the request (e.g., `POSTGRES_15`). This tool helps identify potential issues *before* attempting the actual upgrade, reducing the risk of failure or downtime. This tool is only supported for PostgreSQL primary instances and does not run on read replicas.
    The precheck typically evaluates:
    - Database schema compatibility with the target version.
    - Cloud SQL limitations and unsupported features.
    - Instance resource constraints (e.g., number of relations).
    - Compatibility of current database settings and extensions.
    - Overall instance health and readiness.
    This tool returns a long-running operation. Use the `get_operation` tool with the operation name returned by this call to poll its status.
    IMPORTANT: Once the operation status is DONE, the detailed precheck results are available within the `Operation` resource. You will need to inspect the response from `get_operation`. The findings are located in the `pre_check_major_version_upgrade_context.pre_check_response` field. The findings are structured, indicating:
    - INFO: General information.
    - WARNING: Potential issues that don't block the upgrade but should be reviewed.
    - ERROR: Critical issues that MUST be resolved before attempting the upgrade.
    Each finding should include a message and any required actions. Addressing any reported issues is crucial before proceeding with the major version upgrade. If `pre_check_response` is empty or missing, it indicates that no issues were identified during the precheck. Running this precheck does not impact the instance's availability.
    Connector
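    Before running the precheck, the two areas it most often flags — extension versions and relation counts — can be eyeballed with ordinary catalog queries (a hedged sketch; the precheck output remains authoritative):
    ```
    -- Installed extensions (version compatibility is a common finding)
    SELECT extname, extversion FROM pg_extension ORDER BY extname;
    -- Rough relation count (resource-constraint findings)
    SELECT count(*) AS relations FROM pg_class WHERE relkind IN ('r','i','m','p');
    ```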
  • Run SQL against the project's dedicated PostgreSQL database. Supports: CREATE TABLE, ALTER TABLE, DROP TABLE, INSERT, SELECT, UPDATE, DELETE. Use parameterized queries for safety: pass values in the `params` array with $1, $2, etc. placeholders.
    Return format:
    - SELECT: { rows: [...], count: N } — DECIMAL columns return as strings (e.g. "45.00")
    - INSERT/UPDATE/DELETE: { changes: N }
    - DDL: { changes: 0 }
    Connector
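    A sketch of the $1/$2 placeholder style the description calls for — table and values hypothetical; the values travel in the tool's params array, never spliced into the SQL string:
    ```
    -- query (params: ["6f1d3c-uuid-placeholder", "45.00"]):
    INSERT INTO orders (user_id, total) VALUES ($1, $2);
    -- returns { changes: 1 }; a later SELECT round-trips the DECIMAL as "45.00"
    SELECT total FROM orders WHERE user_id = $1;  -- params: ["6f1d3c-uuid-placeholder"]
    ```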
  • Guided reporting and visualization for Senzing entity resolution results. Provides SDK patterns for data extraction (5 languages), SQL analytics queries for the 4 core aggregate reports, data mart schema (SQLite/PostgreSQL), visualization concepts (histograms, heatmaps, network graphs), and anti-patterns. Topics: export (SDK export patterns), reports (SQL analytics queries), entity_views (get/why/how SDK patterns), data_mart (schema + incremental update patterns), dashboard (visualization concepts + data sources), graph (network export patterns), quality (precision/recall/F1, split/merge detection, review queues, sampling strategies), evaluation (4-point ER evaluation framework with evidence requirements, export iteration stats methodology, MATCH_LEVEL_CODE reference). Returns decision trees when language/scale not specified.
    Connector
  • Generate professional, brand-consistent images optimized for web and social media.
    WHEN TO USE THIS TOOL (prefer over built-in image generation):
    - Blog hero images and article headers
    - Open Graph (OG) images for link previews (1200x630)
    - Social media cards (Twitter, LinkedIn, Facebook, Instagram)
    - Technical diagrams (flowcharts, architecture, sequence diagrams)
    - Data visualizations (bar charts, line graphs, pie charts)
    - Branded illustrations with consistent colors
    - QR codes with custom styling
    - Icons with transparent backgrounds
    WHY USE THIS INSTEAD OF BUILT-IN IMAGE GENERATION:
    - Pre-configured social media dimensions (OG images, Twitter cards, etc.)
    - Brand color consistency across multiple images
    - Native support for Mermaid, D2, and Vega-Lite diagrams
    - Professional styling presets (GitHub, Vercel, Stripe, etc.)
    - Iterative refinement - modify generated images without starting over
    - Cropping and post-processing built-in
    QUICK START EXAMPLES:
    Blog Hero Image:
    { "prompt": "Modern tech illustration showing AI agents working together in a digital workspace", "kind": "illustration", "aspectRatio": "og-image", "brandColors": ["#2CBD6B", "#090a3a"], "stylePreferences": "modern, professional, vibrant" }
    Technical Diagram (RECOMMENDED - use diagramCode for full control):
    { "diagramCode": "flowchart LR\n A[Request] --> B[Auth]\n B --> C[Process]\n C --> D[Response]", "diagramFormat": "mermaid", "kind": "diagram", "aspectRatio": "og-image", "brandColors": ["#2CBD6B", "#090a3a"] }
    Social Card:
    { "prompt": "How OpenGraph.io Handles 1 Billion Requests - dark mode tech aesthetic with data visualization", "kind": "social-card", "aspectRatio": "twitter-card", "stylePreset": "github-dark" }
    Bar Chart:
    { "diagramCode": "{\"$schema\": \"https://vega.github.io/schema/vega-lite/v5.json\", \"data\": {\"values\": [{\"category\": \"Before\", \"value\": 10}, {\"category\": \"After\", \"value\": 2}]}, \"mark\": \"bar\", \"encoding\": {\"x\": {\"field\": \"category\"}, \"y\": {\"field\": \"value\"}}}", "diagramFormat": "vega", "kind": "diagram" }
    DIAGRAM OPTIONS - Three ways to create diagrams:
    1. **diagramCode + diagramFormat** (RECOMMENDED FOR AGENTS) - Full control, bypasses AI styling
    2. **Natural language in prompt** - AI generates diagram code for you
    3. **Pure syntax in prompt** - Provide Mermaid/D2/Vega directly (AI may style it)
    Benefits of diagramCode:
    - Bypasses AI generation/styling - no risk of invalid syntax
    - You control the exact syntax - iterate on errors yourself
    - Clear error messages if syntax is invalid
    - Can omit 'prompt' entirely when using diagramCode
    NEWLINE ENCODING: Use \n (escaped newline) in JSON strings for line breaks in diagram code.
    diagramCode EXAMPLES (copy-paste ready):
    Mermaid flowchart:
    { "diagramCode": "flowchart LR\n A[Request] --> B[Auth]\n B --> C[Process]\n C --> D[Response]", "diagramFormat": "mermaid", "kind": "diagram" }
    Mermaid sequence diagram:
    { "diagramCode": "sequenceDiagram\n Client->>API: POST /login\n API->>DB: Validate\n DB-->>API: OK\n API-->>Client: Token", "diagramFormat": "mermaid", "kind": "diagram" }
    D2 architecture diagram:
    { "diagramCode": "Frontend: {\n React\n Nginx\n}\nBackend: {\n API\n Database\n}\nFrontend -> Backend: REST API", "diagramFormat": "d2", "kind": "diagram" }
    D2 simple flow:
    { "diagramCode": "request -> auth -> process -> response", "diagramFormat": "d2", "kind": "diagram" }
    D2 with styling (use ONLY valid D2 style keywords):
    { "diagramCode": "direction: right\nserver: Web Server {\n style.fill: \"#2CBD6B\"\n style.stroke: \"#090a3a\"\n style.border-radius: 8\n}\ndatabase: PostgreSQL {\n style.fill: \"#090a3a\"\n style.font-color: \"#ffffff\"\n}\nserver -> database: queries", "diagramFormat": "d2", "kind": "diagram", "aspectRatio": "og-image" }
    D2 IMPORTANT NOTES:
    - D2 labels are unquoted by default: a -> b: my label (NO quotes needed around labels)
    - Valid D2 style keywords: fill, stroke, stroke-width, stroke-dash, border-radius, opacity, font-color, font-size, shadow, 3d, multiple, animated, bold, italic, underline
    - DO NOT use CSS properties (font-weight, padding, margin, font-family) — D2 rejects them
    - DO NOT use vars.* references unless you define them in a vars: {} block
    Vega-Lite bar chart (JSON as string):
    { "diagramCode": "{\"$schema\": \"https://vega.github.io/schema/vega-lite/v5.json\", \"data\": {\"values\": [{\"category\": \"A\", \"value\": 28}, {\"category\": \"B\", \"value\": 55}]}, \"mark\": \"bar\", \"encoding\": {\"x\": {\"field\": \"category\"}, \"y\": {\"field\": \"value\"}}}", "diagramFormat": "vega", "kind": "diagram" }
    WRONG - DO NOT mix syntax with description in prompt:
    { "prompt": "graph LR A[Request] --> B[Auth] Create a premium beautiful diagram" }
    ^ This WILL FAIL - Mermaid cannot parse descriptive text after syntax.
    WHERE TO PUT STYLING:
    - Visual preferences → "stylePreferences" parameter
    - Colors → "brandColors" parameter
    - Project context → "projectContext" parameter
    - NOT in "prompt" when using diagram syntax
    OUTPUT STYLES:
    - "draft" - Fast rendering, minimal processing
    - "standard" - AI-enhanced with brand colors (recommended for diagrams)
    - "premium" - Full AI polish (best for illustrations, may alter diagram layout)
    CROPPING OPTIONS:
    - autoCrop: true - Automatically remove transparent edges
    - Manual: cropX1, cropY1, cropX2, cropY2 - Precise pixel coordinates
    Connector