Glama
133,676 tools. Last updated 2026-05-13 04:11

"Reading data from MySQL database" matching MCP tools:

  • Generate dialect-correct ALTER TABLE migration SQL + rollback from a plain-English intent. Output uses the connection's exact dialect (ALTER TABLE for all three, plus pg-specific `USING` casts / mssql-specific `sp_rename` / mysql-specific `MODIFY COLUMN`). Never executes the SQL. Check the response's `dialect` field before manually editing — don't hand-translate across dialects. A sketch of this per-dialect output follows this entry. [BUILD tier]
    Connector
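    A hedged sketch of the per-dialect output this entry describes, assuming a hypothetical `users.nickname` column being widened; the real SQL comes from the tool and should match its `dialect` field:
    ```sql
    -- Intent: "widen users.nickname to 255 characters" (hypothetical example)
    -- postgres: the USING cast is spelled out explicitly
    ALTER TABLE users ALTER COLUMN nickname TYPE varchar(255) USING nickname::varchar(255);
    -- mysql: MODIFY COLUMN restates the full column definition
    ALTER TABLE users MODIFY COLUMN nickname VARCHAR(255);
    -- mssql: ALTER COLUMN changes the type in place
    ALTER TABLE users ALTER COLUMN nickname VARCHAR(255);
    -- Rollback reverses the widening (assuming a hypothetical original width of 50), e.g. postgres:
    -- ALTER TABLE users ALTER COLUMN nickname TYPE varchar(50) USING nickname::varchar(50);
    ```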
  • ⚠️ SQL MUST BE VALID IN EVERY DIALECT YOU TARGET — stick to ANSI-ish SELECT syntax when mixing pg/mysql/mssql; `SELECT TOP 10` (mssql) or `LIMIT` (the others) will fail on the wrong side (a portable-vs-non-portable sketch follows this entry). Runs the same query across 2-4 connections in parallel and returns per-connection rows + errors for diffing. Canonical use cases: regional compare (`['mssql-reporting-us', 'mssql-reporting-eu']`), cross-dialect sync check (`['prod-postgres-fleet', 'prod-mysql-app']`), 3-env drift, 4-region compare. Resolve every connection name via `list_connections` first; the tool fails per-connection on unknown names. ARCHITECT-tier cap: 4 connections; https://www.thinair.co/ for unlimited. [ARCHITECT tier]
    Connector
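    A minimal sketch of the "ANSI-ish" guidance above, assuming a hypothetical `orders` table. The portable form avoids row-limit clauses entirely, which is exactly where pg/mysql/mssql diverge:
    ```sql
    -- Portable across postgres, mysql, and mssql: plain ANSI SELECT,
    -- no LIMIT / TOP / FETCH FIRST, no dialect-specific functions.
    SELECT region, COUNT(*) AS order_count
    FROM orders
    WHERE order_date >= '2026-01-01'
    GROUP BY region
    ORDER BY order_count DESC;

    -- NOT portable: each of these fails on the other engines.
    -- SELECT TOP 10 * FROM orders;      -- mssql only
    -- SELECT * FROM orders LIMIT 10;    -- postgres / mysql only
    ```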
  • DESTRUCTIVE — IRREVERSIBLE. Permanently delete a file from the user's Drive. Removes the file from S3 storage and the database; storage quota is freed immediately. ALWAYS ask for explicit user confirmation before calling this tool, and explain what will be lost.
    Parameters to validate before calling:
    - file_token (string, required) — The file token (UUID) of the file to delete. Get via fetch_files.
    Connector
  • Install an app template on a VPS/Cloud site. Starts a background installation; poll get_app_status() for progress. Requires: API key with write scope. VPS or Cloud plan only.
    Args:
    - slug: Site identifier
    - template: App template slug. Available: django, laravel, nextjs, nodejs, nuxtjs, rails, static, forge
    - app_name: Short name for the app (2-50 chars, lowercase alphanumeric + hyphens). Used as subdomain: {app_name}.{site_domain}
    - db_type: Database type: "none", "mysql", or "postgresql" (depends on template)
    - domain: Custom domain override (default: {app_name}.{site_domain})
    - display_name: Human-friendly name (default: derived from app_name)
    Returns: {"id": "uuid", "app_name": "forge", "status": "installing", "message": "Installation started. Poll for progress."}
    Errors: FORBIDDEN — plan does not support apps (shared plans); VALIDATION_ERROR — invalid template, app_name, or duplicate name
    Connector
  • Import data into a Cloud SQL instance. If the file doesn't start with `gs://`, it is assumed to be stored locally. A local file must be uploaded to Cloud Storage before you can make the actual `import_data` call; use the `gcloud` or `gsutil` commands, and decide first whether to use an existing bucket or create a new bucket in the provided project. After the file is uploaded, the instance service account must have sufficient permissions to read it from the Cloud Storage bucket:
    1. Use the `get_instance` tool to get the email address of the instance service account: the value of the `serviceAccountEmailAddress` field in the tool's output.
    2. Grant the instance service account the `storage.objectAdmin` role on the provided Cloud Storage bucket, using a command like `gcloud storage buckets add-iam-policy-binding` or a request to the Cloud Storage API. It can take from two to seven minutes or more for the role to be granted and the permissions to propagate to the service account in Cloud Storage. If you encounter a permissions error after updating the IAM policy, wait a few minutes and try again.
    After permissions are granted, you can import the data. We recommend leaving optional parameters empty and using the system defaults. The file type can typically be determined by the file extension: `.sql` for a SQL file, `.csv` for a CSV file. The following is a sample SQL `importContext` for MySQL. There is no `database` parameter for MySQL since the database name is expected to be present in the SQL file. Specify only one URI. No other fields are required outside of `importContext`.
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL"
    }
    ```
    For PostgreSQL, the `database` field is required. The following is a sample PostgreSQL `importContext` with the `database` field specified:
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL",
      "database": "sample-db"
    }
    ```
    The `import_data` tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
    Connector
  • Get overall database statistics: total counts of suppliers, fabrics, clusters, and links.
    USE WHEN the user asks: "how big is your database" / "what's the coverage" / "data overview"; "how many suppliers / fabrics / clusters do you have"; "database size / scale / freshness"; "is the data up to date"; "live counts for MRC data"; first-time onboarding ("what can MRC data do for me"); or the equivalent questions in Chinese ("how big is the database / how much data / how many suppliers are covered", "your data scale / volume / freshness").
    WORKFLOW: Standalone discovery tool — call this first when a user asks about data scale or freshness. Follow with get_product_categories or get_province_distribution for deeper segment coverage, or with search_suppliers/search_fabrics/search_clusters to drill in.
    DIFFERENCE from the database-overview resource (mrc://overview): this tool is dynamic (live counts + generated_at); the resource is static (geographic scope, top provinces, data standards).
    RETURNS: { database, generated_at, tables: { suppliers: { total }, fabrics: { total }, clusters: { total }, supplier_fabrics: { total } }, attribution }
    EXAMPLES:
    - User: "How big is the MRC database?" → get_stats({})
    - User: "Give me the latest data scale numbers" → get_stats({})
    - User (Chinese): "How many suppliers and fabrics does the MRC database have?" → get_stats({})
    ERRORS & SELF-CORRECTION:
    - All counts 0 → database query failed or D1 binding lost. Retry once after 5 seconds; if still 0, surface a transport error to the user.
    - Rate limit 429 → wait 60 seconds; do not retry immediately.
    AVOID: Do not call this before every tool — only when the user explicitly asks about scale. Do not call it for per-category counts (use get_product_categories) or for geographic scope metadata (use the static database-overview resource, mrc://overview).
    NOTE: Only verified + partially_verified records are reported; unverified reserve data is excluded from counts. Source: MRC Data (meacheal.ai). (Chinese summary, translated: get overall database statistics — total suppliers, fabrics, industrial clusters, and link records — as a dynamic snapshot with a generation timestamp.)
    Connector

Matching MCP Servers

  • License: A · Quality: A · Maintenance: -
    Connect and interact with MySQL databases seamlessly. Execute SQL queries, manage database connections, and retrieve data directly through AI assistants. Enhance your AI capabilities with structured access to your MySQL data.

Matching MCP Connectors

  • Connect your AI to any database — PostgreSQL, MySQL, or SQL Server — in seconds.

  • Korean government open data - weather, population, law search via data.go.kr

  • Return the full dossier projection for a meeting reading, in the requested cognitive lens. Same lens enum and default as describe_place / describe_corridor — eight total projections (seven stakeholder lenses — developer, investor, broker, attorney, business, resident, civic-leader — plus synthesis as the default). Returns the lens-projected body, full frontmatter (jurisdiction, board, meeting_date, document_type, key_signals, vote tallies), citation-stable claims[] (per the Phase 11 Citable Contract; populates as meeting claim scopes graduate), four-clock freshness, and the structured record_status block (record_type / meeting_status / outcome_status / minutes_available / vote_final) — the last prevents agents from summarizing agenda intent as completed action. Use to ground citations in a specific meeting's reading; pair with list_meetings or meeting_index for discovery.
    Connector
  • Decode raw EVM revert data from a failed transaction or mezo_call on Mezo. Handles Error(string) reverts, Panic(uint256) assertions, custom Solidity errors (requires ABI), and silent reverts. Pure computation — no RPC call needed. Pass the hex revert data from a transaction receipt or eth_call error response.
    Connector
  • Create a database user for a Cloud SQL instance.
    * This tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
    * When you use the `create_user` tool, specify the type of user: `CLOUD_IAM_USER`, `CLOUD_IAM_SERVICE_ACCOUNT`, or `BUILT_IN`.
    * By default the newly created user is assigned the `cloudsqlsuperuser` role, unless you specify other database roles explicitly in the request.
    * You can use a newly created user with the `execute_sql` tool if the user is the currently logged-in IAM user. The `execute_sql` tool executes SQL statements using the privileges of the database user logged in via IAM database authentication.
    The `create_user` tool has the following limitations:
    * To create a built-in user with a password, use the `password_secret_version` field to provide the password via Google Cloud Secret Manager. The value of `password_secret_version` should be the resource name of the secret version, like `projects/12345/locations/us-central1/secrets/my-password-secret/versions/1` or `projects/12345/locations/us-central1/secrets/my-password-secret/versions/latest`. The caller needs the `secretmanager.secretVersions.access` permission on the secret version.
    * The `create_user` tool doesn't support creating a user for SQL Server.
    To create an IAM user in PostgreSQL: the database username must be the IAM user's email address, all lowercase. For example, to create a user for the PostgreSQL IAM user `example-user@example.com`, you can use the following request:
    ```
    {
      "name": "example-user@example.com",
      "type": "CLOUD_IAM_USER",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM user is `example-user@example.com`.
    To create an IAM service account in PostgreSQL: the database username must be created without the `.gserviceaccount.com` suffix, even though the full email address for the account is `service-account-name@project-id.iam.gserviceaccount.com`. For example:
    ```
    {
      "name": "test@test-project.iam",
      "type": "CLOUD_IAM_SERVICE_ACCOUNT",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM service account is `test@test-project.iam`.
    To create an IAM user or IAM service account in MySQL: when Cloud SQL for MySQL stores a username, it truncates the @ and the domain name from the user or service account's email address — `example-user@example.com` becomes `example-user`. For this reason, you can't add two IAM users or service accounts with the same username but different domain names to the same Cloud SQL instance. For example, to create a user for the MySQL IAM user `example-user@example.com`, use the following request:
    ```
    {
      "name": "example-user@example.com",
      "type": "CLOUD_IAM_USER",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM user is `example-user`. To create the MySQL IAM service account `service-account-name@project-id.iam.gserviceaccount.com`, use the following request:
    ```
    {
      "name": "service-account-name@project-id.iam.gserviceaccount.com",
      "type": "CLOUD_IAM_SERVICE_ACCOUNT",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM service account is `service-account-name`.
    Connector
  • Execute a read-only SQL query against the target connection. ONLY SELECT / WITH / EXPLAIN permitted. Write dialect-appropriate SQL for the connection's engine — PostgreSQL syntax for postgres connections (`SELECT NOW()`, `LIMIT`, `ILIKE`), T-SQL for mssql (`SELECT GETDATE()`, `TOP N`, `LIKE`), MySQL syntax for mysql (`SELECT NOW()`, `LIMIT`). Response meta includes `connection` + `dialect` so you know which syntax worked; reuse that dialect in follow-up calls. Default LIMIT is 100 unless the user asks for all rows. A per-dialect sketch follows this entry.
    Connector
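    To make the dialect guidance above concrete, a hedged sketch assuming a hypothetical `events` table: the same lookup written three ways, where only the row-cap and case-folding syntax changes.
    ```sql
    -- postgres: ILIKE for case-insensitive match, LIMIT for the row cap
    SELECT id, name, created_at FROM events
    WHERE name ILIKE '%deploy%' ORDER BY created_at DESC LIMIT 100;

    -- mysql: LIKE is case-insensitive under the default collations, LIMIT for the row cap
    SELECT id, name, created_at FROM events
    WHERE name LIKE '%deploy%' ORDER BY created_at DESC LIMIT 100;

    -- mssql (T-SQL): TOP instead of LIMIT
    SELECT TOP 100 id, name, created_at FROM events
    WHERE name LIKE '%deploy%' ORDER BY created_at DESC;
    ```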
  • Get plain-language explanations of active predictive signals. Each narrative explains the mechanism behind a signal — why the predictor leads the target, what economic logic connects them, and what the current reading implies. Designed for non-quantitative users who want to understand the 'why' behind each signal without reading F-statistics. Returns trigger context, predictor value, direction, and a narrative paragraph suitable for reports and briefings.
    Connector
  • Execute a SQL query on a site's database. Supports SELECT, INSERT, UPDATE, DELETE, and DDL statements. Results are limited to 1000 rows for SELECT queries. Requires: API key with write scope.
    Args:
    - slug: Site identifier
    - database: Database name
    - query: SQL query string
    Returns: {"columns": ["id", "title"], "rows": [[1, "Hello"], ...], "affected_rows": 0, "query_time_ms": 12}
    Connector
  • Build a Tableau dashboard from a MySQL table (end-to-end). Pipeline: MySQL → schema inference → chart suggestion → workbook creation → live MySQL connection → .twb output. Requires mysql-connector-python for schema inference. IMPORTANT FOR AI AGENTS: see ``csv_to_dashboard`` — auto-charts come from rules, not natural-language requests. Use ``required_charts`` to guarantee specific charts, ``reference_image`` for image-based styling, and cite the returned manifest dict when describing results.
    Args:
    - server_host: MySQL server hostname.
    - dbname: Database name.
    - table_name: Table to visualize.
    - username: Database username.
    - password: Database password (used for schema inference only; not stored in the workbook).
    - port: Server port (default 3306).
    - output_path: Output .twb path (defaults to <table>_dashboard.twb).
    - dashboard_title: Dashboard title.
    - max_charts: Maximum charts (0 = use rules default).
    - template_path: TWB template path.
    - theme: Theme preset name.
    - rules_yaml: Optional YAML string with dashboard rules overrides.
    - required_charts: See ``csv_to_dashboard.required_charts``.
    - reference_image: See ``csv_to_dashboard.reference_image``.
    Returns: Structured manifest dict describing what was actually built.
    Connector
  • Get WordPress database information (size, tables, row counts). Requires: API key with read scope. WordPress sites only.
    Args:
    - slug: Site identifier
    Returns: {"database": "wp_mysite", "size_mb": 45.2, "tables": 12, "total_rows": 15432}
    Connector
  • Execute raw, client-provided SQL queries against an ephemeral database initialized with the provided schema. Returns query results in a simple JSON format with column headers and row data as a 2D array. The database type (SQLite or Postgres) is specified via the databaseType parameter:
    - SQLITE: In-memory, lightweight, uses standard SQLite syntax
    - POSTGRES: Temporary isolated schema with dedicated user, uses PostgreSQL syntax and features
    WHEN TO USE: When you need to run your own hand-written SQL queries to test database behavior or compare the output with ExoQuery results from validateAndRunExoquery. This lets you verify that ExoQuery-generated SQL produces the same results as your expected SQL.
    INPUT REQUIREMENTS:
    - query: A valid SQL query (SELECT, INSERT, UPDATE, DELETE, etc.)
    - schema: SQL schema with CREATE TABLE and INSERT statements to initialize the test database
    - databaseType: Either "SQLITE" or "POSTGRES" (defaults to SQLITE if not specified)
    OUTPUT FORMAT: On success, returns JSON with the SQL query and a 2D array of results. The first array element contains column headers; subsequent elements contain row data; all values are returned as strings.
    ```
    {"sql":"SELECT * FROM users ORDER BY id","output":[["id","name","age"],["1","Alice","30"],["2","Bob","25"],["3","Charlie","35"]]}
    ```
    EXAMPLE ERROR OUTPUT (bad table name):
    ```
    {"error":"Query execution failed: no such table: invalid_table","sql":"SELECT * FROM invalid_table"}
    ```
    EXAMPLE ERROR OUTPUT (bad schema):
    ```
    {"error":"Database initialization failed due to: near \"CREAT\": syntax error\nWhen executing the following statement:\n--------\nCREAT TABLE users (id INTEGER)\n--------\nCheck that the initialization SQL is valid and compatible with SQLite.","sql":"CREAT TABLE users (id INTEGER)"}
    ```
    EXAMPLE INPUT:
    Query: SELECT * FROM users ORDER BY id
    Schema:
    ```
    CREATE TABLE users (
      id INTEGER PRIMARY KEY,
      name TEXT NOT NULL,
      age INTEGER
    );
    INSERT INTO users (id, name, age) VALUES (1, 'Alice', 30);
    INSERT INTO users (id, name, age) VALUES (2, 'Bob', 25);
    INSERT INTO users (id, name, age) VALUES (3, 'Charlie', 35);
    ```
    COMMON QUERY EXAMPLES:
    - Select all rows: SELECT * FROM users
    - Select specific columns with filtering: SELECT name, age FROM users WHERE age > 25
    - Aggregate functions: SELECT COUNT(*) as total FROM users
    - Join queries: SELECT u.name, o.total FROM users u JOIN orders o ON u.id = o.user_id
    - Insert data: INSERT INTO users (name, age) VALUES ('David', 40)
    - Update data: UPDATE users SET age = 31 WHERE name = 'Alice'
    - Delete data: DELETE FROM users WHERE age < 25
    - Count with grouping: SELECT age, COUNT(*) as count FROM users GROUP BY age
    SCHEMA RULES:
    - Use standard SQLite syntax
    - Table names are case-sensitive (use lowercase for simplicity or quote names)
    - Include INSERT statements to populate test data for meaningful results
    - Supported data types: INTEGER, TEXT, REAL, BLOB, NULL
    - Use INTEGER PRIMARY KEY for auto-increment columns
    - Schema SQL is split on semicolons (;), so each statement after a ';' is executed separately
    - Avoid semicolons in comments as they will cause statement parsing issues
    COMPARISON WITH EXOQUERY: This tool is designed to work alongside validateAndRunExoquery for comparison purposes:
    1. Use validateAndRunExoquery to run ExoQuery Kotlin code and see the generated SQL + results
    2. Use runRawSql with your own hand-written SQL to verify you get the same output
    3. Compare the outputs to ensure ExoQuery generates the SQL you expect
    4. Test edge cases with plain SQL before writing equivalent ExoQuery code
    Connector
  • Run a complete readability + structure analysis on a piece of writing in one call. Returns Flesch Reading Ease, Flesch–Kincaid Grade, Gunning Fog Index, SMOG, Coleman–Liau, and ARI in a single result, plus word/sentence/paragraph counts, average sentence length, complex-word percentage, reading time, target audience label, and human-readable warnings. Use this whenever an agent has just generated or edited prose and needs to check whether it lands at the right reading level. One call replaces 4–6 separate readability lookups.
    Connector
  • List every database connection registered for your tenant: name, id, dbType (postgres / mysql / mssql), createdAt. Flags duplicate names — only the first-added connection of a duplicate name is reachable by name. Returns nothing sensitive (no DSN, no credentials).
    Connector
  • Permanently delete a stored memory by its UUID. This is a hard delete for GDPR right-to-erasure compliance. The memory is removed from both the vector store and the database. This action cannot be undone.
    Connector
  • Get aggregate benchmark statistics from ClimateUX's database of 500+ audited websites. Includes average CO2 per page view, average sustainability score, green hosting rate. No API key required. Data source: ClimateUX (climateux.net).
    Connector