Glama
134,897 tools. Last updated 2026-05-14 16:19

"Querying data in MySQL" matching MCP tools:

  • Generate dialect-correct ALTER TABLE migration SQL + rollback from a plain-English intent. Output uses the connection's exact dialect (ALTER TABLE for all three, plus pg-specific `USING` casts / mssql-specific `sp_rename` / mysql-specific `MODIFY COLUMN`). Never executes. Check response `dialect` field before manually editing — don't hand-translate across dialects. [BUILD tier]
    Connector
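    A minimal sketch of what a call might look like; the tool and argument names (`generate_migration`, `connection`, `intent`) are assumptions, not taken from the listing:
    ```
    # Hypothetical request shape; names below are illustrative only.
    request = {
        "tool": "generate_migration",        # assumed tool name
        "arguments": {
            "connection": "prod-mysql-app",  # assumed connection id
            "intent": "widen orders.status from VARCHAR(10) to VARCHAR(32)",
        },
    }
    # Per the listing, the response carries a `dialect` field; on a MySQL
    # connection the generated SQL would use ALTER TABLE ... MODIFY COLUMN,
    # and nothing is ever executed.
    ```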
  • Create up to 50 tags in a project in one call. Returns per-item results (created / skipped). Duplicates are matched case-insensitively on name. Confirm with the user before calling — this mutates project data.
    Connector
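    A sketch of the batch semantics, assuming hypothetical tool and argument names (`create_tags`, `project_id`, `tags`):
    ```
    # Hypothetical request; the duplicate-handling behavior is from the listing.
    request = {
        "tool": "create_tags",               # assumed tool name
        "arguments": {
            "project_id": "proj-123",        # assumed argument name
            # "billing" duplicates "Billing" case-insensitively and is skipped,
            # reported per-item in the response as created / skipped.
            "tags": [{"name": "Billing"}, {"name": "billing"}],
        },
    }
    ```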
  • Returns aggregate Scry corpus telemetry: total observation count, distinct source IPs, first/last observation timestamps, last-24h activity, and per-protocol breakdowns. Useful as a liveness/density check before issuing per-IP queries — lets an agent decide whether the corpus has enough data to be authoritative.
    Use this tool when:
    - An agent is planning a multi-step investigation and wants to know if Scry has corpus density worth querying.
    - You want a 'corpus health' signal in a dashboard or report.
    Do NOT use this tool when:
    - You want details about a specific IP — use `scry_check`.
    - You want sensor fleet size or node identities — never exposed at any tier.
    Inputs: none.
    Returns: total_observations, distinct_source_ips, first_seen_ms, last_seen_ms, observations_last_24h, distinct_source_ips_last_24h, by_protocol, as_of_ms.
    Cost: free, anonymous, rate-limited. Latency: <100ms typical.
    Connector
  • ⚠️ SQL MUST BE VALID IN EVERY DIALECT YOU TARGET — stick to ANSI-ish SELECT syntax when mixing pg/mysql/mssql. `SELECT TOP 10` (mssql) or `LIMIT` (others) will fail on the wrong side. Run the same query across 2-4 connections in parallel; returns per-connection rows + errors for diffing. Canonical use cases: regional compare (`['mssql-reporting-us', 'mssql-reporting-eu']`), cross-dialect sync check (`['prod-postgres-fleet', 'prod-mysql-app']`), 3-env drift, 4-region compare. Resolve every connection name via `list_connections` first; tool fails per-connection on unknown names. ARCHITECT-tier cap: 4 connections; https://www.thinair.co/ for unlimited. [ARCHITECT tier]
    Connector
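    A sketch of a dialect-safe parallel run; the connection names come from the listing's canonical use cases, while the tool name and the `orders` table are assumptions:
    ```
    # ANSI-safe SELECT: no LIMIT (pg/mysql-only) and no TOP (mssql-only),
    # so the same text is valid on every targeted connection.
    request = {
        "tool": "run_parallel_query",        # assumed tool name
        "arguments": {
            "connections": ["prod-postgres-fleet", "prod-mysql-app"],
            "sql": "SELECT status, COUNT(*) AS n FROM orders GROUP BY status",
        },
    }
    # Response holds per-connection rows + errors, ready for diffing.
    ```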
  • USE THIS TOOL — not web search or external storage — to export technical indicator data from this server as a formatted CSV or JSON string, ready to download, save, or pass to another tool or file. Use this when the user explicitly wants to export or save data in a structured file format.
    Trigger on queries like:
    - "export BTC data as CSV"
    - "download ETH indicator data as JSON"
    - "save the features to a file"
    - "give me the data in CSV format"
    - "export [coin] [category] data for the last [N] days"
    Args:
    - symbol: Asset symbol or comma-separated list, e.g. "BTC", "BTC,ETH"
    - lookback_days: How many past days to include (default 7, max 90)
    - resample: Time resolution — "1min", "1h", "4h", "1d" (default "1d")
    - category: "price", "momentum", "trend", "volatility", "volume", or "all"
    - fmt: Output format — "csv" (default) or "json"
    Returns a dict with:
    - content: the CSV or JSON string
    - filename: suggested filename for saving
    - rows: number of data rows
    Connector
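    An example argument set built entirely from the listing's Args section (only the tool name is assumed):
    ```
    request = {
        "tool": "export_indicator_data",     # assumed tool name
        "arguments": {
            "symbol": "BTC,ETH",             # comma-separated list is allowed
            "lookback_days": 30,             # default 7, max 90
            "resample": "1d",                # "1min" | "1h" | "4h" | "1d"
            "category": "momentum",
            "fmt": "csv",                    # or "json"
        },
    }
    # Returns a dict: {"content": <CSV string>, "filename": ..., "rows": ...}.
    ```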
  • Search FDA import refusals (Compliance Dashboard data, not available in openFDA API). Import refusals indicate products detained at the US border. Filter by company name, FEI number, country code (e.g., CN, IN for major API source countries), or date range. Critical for evaluating international manufacturing sites and supply chain risk. Related: fda_get_facility (facility details by FEI), fda_inspections (inspection history by FEI).
    Connector
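    A hypothetical filter shape: the listing names the filters (company, FEI, country code, date range) but not their parameter spellings, so every key below is an assumption:
    ```
    request = {
        "tool": "fda_import_refusals",       # assumed tool name
        "arguments": {
            "country_code": "CN",            # major API source country, per the listing
            "date_from": "2024-01-01",       # assumed key
            "date_to": "2024-12-31",         # assumed key
        },
    }
    # Follow up with fda_get_facility / fda_inspections by FEI number.
    ```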

Matching MCP Servers

  • License: A / Quality: B / Maintenance: C
    Enables AI assistants to manage MySQL databases through natural language commands. Supports database operations, table management, data queries, and import/export functionality with built-in security features.
    Last updated
    15 / 10 / 1 / MIT

Matching MCP Connectors

  • Korean government open data - weather, population, law search via data.go.kr

  • Connect your AI to any database — PostgreSQL, MySQL, or SQL Server — in seconds.

  • Import data into a Cloud SQL instance. If the file doesn't start with `gs://`, the assumption is that the file is stored locally. A local file must be uploaded to Cloud Storage before you can make the actual `import_data` call; use the `gcloud` or `gsutil` commands. Before you upload, consider whether you want to use an existing bucket or create a new bucket in the provided project.
    After the file is uploaded to Cloud Storage, the instance service account must have sufficient permissions to read the uploaded file from the Cloud Storage bucket. This can be accomplished as follows:
    1. Use the `get_instance` tool to get the email address of the instance service account: the value of the `serviceAccountEmailAddress` field in the tool output.
    2. Grant the instance service account the `storage.objectAdmin` role on the provided Cloud Storage bucket. Use a command like `gcloud storage buckets add-iam-policy-binding` or a request to the Cloud Storage API. It can take from two to seven minutes or more for the role to be granted and the permissions to be propagated to the service account in Cloud Storage. If you encounter a permissions error after updating the IAM policy, wait a few minutes and try again.
    After permissions are granted, you can import the data. We recommend that you leave optional parameters empty and use the system defaults. The file type can typically be determined by the file extension: `.sql` for a SQL file, `.csv` for a CSV file.
    The following is a sample SQL `importContext` for MySQL. There is no `database` parameter for MySQL since the database name is expected to be present in the SQL file. Specify only one URI. No other fields are required outside of `importContext`.
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL"
    }
    ```
    For PostgreSQL, the `database` field is required. The following is a sample PostgreSQL `importContext` with the `database` field specified.
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL",
      "database": "sample-db"
    }
    ```
    The `import_data` tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
    Connector
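    A compact sketch of the documented workflow; the request envelope around `importContext` is an assumption, and the bucket/instance names are placeholders:
    ```
    # 1. Upload the local file:
    #      gsutil cp sample-file.sql gs://sample-gcs-bucket/
    # 2. Grant the instance service account read access:
    #      gcloud storage buckets add-iam-policy-binding gs://sample-gcs-bucket \
    #        --member=serviceAccount:<serviceAccountEmailAddress> \
    #        --role=roles/storage.objectAdmin
    # 3. Import (MySQL: no "database" field, per the listing):
    request = {
        "project": "test-project",           # assumed field placement
        "instance": "test-instance",         # assumed field placement
        "importContext": {
            "uri": "gs://sample-gcs-bucket/sample-file.sql",
            "kind": "sql#importContext",
            "fileType": "SQL",
        },
    }
    # 4. Poll the returned long-running operation via `get_operation`.
    ```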
  • Returns the Parquet schema for all tables in the Valuein SEC data warehouse. Includes table descriptions, column names, types, primary keys, and foreign-key references. Use this tool to understand the data model before querying with other tools. No data reads required — schema is embedded in the manifest. Available on all plans.
    Connector
  • Fetch the next page of a large tool response. Use the nextCursor from _pagination in a previous response. This tool loads data into the context window — prefer the artifact download URL when available.
    Connector
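    A sketch of the cursor handoff; `_pagination` and `nextCursor` come from the listing, while the tool and argument names are assumed:
    ```
    previous_response = {"rows": ["..."], "_pagination": {"nextCursor": "abc123"}}
    request = {
        "tool": "fetch_next_page",           # assumed tool name
        "arguments": {
            "cursor": previous_response["_pagination"]["nextCursor"],  # assumed key
        },
    }
    # Prefer the artifact download URL when one is offered; this call loads
    # the page straight into the context window.
    ```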
  • Get time series observations (data points) for a FRED series. Returns the actual data values for an economic indicator over time. Use search_series first to find the series_id, or use well-known IDs like UNRATE, GDP, CPIAUCSL, FEDFUNDS, MORTGAGE30US. For state unemployment, use state abbreviation + 'UR' (e.g. WAUR for Washington, CAUR for California). Results are sorted most-recent-first. For long series (e.g. daily data since 1954), use start_date/end_date to narrow the window or increase the limit up to 10000.
    Args:
    - series_id: FRED series identifier (e.g. 'UNRATE', 'GDP', 'CPIAUCSL').
    - start_date: Optional start date in YYYY-MM-DD format (e.g. '2020-01-01').
    - end_date: Optional end date in YYYY-MM-DD format (e.g. '2024-12-31').
    - limit: Maximum observations to return (default 1000, max 10000).
    Connector
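    An example call built from the listing's Args section (only the tool name is assumed):
    ```
    request = {
        "tool": "get_observations",          # assumed tool name
        "arguments": {
            "series_id": "WAUR",             # state abbreviation + 'UR': Washington unemployment
            "start_date": "2020-01-01",
            "end_date": "2024-12-31",
            "limit": 1000,                   # default 1000, max 10000
        },
    }
    # Observations come back most-recent-first.
    ```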
  • List the current version, release date, publisher, source URL, and update cadence of every terminology this server queries against. Useful for pipeline maintainers who need to:
    - Confirm which release of ICD-11 / SNOMED / LOINC / RxNorm / MeSH / ATC the server is querying before a batch run.
    - Verify the bundled CID-10 (frozen at V2008) and ICD-10 → ICD-11 transition tables (currently 2025-01) match expectations.
    - Cite the data version in research artifacts.
    Pass `terminology` to filter to a single entry; otherwise the full set of 8 is returned. The ICD-10 → ICD-11 version reads live from the bundled dataset; everything else is metadata maintained alongside the project release.
    Connector
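    The only documented input is `terminology`; a minimal sketch (tool name and value spelling assumed):
    ```
    request = {
        "tool": "list_terminology_versions", # assumed tool name
        "arguments": {"terminology": "ICD-11"},  # value spelling assumed
    }
    # Omit `terminology` to receive the full set of 8 entries.
    ```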
  • Get summary statistics of the Klever VM knowledge base. Returns total entry count, counts broken down by context type (code_example, best_practice, security_tip, etc.), and a sample entry title for each type. Useful for understanding what knowledge is available before querying.
    Connector
  • Full data pull for a UK property in one call. Returns sale history, area comps, EPC rating, rental market listings, current sales market listings, rental yield calculation, and price range from area median. Requires a street address + postcode for subject property identification. Postcode-only (e.g. "NG1 2NS") returns area-level data without a subject property — use property_comps or property_yield for postcode-only queries.
    Connector
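    A sketch of the two documented input modes; the parameter names are assumptions, the postcode comes from the listing:
    ```
    # Full pull: street address + postcode identifies the subject property.
    full_pull = {"address": "12 Example Street", "postcode": "NG1 2NS"}  # assumed keys
    # Postcode-only: area-level data, no subject property; the listing
    # recommends property_comps or property_yield for this mode instead.
    area_only = {"postcode": "NG1 2NS"}
    ```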
  • List all shipping lines in the ShippingRates database with per-country record counts. Use this to discover which carriers and countries have data before querying specific tools. Returns each carrier's name, slug, SCAC code, and a breakdown of available D&D tariff and local charge records per country. FREE — no payment required.
    Returns: Array of { line, slug, scac, countries: [{ code, name, dd_records, lc_records }] }
    Related tools: Use shippingrates_stats for aggregate totals, shippingrates_search for keyword-based discovery.
    Connector
  • Step 2 — List data sources available within a tenant. (In the Indicate system a data source is called a 'data product'.) Examples: Google Analytics, Facebook Ads, vioma, Booking.com. Returns each data source's 'id', 'displayName', and 'semantic_context_id'. → Pass the chosen 'id' as 'data_source_id' and 'semantic_context_id' to list_metrics.
    Connector
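    A sketch of the documented chaining; the field names (`id`, `displayName`, `semantic_context_id`) and the `list_metrics` argument names come from the listing, while the values are placeholders:
    ```
    data_source = {                          # one item from this tool's response
        "id": "ds-42",
        "displayName": "Google Analytics",
        "semantic_context_id": "ctx-7",
    }
    next_call = {
        "tool": "list_metrics",
        "arguments": {
            "data_source_id": data_source["id"],
            "semantic_context_id": data_source["semantic_context_id"],
        },
    }
    ```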
  • Execute a read-only SQL query against the target connection. ONLY SELECT / WITH / EXPLAIN permitted. Write dialect-appropriate SQL for the connection's engine — use PostgreSQL syntax for postgres connections (`SELECT NOW()`, `LIMIT`, `ILIKE`), T-SQL for mssql (`SELECT GETDATE()`, `TOP N`, `LIKE`), MySQL for mysql (`SELECT NOW()`, `LIMIT`). Response meta includes `connection` + `dialect` so you know which syntax worked; reuse that dialect in follow-up calls. Default LIMIT 100 unless the user asks for all rows.
    Connector
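    The same intent written per dialect, following the listing's syntax guidance; the `users` table is a placeholder:
    ```
    queries = {
        "postgres": "SELECT NOW() AS ts, name FROM users WHERE name ILIKE 'a%' LIMIT 100",
        "mssql":    "SELECT TOP 100 GETDATE() AS ts, name FROM users WHERE name LIKE 'a%'",
        "mysql":    "SELECT NOW() AS ts, name FROM users WHERE name LIKE 'a%' LIMIT 100",
    }
    # Response meta echoes `connection` + `dialect`; reuse that dialect in
    # follow-up calls. Only SELECT / WITH / EXPLAIN statements are accepted.
    ```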
  • Get unemployment rate time series from BLS LAUS data. Returns monthly unemployment rates for a state or county. Data is returned in chronological order with year, period, and percentage value.
    Args:
    - state: Two-letter US state abbreviation (e.g. 'WA', 'CA', 'NY').
    - county_fips: Optional 3-digit county FIPS code (e.g. '033' for King County). If provided, returns county-level data; otherwise state-level.
    - start_year: Start year for data (default 2020; must be a 4-digit year).
    - end_year: End year for data (default 2025).
    Connector
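    Example arguments taken from the listing's Args section; only the tool name is assumed:
    ```
    request = {
        "tool": "get_unemployment_rate",     # assumed tool name
        "arguments": {
            "state": "WA",
            "county_fips": "033",            # King County; omit for state-level data
            "start_year": 2020,
            "end_year": 2025,
        },
    }
    ```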
  • Build a Tableau dashboard from a MySQL table (end-to-end). Pipeline: MySQL → schema inference → chart suggestion → workbook creation → live MySQL connection → .twb output. Requires mysql-connector-python for schema inference. IMPORTANT FOR AI AGENTS: see ``csv_to_dashboard`` — auto-charts come from rules, not natural-language requests. Use ``required_charts`` to guarantee specific charts, ``reference_image`` for image-based styling, and cite the returned manifest dict when describing results.
    Args:
    - server_host: MySQL server hostname.
    - dbname: Database name.
    - table_name: Table to visualize.
    - username: Database username.
    - password: Database password (used for schema inference only; not stored in the workbook).
    - port: Server port (default 3306).
    - output_path: Output .twb path (defaults to <table>_dashboard.twb).
    - dashboard_title: Dashboard title.
    - max_charts: Maximum charts (0 = use rules default).
    - template_path: TWB template path.
    - theme: Theme preset name.
    - rules_yaml: Optional YAML string with dashboard rules overrides.
    - required_charts: See ``csv_to_dashboard.required_charts``.
    - reference_image: See ``csv_to_dashboard.reference_image``.
    Returns: Structured manifest dict describing what was actually built.
    Connector
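    An example argument set drawn from the listing's Args section; the tool name and all values are placeholders:
    ```
    request = {
        "tool": "mysql_to_dashboard",        # assumed tool name
        "arguments": {
            "server_host": "db.example.com",
            "dbname": "sales",
            "table_name": "orders",
            "username": "readonly",
            "password": "********",          # schema inference only; not stored in the .twb
            "port": 3306,
            "dashboard_title": "Orders Overview",
        },
    }
    # Cite the returned manifest dict when describing what was built.
    ```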
  • Create a database user for a Cloud SQL instance.
    - This tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
    - When you use the `create_user` tool, specify the type of user: `CLOUD_IAM_USER`, `CLOUD_IAM_SERVICE_ACCOUNT`, or `BUILT_IN`.
    - By default the newly created user is assigned the `cloudsqlsuperuser` role, unless you specify other database roles explicitly in the request.
    - You can use a newly created user with the `execute_sql` tool if the user is a currently logged-in IAM user. The `execute_sql` tool executes SQL statements using the privileges of the database user logged in via IAM database authentication.
    The `create_user` tool has the following limitations:
    - To create a built-in user with a password, use the `password_secret_version` field to provide the password via Google Cloud Secret Manager. The value of `password_secret_version` should be the resource name of the secret version, like `projects/12345/locations/us-central1/secrets/my-password-secret/versions/1` or `projects/12345/locations/us-central1/secrets/my-password-secret/versions/latest`. The caller needs the `secretmanager.secretVersions.access` permission on the secret version.
    - The `create_user` tool doesn't support creating a user for SQL Server.
    To create an IAM user in PostgreSQL:
    - The database username must be the IAM user's email address, all lowercase. For example, to create a user for the PostgreSQL IAM user `example-user@example.com`, use the following request:
    ```
    {
      "name": "example-user@example.com",
      "type": "CLOUD_IAM_USER",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM user is `example-user@example.com`.
    To create an IAM service account in PostgreSQL:
    - The database username must be created without the `.gserviceaccount.com` suffix, even though the full email address for the account is `service-account-name@project-id.iam.gserviceaccount.com`. For example, to create an IAM service account for PostgreSQL, use the following request format:
    ```
    {
      "name": "test@test-project.iam",
      "type": "CLOUD_IAM_SERVICE_ACCOUNT",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM service account is `test@test-project.iam`.
    To create an IAM user or IAM service account in MySQL:
    - When Cloud SQL for MySQL stores a username, it truncates the @ and the domain name from the user or service account's email address. For example, `example-user@example.com` becomes `example-user`.
    - For this reason, you can't add two IAM users or service accounts with the same username but different domain names to the same Cloud SQL instance.
    - For example, to create a user for the MySQL IAM user `example-user@example.com`, use the following request:
    ```
    {
      "name": "example-user@example.com",
      "type": "CLOUD_IAM_USER",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM user is `example-user`.
    - To create the MySQL IAM service account `service-account-name@project-id.iam.gserviceaccount.com`, use the following request:
    ```
    {
      "name": "service-account-name@project-id.iam.gserviceaccount.com",
      "type": "CLOUD_IAM_SERVICE_ACCOUNT",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM service account is `service-account-name`.
    Connector
  • Enrich Indicator of Compromise (IP/domain/URL/hash) by auto-detecting type and querying abuse.ch feeds. Per-type source coverage: hash → ThreatFox only (Feodo and URLhaus do not index hashes); IP → ThreatFox + Feodo Tracker + URLhaus; domain / URL → ThreatFox + URLhaus. verdict.sources_queried lists what actually ran; verdict.sources_unavailable lists what failed (timeout / upstream error). Use as primary IOC triage tool when type unknown; use threat_intel for domain-only, hash_lookup for richer MalwareBazaar hash data. Free: 30/hr, Pro: 500/hr. Returns {indicator, type, threat_level, sources, summary, verdict}.
    Connector
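    A minimal sketch; the tool and argument names are assumptions, and the IP is a documentation-range placeholder:
    ```
    request = {
        "tool": "enrich_ioc",                # assumed tool name
        "arguments": {"indicator": "198.51.100.7"},  # type auto-detected as IP
    }
    # An IP fans out to ThreatFox + Feodo Tracker + URLhaus; check
    # verdict.sources_queried / verdict.sources_unavailable in the response.
    ```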