Glama
127,390 tools. Last updated 2026-05-05 15:15

"A tool that can connect to MySQL database, understand table structure and query related data" matching MCP tools:

  • Submit a list of URLs to be checked. Returns a job_id that can be polled via get_job_status or fetched via get_job_results. For up to ~200 URLs this tool waits for completion (up to 60 seconds) and returns the results directly; for larger jobs it returns early with job_id and the agent should poll.
    Connector
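
A minimal sketch of the submit-then-poll flow this entry describes, assuming a hypothetical `call_tool` transport and a hypothetical `submit_urls` entry point; only `get_job_status` and `get_job_results` are named in the listing, and the `status`/`results` keys are assumptions.

```python
import time

def call_tool(name, args):  # hypothetical MCP transport stub
    raise NotImplementedError  # wire this to a real MCP client session

def check_urls(urls):
    # Jobs up to ~200 URLs block for up to 60 seconds and return results directly.
    job = call_tool("submit_urls", {"urls": urls})  # tool name is an assumption
    if "results" in job:
        return job["results"]
    # Larger jobs return early with a job_id that the agent should poll.
    while call_tool("get_job_status", {"job_id": job["job_id"]})["status"] != "done":
        time.sleep(5)
    return call_tool("get_job_results", {"job_id": job["job_id"]})
```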
  • Generate dialect-correct ALTER TABLE migration SQL + rollback from a plain-English intent. Output uses the connection's exact dialect (ALTER TABLE for all three, plus pg-specific `USING` casts / mssql-specific `sp_rename` / mysql-specific `MODIFY COLUMN`). Never executes. Check response `dialect` field before manually editing — don't hand-translate across dialects. [BUILD tier]
    Connector
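
A sketch of how a response from this generator might be consumed, keyed off the `dialect` field as the entry advises. The response field names beyond `dialect` are assumptions, and the sample SQL strings only illustrate the MySQL-specific `MODIFY COLUMN` form the entry mentions.

```python
def apply_migration_plan(response):
    # Check the dialect before touching the SQL; never hand-translate across dialects.
    dialect = response["dialect"]
    if dialect not in {"mysql", "postgres", "mssql"}:
        raise ValueError(f"unexpected dialect: {dialect}")
    # The tool never executes; it returns forward + rollback SQL for review.
    return response["migration_sql"], response["rollback_sql"]  # field names assumed

# Illustrative MySQL-shaped response (MODIFY COLUMN is the MySQL-specific form):
forward, rollback = apply_migration_plan({
    "dialect": "mysql",
    "migration_sql": "ALTER TABLE users MODIFY COLUMN age INT NOT NULL",
    "rollback_sql": "ALTER TABLE users MODIFY COLUMN age INT NULL",
})
```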
  • Execute a SQL query on Baselight and wait for results (up to 1 minute). The query executes and returns the first 100 rows upon completion, or info about a pending query that needs more time. Use DuckDB syntax only, table format "@username.dataset.table" (double-quoted), SELECT queries only (no DDL/DML), no semicolon terminators, use LIMIT not TOP. If query is still PENDING, use `sdk-get-results` to continue polling. If totalResults > returned rows, use `sdk-get-results` with offset to paginate.
    Connector
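
A sketch that builds a query conforming to the constraints listed above: DuckDB syntax, double-quoted `@username.dataset.table` references, SELECT only, `LIMIT` rather than `TOP`, and no semicolon terminator. The helper itself is an assumption.

```python
def baselight_select(username, dataset, table, limit=100):
    # Table references use the "@username.dataset.table" format, double-quoted.
    ref = f'"@{username}.{dataset}.{table}"'
    # SELECT only (no DDL/DML), LIMIT not TOP, and no trailing semicolon.
    return f"SELECT * FROM {ref} LIMIT {limit}"

sql = baselight_select("acme", "sales", "orders")
# -> SELECT * FROM "@acme.sales.orders" LIMIT 100
# If the response is still PENDING, or totalResults exceeds the returned rows,
# follow up with sdk-get-results (with an offset to paginate).
```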
  • Revoke the current internal API key immediately. Side effect: existing internal-tool access that depends on that key can stop working. Requires a valid signature session and `mcp-session-id`. Use when rotating credentials or responding to key exposure; call `tronsave_generate_api_key` afterwards if continued internal access is needed. Operation is effectively destructive for the old key and may fail for unauthorized sessions.
    Connector
  • Lists pre-configured reports (prebuilds) available for a connector.
    **What is a prebuild?** A prebuild is a standardized report maintained by Quanti for a given connector (e.g., Campaign Stats for Google Ads). It defines the BigQuery table structure (columns, types, metrics) and the associated API query.
    **When to use this tool:**
    - When the user asks "what reports are available for [connector]?"
    - When the user doesn't know which data or metrics exist for a connector
    - BEFORE get_schema_context, to explore available reports for a connector
    - To understand the data structure before writing SQL
    **Difference with get_schema_context:**
    - list_prebuilds → discover which reports/tables EXIST for a connector (catalog)
    - get_schema_context → get the actual BigQuery schema for the client project (effective data)
    **Response format:** Returns JSON with, for each prebuild: its ID, name, description, BigQuery table name, and the list of fields (name, type, description, is_metric). Fields marked is_metric=true are aggregatable metrics (impressions, clicks, cost...); the others are dimensions (date, campaign_name...).
    **SKU examples:** googleads, meta, tiktok, tiktok-organic, amazon-ads, amazon-dsp, piano, shopify-v2, microsoftads, prestashop-api, mailchimp, kwanko
    Connector
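
A sketch of the call order this entry prescribes: `list_prebuilds` to discover the catalog, then `get_schema_context` for the effective schema. The transport stub and the exact argument/response key names are assumptions; the field attributes (name, is_metric) come from the listing.

```python
def call_tool(name, args):  # hypothetical MCP transport stub
    raise NotImplementedError  # wire this to a real MCP client session

def explore_connector(sku):
    # Catalog first: which prebuild reports exist for this connector?
    catalog = call_tool("list_prebuilds", {"connector": sku})  # arg name assumed
    for prebuild in catalog["prebuilds"]:  # response key assumed
        # is_metric=true fields are aggregatable metrics; the rest are dimensions.
        metrics = [f["name"] for f in prebuild["fields"] if f["is_metric"]]
        print(prebuild["id"], prebuild["name"], metrics)
    # Only then fetch the actual BigQuery schema for the client project.
    return call_tool("get_schema_context", {"connector": sku})

# explore_connector("googleads")  # SKU from the examples above
```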
  • Validate structured data and automatically compute repairs if it fails. A single call that combines validate + repair.
    - If PASS: returns the validated data with a determinism hash.
    - If FAIL: returns the failure details AND a repaired payload with field-by-field corrections and confidence scores. The agent can inspect the repairs and resubmit the corrected data.
    - If REVIEW: returns the flagged data with review reasoning.
    This is the recommended starting point for most agent integrations.
    Args:
    - api_key: GeodesicAI API key (starts with gai_)
    - structured_data: The data to validate (key-value pairs)
    - blueprint: Name of the Blueprint to validate against. Use list_blueprints to see options.
    Connector
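
A sketch of handling the three outcomes (PASS / FAIL / REVIEW) this entry describes. The tool name `validate_and_repair` and the response key names are assumptions; the three argument names come from the listing.

```python
def call_tool(name, args):  # hypothetical MCP transport stub
    raise NotImplementedError  # wire this to a real MCP client session

def validate_with_repair(api_key, structured_data, blueprint):
    args = {"api_key": api_key, "structured_data": structured_data,
            "blueprint": blueprint}
    result = call_tool("validate_and_repair", args)  # tool name assumed
    if result["status"] == "PASS":
        return result["validated_data"]  # includes a determinism hash
    if result["status"] == "FAIL":
        # Inspect field-by-field corrections + confidence, then resubmit once.
        args["structured_data"] = result["repaired_payload"]
        return call_tool("validate_and_repair", args)
    # REVIEW: flagged data with reasoning; surface it, don't auto-accept.
    raise RuntimeError(result["review_reasoning"])
```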

Matching MCP Servers

  • A Model Context Protocol server that provides read-only MySQL database queries for AI assistants, allowing them to execute queries, explore database structures, and investigate data directly from AI-powered tools.
    License: MIT (grade A) · Quality: B · Maintenance: D
  • A versatile tool that enables querying and exporting data from multiple relational databases (MySQL, PostgreSQL, Oracle, SQLite, etc.) in read-only mode for data safety.
    License: Apache 2.0 (grade A) · Quality: not rated · Maintenance: C

Matching MCP Connectors

  • send-that-email MCP — wraps StupidAPIs (requires X-API-Key)

  • Free dofollow backlinks for Canadian businesses. Claim, verify, and track NFC tap analytics.

  • General search tool. This is your FIRST entry point to look up possible tokens, entities, and addresses related to a query. Do NOT use this tool for prediction markets. For Polymarket names, topics, event slugs, or URLs, use `prediction_market_lookup` instead. Nansen MCP does not support NFTs; however, check using this tool if the query relates to a token, since regular tokens and NFTs can have the same name.
    This tool allows you to:
    - Check if a (fungible) token exists by name, symbol, or contract address
    - Search information about a token: current price in USD, trading volume, contract address and chain information, market cap and supply data when available
    - Search information about an entity
    - Find Nansen labels of an address (EOA) or resolve a domain (.eth, .sol)
    Args:
    - query: The search term: token symbol, name, or address. DO NOT include the chain name here!
    - result_type: Type filter: "token", "entity", "eoa", or "any"
    - max_results: Maximum number of results (default: 25, max: 25)
    - chain: Optional chain filter to narrow token results to a specific blockchain. If not further specified, leave it as None. If a chain is specified, ALWAYS use this parameter instead of adding the chain name to the query string. Valid values: "ethereum", "solana", "base", "bnb", "polygon", "arbitrum", "avalanche", "optimism", etc.
    How to choose result_type:
    - token: Use when searching for a token by name, symbol, or contract address. **CRITICAL**: Use the `chain` parameter to filter by blockchain, NOT the query string! ✅ CORRECT: query="AAVE", chain="base" ❌ WRONG: query="AAVE base", chain=None
    - entity: Use when you want entity info by name/label (exchanges, funds, etc.)
    - eoa: Use when you have an address and need its labels, or you have an ENS/SNS domain and need the resolved address
    - any: Mixed results (tokens/entities). Also auto-resolves ENS/SNS domains and, if token/entity results are empty and the query looks like an address, falls back to EOA labels.
    Important:
    - This is the only tool that can resolve domains. If you start from a domain, pass the resolved address to other tools.
    - **DOMAINS**: Strings ending in `.eth` (ENS) or `.sol` (SNS) are DOMAIN NAMES, not tokens. Use result_type="eoa" or "any" to resolve them. Examples: "vitalik.eth", "abracadabra.sol", "y22.eth" are domains that resolve to addresses.
    - **DO NOT** assume that a token is an NFT; always verify the name by using this tool first.
    - **DO NOT** add the keyword `token` or chain names to the query string unless they are explicitly part of the token name or symbol!
    - **Focus** on **popular chains** like ethereum, solana, base, and bnb when no chain is specified and the same token is deployed on multiple chains.
    - **If a chain is specified**, use the `chain` parameter to filter tokens by blockchain instead of including the chain in the query.
    - **DO NOT** rely on this endpoint for LATEST prices, as this data is delayed. Use `token_ohlcv` for latest prices.
    Connector
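
A sketch of the argument discipline stressed above, assuming a hypothetical `call_tool` transport and search tool name; the parameter names and the AAVE / vitalik.eth examples are taken from the listing.

```python
def call_tool(name, args):
    """Hypothetical MCP transport stub; wire to a real client session."""
    return {}  # placeholder result

# CORRECT: the chain goes in the `chain` parameter, never the query string.
aave_on_base = call_tool("search", {  # tool name assumed
    "query": "AAVE", "result_type": "token", "chain": "base"})
# WRONG (per the listing): {"query": "AAVE base", "chain": None}

# Strings ending in .eth / .sol are domains, not tokens: resolve them as EOAs.
resolved = call_tool("search", {"query": "vitalik.eth", "result_type": "eoa"})
# Pass the resolved address (not the domain) to other tools.
```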
  • Connect to the user's catalogue using a pairing code. IMPORTANT: Most users connect via OAuth (sign-in popup) — if get_profile already works, the user is connected and you do NOT need this tool. Only use this tool when: (1) get_profile returns an authentication error, AND (2) the user shares a code matching the pattern WORD-1234 (e.g., TULIP-3657). Never proactively ask for a pairing code — try get_profile first. If the user does share a code, call this tool immediately without asking for confirmation. Never say "pairing code" to the user — just say "your code" or refer to it naturally.
    Connector
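
A sketch of the gating this entry describes: try `get_profile` first, and only pair when it fails with an authentication error and the user has already volunteered a WORD-1234 code. The regex, the exception type, and the pairing tool name are assumptions.

```python
import re

def call_tool(name, args):  # hypothetical MCP transport stub
    raise NotImplementedError  # wire this to a real MCP client session

CODE_PATTERN = re.compile(r"\b[A-Z]+-\d{4}\b")  # matches e.g. TULIP-3657

def ensure_connected(user_message):
    try:
        call_tool("get_profile", {})
        return True  # OAuth session already works; pairing is unnecessary
    except PermissionError:  # stand-in for the tool's authentication error
        match = CODE_PATTERN.search(user_message)
        if match is None:
            return False  # never proactively ask the user for a code
        call_tool("pair_with_code", {"code": match.group()})  # tool name assumed
        return True
```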
  • Returns available evaluation tools, what they check, and their pricing. Call this first to understand what Axcess can evaluate and how much each evaluation costs. This tool is FREE. All evaluation tools require USDC payment on Base network. Returns: JSON with tool descriptions, pricing, and rubric categories.
    Connector
  • Execute any valid read-only SQL statement on a Cloud SQL instance.
    To support the `execute_sql_readonly` tool, a Cloud SQL instance must meet the following requirements:
    - The value of `data_api_access` must be set to `ALLOW_DATA_API`.
    - For a MySQL instance, the database flag `cloudsql_iam_authentication` must be set to `on`. For a PostgreSQL instance, the database flag `cloudsql.iam_authentication` must be set to `on`.
    - An IAM user account or IAM service account (`CLOUD_IAM_USER` or `CLOUD_IAM_SERVICE_ACCOUNT`) is required to call the `execute_sql_readonly` tool. The tool executes the SQL statements using the privileges of the database user logged in with IAM database authentication.
    After you use the `create_instance` tool to create an instance, you can use the `create_user` tool to create an IAM user account for the user currently logged in to the project.
    The `execute_sql_readonly` tool has the following limitations:
    - If a SQL statement returns a response larger than 10 MB, the response is truncated.
    - The tool has a default timeout of 30 seconds. If a query runs longer than 30 seconds, the tool returns a `DEADLINE_EXCEEDED` error.
    - The tool isn't supported for SQL Server.
    If you receive errors similar to "IAM authentication is not enabled for the instance", you can use the `get_instance` tool to check the value of the IAM database authentication flag for the instance. If you receive errors like "The instance doesn't allow using executeSql to access this instance", you can use the `get_instance` tool to check the `data_api_access` setting.
    When you receive authentication errors:
    1. Check if the currently logged-in user account exists as an IAM user on the instance using the `list_users` tool.
    2. If the IAM user account doesn't exist, use the `create_user` tool to create the IAM user account for the logged-in user.
    3. If the currently logged-in user doesn't have the proper database user roles, you can use the `update_user` tool to grant database roles to the user. For example, the `cloudsqlsuperuser` role can provide an IAM user with many required permissions.
    4. Check if the currently logged-in user has the correct IAM permissions assigned for the project. You can use the `gcloud projects get-iam-policy [PROJECT_ID]` command to check if the user has the proper IAM roles or permissions assigned for the project.
       - The user must have the `cloudsql.instances.login` permission to use automatic IAM database authentication.
       - The user must have the `cloudsql.instances.executeSql` permission to execute SQL statements using the `execute_sql` tool or `executeSql` API.
       - Common IAM roles that contain the required permissions: Cloud SQL Instance User (`roles/cloudsql.instanceUser`) or Cloud SQL Admin (`roles/cloudsql.admin`).
    When receiving an `ExecuteSqlResponse`, always check the `message` and `status` fields within the response body. A successful HTTP status code doesn't guarantee full success of all SQL statements. The `message` and `status` fields will indicate if there were any partial errors or warnings during SQL statement execution.
    Connector
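
A sketch of the final caveat in this entry: a successful HTTP status doesn't guarantee every statement succeeded, so check `message` and `status` in the `ExecuteSqlResponse` body. The stub, the argument name, and the "OK" sentinel are assumptions.

```python
def call_tool(name, args):  # hypothetical MCP transport stub
    raise NotImplementedError  # wire this to a real MCP client session

def execute_readonly(sql):
    resp = call_tool("execute_sql_readonly", {"sql": sql})  # arg name assumed
    # Partial errors or warnings can ride along even on a 2xx HTTP status.
    if resp.get("message"):
        print("server note:", resp["message"])
    if resp.get("status") not in (None, "OK"):  # sentinel value assumed
        raise RuntimeError(f"SQL-level failure: {resp['status']}")
    return resp
```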
  • List themes available to the authenticated user. Returns theme IDs and names that can be passed to generate_presentation.
    Connector
  • DEFAULT search — find works by name, title, or any descriptive query. Handles partial matches and title variations.
    TRIGGER: Any mention of a work by name ("the blue painting," "Self-Portrait"), or finding something ("where's that piece I did last year"). Use this to resolve work_ids before calling get_work, update_work, get_upload_url, or any tool needing a work_id. For structured filters (status, date, medium), use search_works instead.
    YOU (the connected AI) translate the query. Pass the user's natural language as `query` (for title/medium text search) and optionally set structured filters you can infer: status, date_start, date_end, medium, artwork_type, series_name, current_location_type, sort_by, sort_direction.
    Examples:
    - "sold paintings from the 90s" → query: "painting", status: "sold", date_start: 1990, date_end: 1999
    - "the blue one" → query: "blue"
    - "Self-Portrait" → query: "Self-Portrait"
    Connector
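
A sketch of the query translation this entry asks the connected AI to perform, reusing its own worked example. The tool name `find_works` and the stub are assumptions; the filter names come from the listing.

```python
def call_tool(name, args):
    """Hypothetical MCP transport stub; wire to a real client session."""
    return {}  # placeholder result

# "sold paintings from the 90s" -> free-text query plus inferred filters
works = call_tool("find_works", {  # tool name assumed
    "query": "painting",
    "status": "sold",
    "date_start": 1990,
    "date_end": 1999,
})
# Resolve work_ids here before calling get_work, update_work, or get_upload_url.
```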
  • Returns the Parquet schema for all tables in the Valuein SEC data warehouse. Includes table descriptions, column names, types, primary keys, and foreign-key references. Use this tool to understand the data model before querying with other tools. No data reads required — schema is embedded in the manifest. Available on all plans.
    Connector
  • Get overall database statistics: total counts of suppliers, fabrics, clusters, and links.
    USE WHEN user asks:
    - "how big is your database" / "what's the coverage" / "data overview"
    - "how many suppliers / fabrics / clusters do you have"
    - "database size / scale / freshness"
    - "is the data up to date"
    - "live counts for MRC data"
    - "first-time onboarding: 'what can MRC data do for me'"
    - "how big is the database / how much data do you have / how many suppliers are covered"
    - "your data scale / data volume / freshness"
    WORKFLOW: Standalone discovery tool — call this first when a user asks about data scale or freshness. Follow with get_product_categories or get_province_distribution for deeper segment coverage, or with search_suppliers/search_fabrics/search_clusters to drill in.
    DIFFERENCE from the database-overview resource (mrc://overview): This is dynamic (live counts + generated_at). The resource is static (geographic scope, top provinces, data standards).
    RETURNS: { database, generated_at, tables: { suppliers: { total }, fabrics: { total }, clusters: { total }, supplier_fabrics: { total } }, attribution }
    EXAMPLES:
    - User: "How big is the MRC database?" → get_stats({})
    - User: "Give me the latest data scale numbers" → get_stats({})
    - User: "How many suppliers and fabrics are in the MRC database?" → get_stats({})
    ERRORS & SELF-CORRECTION:
    - All counts 0 → database query failed or D1 binding lost. Retry once after 5 seconds. If still 0, surface a transport error to the user.
    - Rate limit 429 → wait 60 seconds; do not retry immediately.
    AVOID: Do not call this before every tool — only when the user explicitly asks about scale. Do not call it to get per-category counts — use get_product_categories. Do not call it to get geographic scope metadata — use the database-overview resource (mrc://overview), which is static.
    NOTE: Only verified + partially_verified records are reported. Unverified reserve data is excluded from counts. Source: MRC Data (meacheal.ai). Get overall database statistics (total suppliers, total fabrics, total clusters, total link records): a dynamic snapshot with a generation timestamp.
    Connector
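
A sketch of the self-correction rules in this entry: all-zero counts mean retry once after 5 seconds, then surface a transport error; a 429 means wait 60 seconds. Only the stub is an assumption; the response shape is quoted in the listing.

```python
import time

def call_tool(name, args):  # hypothetical MCP transport stub
    raise NotImplementedError  # wire this to a real MCP client session

def get_stats():
    stats = call_tool("get_stats", {})
    if all(t["total"] == 0 for t in stats["tables"].values()):
        time.sleep(5)  # query failure or lost D1 binding: retry once
        stats = call_tool("get_stats", {})
        if all(t["total"] == 0 for t in stats["tables"].values()):
            raise RuntimeError("MRC get_stats: all counts still 0 (transport error)")
    return stats  # { database, generated_at, tables: {...}, attribution }
```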
  • Import data into a Cloud SQL instance.
    If the file doesn't start with `gs://`, then the assumption is that the file is stored locally. If the file is local, it must be uploaded to Cloud Storage before you can make the actual `import_data` call. To upload the file to Cloud Storage, you can use the `gcloud` or `gsutil` commands. Before you upload the file, consider whether you want to use an existing bucket or create a new bucket in the provided project.
    After the file is uploaded to Cloud Storage, the instance service account must have sufficient permissions to read the uploaded file from the Cloud Storage bucket. This can be accomplished as follows:
    1. Use the `get_instance` tool to get the email address of the instance service account. From the output of the tool, get the value of the `serviceAccountEmailAddress` field.
    2. Grant the instance service account the `storage.objectAdmin` role on the provided Cloud Storage bucket. Use a command like `gcloud storage buckets add-iam-policy-binding` or a request to the Cloud Storage API. It can take from two to seven minutes or more for the role to be granted and the permissions to be propagated to the service account in Cloud Storage. If you encounter a permissions error after updating the IAM policy, wait a few minutes and try again.
    After permissions are granted, you can import the data. We recommend that you leave optional parameters empty and use the system defaults. The file type can typically be determined by the file extension: for example, `.sql` for a SQL file or `.csv` for a CSV file.
    The following is a sample SQL `importContext` for MySQL:
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL"
    }
    ```
    There is no `database` parameter for MySQL, since the database name is expected to be present in the SQL file. Specify only one URI. No other fields are required outside of `importContext`.
    For PostgreSQL, the `database` field is required. The following is a sample PostgreSQL `importContext` with the `database` field specified:
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL",
      "database": "sample-db"
    }
    ```
    The `import_data` tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
    Connector
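
A sketch of the end-to-end flow: build the `importContext` (the `database` field only for PostgreSQL), call `import_data`, and poll the long-running operation with `get_operation`. The stub, the operation's `name` key, and the `DONE` sentinel are assumptions; the JSON shapes are the entry's own samples.

```python
import time

def call_tool(name, args):  # hypothetical MCP transport stub
    raise NotImplementedError  # wire this to a real MCP client session

def import_sql_file(uri, database=None):
    ctx = {"uri": uri, "kind": "sql#importContext", "fileType": "SQL"}
    if database is not None:
        ctx["database"] = database  # required for PostgreSQL, omitted for MySQL
    op = call_tool("import_data", {"importContext": ctx})
    # import_data returns a long-running operation: poll until it completes.
    while call_tool("get_operation", {"operation": op["name"]})["status"] != "DONE":
        time.sleep(10)

# import_sql_file("gs://sample-gcs-bucket/sample-file.sql")                         # MySQL
# import_sql_file("gs://sample-gcs-bucket/sample-file.sql", database="sample-db")  # PostgreSQL
```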
  • Lists perspectives — either browsing one workspace or searching by title across every workspace the user can access. Items include perspective_id, title, status, conversation count, and workspace info.
    Behavior:
    - Read-only.
    - Browse mode (workspace_id, no query): lists every perspective in that workspace.
    - Search mode (query): matches against the perspective title across accessible workspaces. Optional workspace_id narrows the search. Query must be non-empty and ≤200 chars.
    - Errors with "Please provide workspace_id to list perspectives or query to search." if neither is given.
    - Pass nextCursor back as cursor; has_more indicates further results.
    When to use this tool:
    - Resolving a perspective_id from a name the user mentioned (search mode).
    - Browsing a workspace's perspectives to pick or summarize.
    When NOT to use this tool:
    - Inspecting one known perspective in detail — use perspective_get.
    - Aggregate counts or rates — use perspective_get_stats.
    - Fetching conversation data — use perspective_list_conversations or perspective_get_conversations.
    Examples:
    - List all in a workspace: `{ workspace_id: "ws_..." }`
    - Search by name across all workspaces: `{ query: "welcome" }`
    - Search within a workspace: `{ query: "welcome", workspace_id: "ws_..." }`
    Connector
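
A sketch of the cursor pagination described above: pass `nextCursor` back as `cursor` while `has_more` is set. The stub, the exact tool name, and the `items` key are assumptions.

```python
def call_tool(name, args):  # hypothetical MCP transport stub
    raise NotImplementedError  # wire this to a real MCP client session

def list_all_perspectives(workspace_id):
    items, cursor = [], None
    while True:
        args = {"workspace_id": workspace_id}  # browse mode: no query
        if cursor is not None:
            args["cursor"] = cursor  # feed nextCursor back as cursor
        page = call_tool("perspective_list", args)  # tool name assumed
        items.extend(page["items"])  # items key assumed
        if not page.get("has_more"):
            return items
        cursor = page["nextCursor"]
```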
  • Run a read-only SQL query against the database. Call list_tables() and describe_table() first to see available tables and columns. SELECT only, 5s timeout, 1000-row limit, JSON results.
    Examples:
    - query("SELECT full_name, stars FROM ai_repos ORDER BY stars DESC LIMIT 10")
    - query("SELECT domain, COUNT(*) FROM ai_repos GROUP BY domain ORDER BY 2 DESC")
    Connector
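
A sketch of the call order the last entry recommends: list_tables(), then describe_table(), then a SELECT within the 5-second / 1000-row limits. The stub and the argument names are assumptions; the example query is the entry's own.

```python
def call_tool(name, args):  # hypothetical MCP transport stub
    raise NotImplementedError  # wire this to a real MCP client session

def top_ai_repos():
    tables = call_tool("list_tables", {})  # confirm the ai_repos table exists
    schema = call_tool("describe_table", {"table": "ai_repos"})  # arg name assumed
    print(tables, schema)  # confirm the full_name / stars columns
    # SELECT only; 5s timeout, 1000-row cap, JSON results.
    return call_tool("query", {
        "sql": "SELECT full_name, stars FROM ai_repos ORDER BY stars DESC LIMIT 10"})
```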