Glama
133,443 tools. Last updated 2026-05-13 00:12

"database" matching MCP tools:

  • Rollback a project to a previous version. ⚠️ WARNING: This reverts schema AND code to the specified commit. Database data is NOT rolled back. Use get_version_history to find the commit SHA of the version you want to roll back to. After rollback, use get_job_status to monitor the redeployment. Rollback is useful when a schema change breaks deployment.
    Connector
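
A minimal sketch of the rollback workflow for the project-rollback tool above, assuming the connector is reachable through the Python `mcp` SDK's `ClientSession`. `get_version_history` and `get_job_status` are named in the listing; the rollback tool's registered name (`rollback` here) and all argument names are assumptions.

```python
from mcp import ClientSession

async def rollback_project(session: ClientSession, project_id: str) -> None:
    # 1. Look up the commit SHA of the version to roll back to.
    history = await session.call_tool("get_version_history", {"project_id": project_id})  # arg name assumed

    # 2. Roll back: reverts schema AND code to that commit; database data is NOT rolled back.
    result = await session.call_tool(
        "rollback",                                   # assumed tool name
        {"project_id": project_id, "commit_sha": "<sha chosen from history>"},  # arg names assumed
    )

    # 3. Monitor the redeployment triggered by the rollback.
    status = await session.call_tool("get_job_status", {"project_id": project_id})  # arg name assumed
    print(history, result, status)
```
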
  • Use only after explicit user confirmation and a prior prepare_publish result to publish or schedule a Dreamlit workflow. Side effect: installs live database/repeating/auth triggers, schedules or sends broadcasts, and may enable notification delivery; sandbox mode can hold notifications for inspection. Returns the published workflow status and app URLs. Do not call speculatively or without carrying forward the prepare_publish safety fields.
    Connector
  • DESTRUCTIVE — IRREVERSIBLE. Permanently delete a file from the user's Drive. Removes the file from S3 storage and the database. Storage quota is freed immediately. ALWAYS ask for explicit user confirmation before calling this tool.
    Parameters to validate before calling:
    - file_token (string, required) — The file token (UUID) of the file to delete. Get via fetch_files.
    Notes: Always confirm with the user before calling. Explain what will be lost.
    Connector
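
A minimal sketch of a confirm-then-delete flow for the delete_file tool above, assuming a Python `mcp` `ClientSession`; `delete_file`, `fetch_files`, and `file_token` come from the listing, and the confirmation prompt is only illustrative.

```python
from mcp import ClientSession

async def delete_drive_file(session: ClientSession, file_token: str) -> None:
    # file_token is the UUID of the file, previously obtained via fetch_files.
    # DESTRUCTIVE / IRREVERSIBLE: require explicit user confirmation first and
    # explain what will be lost (the file is removed from S3 and the database).
    if input(f"Permanently delete file {file_token}? [y/N] ").strip().lower() != "y":
        return
    result = await session.call_tool("delete_file", {"file_token": file_token})
    print(result)  # storage quota is freed immediately
```
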
  • Read-only. Use to find workflows in a project by name, description, or trigger type before inspection or editing. Trigger filters include database, auth email, repeating, broadcast, and no-trigger workflows. Returns paginated workflow summaries, published/sandbox state, trigger type, workflow URLs, totalCount, hasMore, and nextOffset. Do not use as the final source of truth before editing; call get_workflow_and_preview_url for full structure.
    Connector
  • Search Cochrane systematic reviews via PubMed. Finds Cochrane Database of Systematic Reviews articles matching your query. Returns PubMed IDs, titles, and publication dates. Use get_review_detail with a PMID to get the full abstract.
    Args:
    - query: Search terms for finding reviews (e.g. 'diabetes exercise', 'hypertension treatment', 'childhood vaccination safety').
    - limit: Maximum number of results to return (default 20, max 100).
    Connector
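
A minimal sketch of the search-then-detail flow for the Cochrane tool above, via a Python `mcp` `ClientSession`; `query`, `limit`, and `get_review_detail` come from the listing, while the search tool's registered name (`search_reviews` here) and the `pmid` argument name are assumptions.

```python
from mcp import ClientSession

async def find_cochrane_reviews(session: ClientSession) -> None:
    # Search terms and result cap per the documented Args (default 20, max 100).
    results = await session.call_tool(
        "search_reviews",                              # assumed tool name
        {"query": "diabetes exercise", "limit": 20},
    )
    print(results)  # PubMed IDs, titles, publication dates

    # Fetch the full abstract for one of the returned PubMed IDs.
    detail = await session.call_tool(
        "get_review_detail",
        {"pmid": "<PMID taken from results>"},         # argument name assumed
    )
    print(detail)
```
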
  • Search current promotions (Aktionen) across all 22 Swiss retailers. Uses full-text search + trigram matching directly on the deals database. Free — does not consume search credits. Returns product name, price, original price, discount %, retailer, category, and validity dates.
    Connector

Matching MCP Servers

  • License: A | Quality: - | Maintenance: C
    Provides a natural language interface for querying and managing PostgreSQL, MySQL, MariaDB, MSSQL, and SQLite databases using the Model Context Protocol. Users can explore database schemas and visualize query results through an integrated web dashboard.
    Last updated | 228 | MIT
  • License: F | Quality: - | Maintenance: C
    Enables interaction with MySQL and PostgreSQL databases through separate pluggable MCP servers with shared core functionality. Features optional TTL caching, bilingual prompts, and a unified gateway for managing multiple database connections.
    Last updated

Matching MCP Connectors

  • Access comprehensive company data including financial records, ownership structures, and contact information. Search for businesses using domains, registration numbers, or LinkedIn profiles to streamline due diligence and lead generation. Retrieve historical financial performance and complex corporate group structures to support informed business analysis.

  • MCP server for US nursing facility search and ownership lookup (NursingHomeDatabase).

  • Search and replace in WordPress database (e.g. URL migration). Handles serialized data safely. Use dry_run=true first to preview changes. Requires: API key with write scope.
    Args:
    - slug: Site identifier
    - old: String to search for (e.g. "http://old-domain.com")
    - new: Replacement string (e.g. "https://new-domain.com")
    - dry_run: Preview only without making changes (default: true)
    Returns: {"replacements": 42, "tables_affected": 5, "dry_run": true}
    Connector
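
A minimal sketch of the dry-run-first pattern the search-and-replace tool above calls for, via a Python `mcp` `ClientSession`; the argument names `slug`, `old`, `new`, and `dry_run` come from the listing, while the registered tool name (`wp_search_replace` here) is an assumption.

```python
from mcp import ClientSession

async def migrate_site_url(session: ClientSession, slug: str) -> None:
    args = {
        "slug": slug,
        "old": "http://old-domain.com",
        "new": "https://new-domain.com",
        "dry_run": True,   # preview only; serialized data is handled safely either way
    }
    preview = await session.call_tool("wp_search_replace", args)   # assumed tool name
    print(preview)  # e.g. {"replacements": 42, "tables_affected": 5, "dry_run": true}

    # Only after reviewing the preview, run the real replacement.
    args["dry_run"] = False
    result = await session.call_tool("wp_search_replace", args)
    print(result)
```
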
  • Compare a product's price across all Swiss retailers. Requires a product name or EAN barcode. Returns current price, original price, 30-day average, 90-day minimum, and promotion status per retailer. Uses the canonical product database — only works for products matched via EAN barcode. Free — does not consume credits.
    Connector
  • Return the full list of currently unreliable Czech VAT payers from ADIS. WARNING: response can be 50–100 MB (tens of thousands of entries). Intended for daily mirroring into a local database, not for ad-hoc inspection. For "is this specific company unreliable?" use check_dph_payer instead.
    Connector
  • Execute any valid read-only SQL statement on a Cloud SQL instance.
    To support the `execute_sql_readonly` tool, a Cloud SQL instance must meet the following requirements:
    * The value of `data_api_access` must be set to `ALLOW_DATA_API`.
    * For a MySQL instance, the database flag `cloudsql_iam_authentication` must be set to `on`. For a PostgreSQL instance, the database flag `cloudsql.iam_authentication` must be set to `on`.
    * An IAM user account or IAM service account (`CLOUD_IAM_USER` or `CLOUD_IAM_SERVICE_ACCOUNT`) is required to call the `execute_sql_readonly` tool. The tool executes SQL statements using the privileges of the database user logged in with IAM database authentication.
    After you use the `create_instance` tool to create an instance, you can use the `create_user` tool to create an IAM user account for the user currently logged in to the project.
    The `execute_sql_readonly` tool has the following limitations:
    * If a SQL statement returns a response larger than 10 MB, the response is truncated.
    * The tool has a default timeout of 30 seconds. If a query runs longer than 30 seconds, the tool returns a `DEADLINE_EXCEEDED` error.
    * The tool isn't supported for SQL Server.
    If you receive errors similar to "IAM authentication is not enabled for the instance", use the `get_instance` tool to check the value of the IAM database authentication flag for the instance. If you receive errors like "The instance doesn't allow using executeSql to access this instance", use the `get_instance` tool to check the `data_api_access` setting.
    When you receive authentication errors:
    1. Check whether the currently logged-in user account exists as an IAM user on the instance using the `list_users` tool.
    2. If the IAM user account doesn't exist, use the `create_user` tool to create it for the logged-in user.
    3. If the currently logged-in user doesn't have the proper database user roles, use the `update_user` tool to grant database roles to the user. For example, the `cloudsqlsuperuser` role can provide an IAM user with many required permissions.
    4. Check whether the currently logged-in user has the correct IAM permissions assigned for the project. You can use the `gcloud projects get-iam-policy [PROJECT_ID]` command to verify the user's IAM roles and permissions.
    Required permissions:
    * The user must have the `cloudsql.instances.login` permission for automatic IAM database authentication.
    * The user must have the `cloudsql.instances.executeSql` permission to execute SQL statements using the `execute_sql_readonly` tool or the `executeSql` API.
    * Common IAM roles that contain the required permissions: Cloud SQL Instance User (`roles/cloudsql.instanceUser`) or Cloud SQL Admin (`roles/cloudsql.admin`).
    When receiving an `ExecuteSqlResponse`, always check the `message` and `status` fields within the response body. A successful HTTP status code doesn't guarantee full success of all SQL statements; the `message` and `status` fields indicate any partial errors or warnings during SQL statement execution.
    Connector
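
A minimal sketch of calling `execute_sql_readonly` and checking the response body, per the guidance above and via a Python `mcp` `ClientSession`; the listing documents requirements and limits but not the input schema, so the argument names here are assumptions.

```python
from mcp import ClientSession

async def run_readonly_query(session: ClientSession) -> None:
    # Prerequisites (see above): data_api_access=ALLOW_DATA_API, IAM database
    # authentication enabled, and cloudsql.instances.executeSql permission.
    response = await session.call_tool(
        "execute_sql_readonly",
        {
            "instance": "my-instance",                         # argument name assumed
            "database": "appdb",                               # argument name assumed
            "sql_statement": "SELECT COUNT(*) FROM orders",    # argument name assumed
        },
    )
    # A successful HTTP status does not guarantee every statement succeeded:
    # inspect the message/status fields of the ExecuteSqlResponse in the result.
    print(response)
```
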
  • Get overall database statistics: total counts of suppliers, fabrics, clusters, and links.
    USE WHEN user asks:
    - "how big is your database" / "what's the coverage" / "data overview"
    - "how many suppliers / fabrics / clusters do you have"
    - "database size / scale / freshness"
    - "is the data up to date"
    - "live counts for MRC data"
    - "first-time onboarding: 'what can MRC data do for me'"
    - "how big is the database / how much data do you have / how many suppliers are covered" (also matched in Chinese)
    - "what is your data scale / data volume / data freshness" (also matched in Chinese)
    WORKFLOW: Standalone discovery tool — call this first when a user asks about data scale or freshness. Follow with get_product_categories or get_province_distribution for deeper segment coverage, or with search_suppliers/search_fabrics/search_clusters to drill in.
    DIFFERENCE from the database-overview resource (mrc://overview): this tool is dynamic (live counts + generated_at); the resource is static (geographic scope, top provinces, data standards).
    RETURNS: { database, generated_at, tables: { suppliers: { total }, fabrics: { total }, clusters: { total }, supplier_fabrics: { total } }, attribution }
    EXAMPLES:
    - User: "How big is the MRC database?" → get_stats({})
    - User: "Give me the latest data scale numbers" → get_stats({})
    - User: "How many suppliers and fabrics are in the MRC database?" (asked in Chinese) → get_stats({})
    ERRORS & SELF-CORRECTION:
    - All counts 0 → database query failed or D1 binding lost. Retry once after 5 seconds. If still 0, surface a transport error to the user.
    - Rate limit 429 → wait 60 seconds; do not retry immediately.
    AVOID: Do not call this before every tool — only when the user explicitly asks about scale. Do not call it for per-category counts — use get_product_categories. Do not call it for geographic scope metadata — use the static database-overview resource (mrc://overview).
    NOTE: Only reports verified + partially_verified records. Unverified reserve data is excluded from counts. Source: MRC Data (meacheal.ai).
    (Chinese summary, translated: get overall database statistics, i.e. total suppliers, fabrics, industrial clusters, and link records; a dynamic snapshot with a generation timestamp.)
    Connector
  • Import data into a Cloud SQL instance.
    If the file doesn't start with `gs://`, then the assumption is that the file is stored locally. If the file is local, it must be uploaded to Cloud Storage before you can make the actual `import_data` call. To upload the file to Cloud Storage, you can use the `gcloud` or `gsutil` commands. Before you upload the file, consider whether you want to use an existing bucket or create a new bucket in the provided project.
    After the file is uploaded to Cloud Storage, the instance service account must have sufficient permissions to read the uploaded file from the Cloud Storage bucket. This can be accomplished as follows:
    1. Use the `get_instance` tool to get the email address of the instance service account. From the output of the tool, get the value of the `serviceAccountEmailAddress` field.
    2. Grant the instance service account the `storage.objectAdmin` role on the provided Cloud Storage bucket. Use a command like `gcloud storage buckets add-iam-policy-binding` or a request to the Cloud Storage API. It can take from two to seven minutes or more for the role to be granted and the permissions to propagate to the service account in Cloud Storage. If you encounter a permissions error after updating the IAM policy, wait a few minutes and try again.
    After permissions are granted, you can import the data. We recommend that you leave optional parameters empty and use the system defaults. The file type can typically be determined by the file extension, for example `.sql` for a SQL file or `.csv` for a CSV file.
    The following is a sample SQL `importContext` for MySQL. There is no `database` parameter for MySQL, since the database name is expected to be present in the SQL file. Specify only one URI. No other fields are required outside of `importContext`.
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL"
    }
    ```
    For PostgreSQL, the `database` field is required. The following is a sample PostgreSQL `importContext` with the `database` field specified.
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL",
      "database": "sample-db"
    }
    ```
    The `import_data` tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
    Connector
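
A minimal sketch of the PostgreSQL import flow above, via a Python `mcp` `ClientSession`; the `importContext` fields and the `get_operation` polling step come from the listing, while the top-level argument names and the completion check are assumptions.

```python
import asyncio
from mcp import ClientSession

async def import_sql_dump(session: ClientSession) -> None:
    # Assumes the dump is already in Cloud Storage and the instance service
    # account has storage.objectAdmin on the bucket (steps described above).
    import_context = {
        "uri": "gs://sample-gcs-bucket/sample-file.sql",
        "kind": "sql#importContext",
        "fileType": "SQL",
        "database": "sample-db",    # required for PostgreSQL, omitted for MySQL
    }
    op = await session.call_tool(
        "import_data",
        {"instance": "my-instance", "importContext": import_context},  # argument names assumed
    )
    # import_data returns a long-running operation; poll it until it completes.
    while True:
        status = await session.call_tool("get_operation", {"operation": "<operation id from op>"})  # arg name assumed
        if "DONE" in str(status.content):   # completion check is a placeholder
            break
        await asyncio.sleep(10)
```
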
  • Check server connectivity, authentication status, and database size. When to use: First tool call to verify MCP connection and auth state before collection operations. Examples: - `status()` - check if server is operational, see quote_count, and current auth state
    Connector
  • Describe a specific table. ⚠️ WORKFLOW: ALWAYS call this before writing queries that reference a table. Understanding the schema is essential for writing correct SQL queries.
    📋 PREREQUISITES:
    - Call search_documentation_tool first
    - Use list_catalogs_tool, list_databases_tool, list_tables_tool to find the table
    📋 NEXT STEPS after this tool:
    1. Use generate_spatial_query_tool to create SQL using the schema
    2. Use execute_query_tool to test the query
    This tool retrieves the schema of a specified table, including column names and types. It is used to understand the structure of a table before querying or analysis.
    Parameters: catalog (str): the name of the catalog; database (str): the name of the database; table (str): the name of the table; ctx (Context): FastMCP context (injected automatically).
    Returns: TableDescriptionOutput, a structured object containing the table schema information; 'schema' holds the schema of the table, which may include column names, types, and other metadata.
    Example usage for LLM: when the user asks for the schema of a specific table.
    - User: "What is the schema of the 'users' table in the 'default' database of the 'wherobots' catalog?" → describe_table('wherobots', 'default', 'users')
    - User: "Describe the buildings table structure" → describe_table('wherobots_open_data', 'overture', 'buildings')
    Connector
  • Search 20,000+ free icons across 10 libraries by meaning, label, visual description, tags, and synonyms. Use this when the user describes an icon concept such as "database", "user profile", "chill", "security", or "AI model". Returns matching icons with SVG code and public semantic guidance.
    Connector
  • Execute a SQL query on a site's database. Supports SELECT, INSERT, UPDATE, DELETE, and DDL statements. Results are limited to 1000 rows for SELECT queries. Requires: API key with write scope.
    Args:
    - slug: Site identifier
    - database: Database name
    - query: SQL query string
    Returns: {"columns": ["id", "title"], "rows": [[1, "Hello"], ...], "affected_rows": 0, "query_time_ms": 12}
    Connector
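
A minimal sketch of a SELECT through the SQL execution tool above, via a Python `mcp` `ClientSession`; `slug`, `database`, `query`, and the return shape come from the listing, while the registered tool name (`db_query` here) is an assumption.

```python
from mcp import ClientSession

async def list_recent_posts(session: ClientSession) -> None:
    result = await session.call_tool(
        "db_query",   # assumed tool name
        {
            "slug": "mysite",          # site identifier
            "database": "wp_mysite",
            "query": "SELECT ID, post_title FROM wp_posts ORDER BY ID DESC LIMIT 10",
        },
    )
    # SELECT results are capped at 1000 rows and come back as
    # {"columns": [...], "rows": [...], "affected_rows": 0, "query_time_ms": ...}.
    print(result)
```
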
  • Build a Tableau dashboard from a Microsoft SQL Server table (end-to-end). Pipeline: MSSQL → schema inference → chart suggestion → workbook creation → live MSSQL connection → .twb output. Requires pyodbc for schema inference and ODBC Driver 17 for SQL Server. IMPORTANT FOR AI AGENTS: see ``csv_to_dashboard`` — auto-charts come from rules, not natural-language requests. Use ``required_charts`` to guarantee specific charts, ``reference_image`` for image-based styling, and cite the returned manifest dict when describing results.
    Args:
    - server_host: MSSQL server hostname.
    - dbname: Database name.
    - table_name: Table to visualize.
    - username: Database username (ignored if trusted_connection=True).
    - password: Database password (used for schema inference only).
    - port: Server port (default 1433).
    - trusted_connection: Use Windows Authentication instead of SQL auth.
    - output_path: Output .twb path (defaults to <table>_dashboard.twb).
    - dashboard_title: Dashboard title.
    - max_charts: Maximum charts (0 = use rules default).
    - template_path: TWB template path.
    - theme: Theme preset name.
    - rules_yaml: Optional YAML string with dashboard rules overrides.
    - required_charts: See ``csv_to_dashboard.required_charts``.
    - reference_image: See ``csv_to_dashboard.reference_image``.
    Returns: Structured manifest dict describing what was actually built.
    Connector
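
A minimal sketch of an end-to-end call to the MSSQL dashboard builder above, via a Python `mcp` `ClientSession`; argument names mirror the Args in the listing, while the registered tool name (`mssql_to_dashboard` here) and the sample values are assumptions.

```python
from mcp import ClientSession

async def build_orders_dashboard(session: ClientSession) -> None:
    manifest = await session.call_tool(
        "mssql_to_dashboard",            # assumed tool name
        {
            "server_host": "sql.example.internal",   # sample value, not a real host
            "dbname": "SalesDB",
            "table_name": "Orders",
            "trusted_connection": True,  # Windows Authentication; no SQL credentials needed
            "port": 1433,
            "output_path": "orders_dashboard.twb",
            "dashboard_title": "Orders Overview",
            "max_charts": 0,             # 0 = use rules default
        },
    )
    # Auto-charts come from rules, not natural-language requests; cite the
    # returned manifest when describing what was actually built.
    print(manifest)
```
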
  • Get WordPress database information (size, tables, row counts). Requires: API key with read scope. WordPress sites only.
    Args:
    - slug: Site identifier
    Returns: {"database": "wp_mysite", "size_mb": 45.2, "tables": 12, "total_rows": 15432}
    Connector
  • Record a simple pass/fail outcome report for a service call. No LLM analysis - just logs the result to the quality database. Cheaper alternative to verify_outcome when you only need to record success/failure.
    Connector
  • Build a Tableau dashboard from a MySQL table (end-to-end). Pipeline: MySQL → schema inference → chart suggestion → workbook creation → live MySQL connection → .twb output. Requires mysql-connector-python for schema inference. IMPORTANT FOR AI AGENTS: see ``csv_to_dashboard`` — auto-charts come from rules, not natural-language requests. Use ``required_charts`` to guarantee specific charts, ``reference_image`` for image-based styling, and cite the returned manifest dict when describing results.
    Args:
    - server_host: MySQL server hostname.
    - dbname: Database name.
    - table_name: Table to visualize.
    - username: Database username.
    - password: Database password (used for schema inference only; not stored in the workbook).
    - port: Server port (default 3306).
    - output_path: Output .twb path (defaults to <table>_dashboard.twb).
    - dashboard_title: Dashboard title.
    - max_charts: Maximum charts (0 = use rules default).
    - template_path: TWB template path.
    - theme: Theme preset name.
    - rules_yaml: Optional YAML string with dashboard rules overrides.
    - required_charts: See ``csv_to_dashboard.required_charts``.
    - reference_image: See ``csv_to_dashboard.reference_image``.
    Returns: Structured manifest dict describing what was actually built.
    Connector