Glama
134,864 tools. Last updated 2026-05-14 10:18

"A server for connecting to a MySQL database" matching MCP tools:

  • Generate dialect-correct ALTER TABLE migration SQL + rollback from a plain-English intent. Output uses the connection's exact dialect (ALTER TABLE for all three, plus pg-specific `USING` casts / mssql-specific `sp_rename` / mysql-specific `MODIFY COLUMN`). Never executes. Check response `dialect` field before manually editing — don't hand-translate across dialects. [BUILD tier]
    Connector
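A hedged sketch of what dialect-specific output for one intent might look like, say "change users.age from text to an integer". The table and column names are hypothetical; the dialect-specific constructs (`USING` cast, `MODIFY COLUMN`) are the ones the description names:

```python
# Illustrative per-dialect migrations for the same plain-English intent.
# Not the tool's actual output; always check the response's `dialect` field.
EXAMPLE_MIGRATIONS = {
    "postgres": "ALTER TABLE users ALTER COLUMN age TYPE integer USING age::integer;",
    "mysql":    "ALTER TABLE users MODIFY COLUMN age INT;",
    "mssql":    "ALTER TABLE users ALTER COLUMN age INT;",
}
```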
  • Rollback a project to a previous version. ⚠️ WARNING: This reverts schema AND code to the specified commit. Database data is NOT rolled back. Use get_version_history to find the commit SHA of the version you want to rollback to. After rollback, use get_job_status to monitor the redeployment. Rollback is useful when a schema change breaks deployment.
    Connector
  • Checks that the Strale API is reachable and the MCP server is running. Call this before a series of capability executions to verify connectivity, or when troubleshooting connection issues. Returns server status, version, tool count, capability count, solution count, and a timestamp. No API key required.
    Connector
  • Update a database user for a Cloud SQL instance. A common use case for `update_user` is to grant a user the `cloudsqlsuperuser` role, which can provide a user with many required permissions. This tool only supports updating users to assign database roles.
    * This tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
    * Before calling the `update_user` tool, always check the existing configuration of the user, such as the user type, with the `list_users` tool.
    * As a special case for MySQL, if the `list_users` tool returns a full email address in the `iamEmail` field, for example `{name=test-account, iamEmail=test-account@project-id.iam.gserviceaccount.com}`, then use that full email address as the `name` field of your `update_user` request. For example, `name=test-account@project-id.iam.gserviceaccount.com`.
    Key parameters for updating user roles:
    * `database_roles`: a list of database roles to be assigned to the user.
    * `revokeExistingRoles`: a boolean field (default: false) that controls how existing roles are handled.
    How role updates work (see the sketch below):
    1. If `revokeExistingRoles` is true:
       * Any existing roles granted to the user but NOT in the provided `database_roles` list will be REVOKED. Revoking only applies to non-system roles; system roles like `cloudsqliamuser` won't be revoked.
       * Any roles in the `database_roles` list that the user does NOT already have will be GRANTED.
       * If `database_roles` is empty, then ALL existing non-system roles are revoked.
    2. If `revokeExistingRoles` is false (default):
       * Any roles in the `database_roles` list that the user does NOT already have will be GRANTED.
       * Existing roles NOT in the `database_roles` list are KEPT.
       * If `database_roles` is empty, then there is no change to the user's roles.
    Examples, with existing roles `[roleA, roleB]`:
    * Request `database_roles: [roleB, roleC], revokeExistingRoles: true` revokes `roleA` and grants `roleC`; the user's roles become `[roleB, roleC]`.
    * Request `database_roles: [roleB, roleC], revokeExistingRoles: false` grants `roleC`; the roles become `[roleA, roleB, roleC]`.
    * Request `database_roles: [], revokeExistingRoles: true` revokes `roleA` and `roleB`; the roles become `[]`.
    * Request `database_roles: [], revokeExistingRoles: false` makes no change; the roles remain `[roleA, roleB]`.
    Connector
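A minimal sketch of the role-merge semantics described above, assuming only the documented behavior (this is not the Cloud SQL implementation; `SYSTEM_ROLES` is illustrative, with `cloudsqliamuser` as the one example the description names):

```python
# Sketch of the documented update_user role semantics.
SYSTEM_ROLES = {"cloudsqliamuser"}  # illustrative; "system roles" per the doc

def resulting_roles(existing, database_roles, revoke_existing_roles=False):
    if revoke_existing_roles:
        # Keep only roles that are re-listed or are system roles.
        kept = [r for r in existing if r in database_roles or r in SYSTEM_ROLES]
    else:
        kept = list(existing)  # existing roles are always kept
    # Grant any requested role the user does not already hold.
    return kept + [r for r in database_roles if r not in kept]

# The four documented examples, starting from [roleA, roleB]:
assert resulting_roles(["roleA", "roleB"], ["roleB", "roleC"], True) == ["roleB", "roleC"]
assert resulting_roles(["roleA", "roleB"], ["roleB", "roleC"], False) == ["roleA", "roleB", "roleC"]
assert resulting_roles(["roleA", "roleB"], [], True) == []
assert resulting_roles(["roleA", "roleB"], [], False) == ["roleA", "roleB"]
```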
  • Full Schedule Health Dashboard HTML report: DCMA-14 + CPLI + BEI + variance/slip register against the baseline. Wraps the CPP Schedule Health Review skill, which produces a self-contained ~1.3 MB HTML dashboard. The dashboard renders DCMA metrics, charts, baseline-vs-current variance, the slip register, GAO/AACE compliance bands, and a reproducibility manifest.
    REQUIRES BOTH XER inputs; without a baseline the report is structurally complete but most KPIs are blank. If you only have one XER, pass it twice for a synthetic zero-variance run. REQUIRES Node + Playwright on the server (the dashboard renders via headless Chromium); the tool returns a clear error if either prerequisite is missing.
    Use this tool when you need the formal HTML deliverable. For the JSON / dict shape only (no HTML), use ``critical_path_validator``, which exposes the same DCMA-14 block.
    === HOW TO PASS THE XER FILES ===
    For each XER (current, baseline) you supply EXACTLY ONE of:
    * ``*_xer_path``: filesystem path on the server. Use this when the MCP server runs locally and the file is already accessible to it.
    * ``*_xer_content``: full text of the XER file as a string. Use this when calling a HOSTED MCP server from your local Claude; the server has no access to your local filesystem, so you must send the content over the wire. The server writes it to a tempfile, runs the pipeline, and cleans up afterward.
    If both are supplied for the same XER, content wins (the path is ignored). If neither is supplied, the call returns an error. (See the call sketch below.)
    Args:
    * current_xer_path: server-side path to the current XER.
    * baseline_xer_path: server-side path to the baseline XER.
    * current_xer_content: full text of the current XER (alternative).
    * baseline_xer_content: full text of the baseline XER (alternative).
    * output_path: optional output HTML path. Ignored when content is supplied (output goes to an adjacent tempdir).
    * timeout_seconds: per-step Playwright timeout (default 120 s).
    * debug: pipe Playwright stderr / browser console to stderr.
    * return_html_inline: when True (default), the generated HTML is read off disk and returned as ``html_content`` in the response. Required for hosted/remote use; set False to save bandwidth when calling a local server where you can open ``html_path`` directly.
    Returns: { "ok": True, "html_path": "absolute path on the server", "html_content": "<!DOCTYPE html>..." (when return_html_inline), "current_xer": "...", "baseline_xer": "..." }. On error: {"error": "..."}.
    Note: the inline HTML payload can be ~1.3 MB. Some MCP transport stacks have request/response size limits (typically 5-20 MB). For very large XERs / very long dashboards, the call may fail at the transport layer; in that case set ``return_html_inline=False`` and arrange to fetch the file from ``html_path`` separately.
    Connector
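A hedged call sketch for the two input modes above; the file paths and names are hypothetical, while the argument names come from the tool's Args list:

```python
# Two ways to hand the XER files to the dashboard tool.
from pathlib import Path

# Local MCP server: files already sit on the server's filesystem.
local_args = {
    "current_xer_path": "/data/schedules/current.xer",    # hypothetical path
    "baseline_xer_path": "/data/schedules/baseline.xer",  # hypothetical path
    "return_html_inline": False,  # open html_path directly, save bandwidth
}

# Hosted MCP server: no shared filesystem, so send the file text over the wire.
hosted_args = {
    "current_xer_content": Path("current.xer").read_text(),
    "baseline_xer_content": Path("baseline.xer").read_text(),
    "return_html_inline": True,  # required remotely to get the HTML back
}
```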
  • ⚠️ SQL MUST BE VALID IN EVERY DIALECT YOU TARGET — stick to ANSI-ish SELECT syntax when mixing pg/mysql/mssql. `SELECT TOP 10` (mssql) or `LIMIT` (others) will fail on the wrong side. Run the same query across 2-4 connections in parallel; returns per-connection rows + errors for diffing. Canonical use cases: regional compare (`['mssql-reporting-us', 'mssql-reporting-eu']`), cross-dialect sync check (`['prod-postgres-fleet', 'prod-mysql-app']`), 3-env drift, 4-region compare. Resolve every connection name via `list_connections` first; tool fails per-connection on unknown names. ARCHITECT-tier cap: 4 connections; https://www.thinair.co/ for unlimited. [ARCHITECT tier]
    Connector
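A sketch of a cross-dialect fan-out under these constraints. The `connections` key and the `orders` table are hypothetical (the description does not name the parameter), but the connection names are its canonical examples, and the query avoids `LIMIT`/`TOP` so it parses on pg, mysql, and mssql alike:

```python
# One ANSI-safe aggregate, run in parallel across two dialects for diffing.
args = {
    "connections": ["prod-postgres-fleet", "prod-mysql-app"],  # resolve via list_connections first
    "query": "SELECT status, COUNT(*) AS n FROM orders GROUP BY status",
}
```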

Matching MCP Servers

  • Enables AI consciousness continuity and self-knowledge preservation across sessions using the Cognitive Hoffman Compression Framework (CHOFF) notation. Provides tools to save checkpoints, retrieve relevant memories with intelligent search, and access semantic anchors for decisions, breakthroughs, and questions.
    License: A (MIT) · Quality: - · Maintenance: C · Last updated: 1
  • Provides comprehensive A-share (Chinese stock market) data including stock information, historical prices, financial reports, macroeconomic indicators, technical analysis, and valuation metrics through the free Baostock data source.
    License: A (MIT) · Quality: - · Maintenance: D · Last updated: 24

Matching MCP Connectors

  • Manage your Canvas coursework with quick access to courses, assignments, and grades. Track upcomin…

  • Semantic search through Dickens' A Christmas Carol by meaning, theme, or character.

  • DESTRUCTIVE — IRREVERSIBLE. Permanently delete a file from the user's Drive. Removes the file from S3 storage and the database; storage quota is freed immediately. ALWAYS ask for explicit user confirmation before calling this tool, and explain what will be lost.
    Parameters to validate before calling:
    * file_token (string, required): the file token (UUID) of the file to delete. Get it via fetch_files.
    Connector
  • Switch between local and remote DanNet servers on the fly. This tool changes the DanNet server endpoint at runtime without restarting the MCP server. Useful for switching between development (local) and production (remote) servers.
    Args:
    * server: the server to switch to. Options:
      - "local": use localhost:3456 (development server)
      - "remote": use wordnet.dk (production server)
      - Custom URL: any valid URL starting with http:// or https://
    Returns a dict with status information:
    * status: "success" or "error"
    * message: description of the operation
    * previous_url: the URL that was previously active
    * current_url: the URL that is now active
    Examples:
    * `switch_dannet_server("local")` switches to the local development server.
    * `switch_dannet_server("remote")` switches to the production server.
    * `switch_dannet_server("https://my-custom-dannet.example.com")` switches to a custom server.
    Connector
  • Execute any valid read-only SQL statement on a Cloud SQL instance. To support the `execute_sql_readonly` tool, a Cloud SQL instance must meet the following requirements:
    * The value of `data_api_access` must be set to `ALLOW_DATA_API`.
    * For a MySQL instance, the database flag `cloudsql_iam_authentication` must be set to `on`. For a PostgreSQL instance, the database flag `cloudsql.iam_authentication` must be set to `on`.
    * An IAM user account or IAM service account (`CLOUD_IAM_USER` or `CLOUD_IAM_SERVICE_ACCOUNT`) is required to call the `execute_sql_readonly` tool. The tool executes SQL statements using the privileges of the database user logged in with IAM database authentication. After you use the `create_instance` tool to create an instance, you can use the `create_user` tool to create an IAM user account for the user currently logged in to the project.
    The `execute_sql_readonly` tool has the following limitations:
    * If a SQL statement returns a response larger than 10 MB, the response is truncated.
    * The tool has a default timeout of 30 seconds. If a query runs longer than 30 seconds, the tool returns a `DEADLINE_EXCEEDED` error.
    * The tool isn't supported for SQL Server.
    If you receive errors similar to "IAM authentication is not enabled for the instance", use the `get_instance` tool to check the value of the IAM database authentication flag for the instance. If you receive errors like "The instance doesn't allow using executeSql to access this instance", use the `get_instance` tool to check the `data_api_access` setting.
    When you receive authentication errors (see the sketch below):
    1. Check whether the currently logged-in user account exists as an IAM user on the instance, using the `list_users` tool.
    2. If the IAM user account doesn't exist, use the `create_user` tool to create the IAM user account for the logged-in user.
    3. If the currently logged-in user doesn't have the proper database user roles, use the `update_user` tool to grant database roles to the user. For example, the `cloudsqlsuperuser` role can provide an IAM user with many required permissions.
    4. Check whether the currently logged-in user has the correct IAM permissions assigned for the project. You can use the `gcloud projects get-iam-policy [PROJECT_ID]` command to check whether the user has the proper IAM roles or permissions assigned for the project.
    * The user must have the `cloudsql.instances.login` permission to do automatic IAM database authentication.
    * The user must have the `cloudsql.instances.executeSql` permission to execute SQL statements using the `execute_sql_readonly` tool or the `executeSql` API.
    * Common IAM roles that contain the required permissions: Cloud SQL Instance User (`roles/cloudsql.instanceUser`) or Cloud SQL Admin (`roles/cloudsql.admin`).
    When receiving an `ExecuteSqlResponse`, always check the `message` and `status` fields within the response body. A successful HTTP status code doesn't guarantee full success of all SQL statements; the `message` and `status` fields will indicate any partial errors or warnings during SQL statement execution.
    Connector
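A hedged sketch of that troubleshooting sequence as pseudo tool calls. `call_tool` is a hypothetical stand-in for your MCP client; the tool names, the `cloudsqlsuperuser` role, and the order of steps come from the description:

```python
import time

def ensure_iam_user(call_tool, instance, user_email):
    # Step 1: does the logged-in user exist as an IAM user on the instance?
    users = call_tool("list_users", instance=instance)
    if not any(u.get("name") == user_email for u in users):
        # Step 2: create the IAM user account.
        call_tool("create_user", instance=instance, name=user_email)
    # Step 3: grant a role with broad permissions; update_user returns a
    # long-running operation, so poll it to completion.
    op = call_tool("update_user", instance=instance, name=user_email,
                   database_roles=["cloudsqlsuperuser"])
    while call_tool("get_operation", operation=op["name"]).get("status") != "DONE":
        time.sleep(5)
```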
  • Smoke-test the MPP payment plumbing end-to-end via this MCP server, for $0.01 USDC. Two-call flow: (1) call with no arguments to receive an MPP `payment_challenge`; (2) pay via MPP and call again with `payment_credential` set to the resulting Authorization header value (e.g. "Payment eyJ...") to receive {paid: true, timestamp, receipt_ref, payment_method}. Uses the exact same `createPayToAddress` + `createMppHandler` verification path as paid product tools (transcribe, summarize), so a green run here means real paid calls will work too. Stateless — no job is created, no database row written. Use this whenever you want to confirm a wallet, the MCP transport, the worker, and the production payment middleware are all healthy without paying a transcribe price. Cost: $0.01 USDC per attempt.
    Connector
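The two-call flow above, as a hedged client-side sketch. `call_tool`, `pay_mpp_challenge`, and the tool name `payment_smoke_test` are all hypothetical stand-ins; only the challenge/credential handshake and the response shape come from the description:

```python
def smoke_test(call_tool, pay_mpp_challenge):
    # Call 1: no arguments; the server answers with an MPP payment_challenge.
    challenge = call_tool("payment_smoke_test")["payment_challenge"]
    # Pay $0.01 USDC via MPP; yields an Authorization header value ("Payment eyJ...").
    credential = pay_mpp_challenge(challenge)
    # Call 2: same tool, now with the credential attached.
    result = call_tool("payment_smoke_test", payment_credential=credential)
    assert result["paid"] is True
    return result  # also carries timestamp, receipt_ref, payment_method
```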
  • Install an app template on a VPS/Cloud site. Starts a background installation; poll get_app_status() for progress. Requires: API key with write scope. VPS or Cloud plan only. (Example arguments below.)
    Args:
    * slug: site identifier.
    * template: app template slug. Available: django, laravel, nextjs, nodejs, nuxtjs, rails, static, forge.
    * app_name: short name for the app (2-50 chars, lowercase alphanumeric + hyphens). Used as subdomain: {app_name}.{site_domain}.
    * db_type: database type. "none", "mysql", or "postgresql" (depends on template).
    * domain: custom domain override (default: {app_name}.{site_domain}).
    * display_name: human-friendly name (default: derived from app_name).
    Returns: {"id": "uuid", "app_name": "forge", "status": "installing", "message": "Installation started. Poll for progress."}
    Errors:
    * FORBIDDEN: plan does not support apps (shared plans).
    * VALIDATION_ERROR: invalid template, app_name, or duplicate name.
    Connector
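Illustrative arguments for that call. The `slug` and `app_name` values are made up; the parameter names and the allowed template/db_type values are from the Args list above:

```python
install_args = {
    "slug": "my-site",        # site identifier (hypothetical)
    "template": "django",     # one of the listed template slugs
    "app_name": "blog-api",   # 2-50 chars, lowercase alphanumeric + hyphens
    "db_type": "postgresql",  # "none", "mysql", or "postgresql"
    # domain and display_name omitted: defaults derive from app_name,
    # so the app would be served at blog-api.{site_domain}.
}
```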
  • Import data into a Cloud SQL instance. If the file doesn't start with `gs://`, the assumption is that the file is stored locally. A local file must be uploaded to Cloud Storage before you can make the actual `import_data` call; you can use the `gcloud` or `gsutil` commands to upload it. Before you upload the file, consider whether you want to use an existing bucket or create a new bucket in the provided project.
    After the file is uploaded to Cloud Storage, the instance service account must have sufficient permissions to read the uploaded file from the Cloud Storage bucket. This can be accomplished as follows:
    1. Use the `get_instance` tool to get the email address of the instance service account: the value of the `serviceAccountEmailAddress` field in the tool's output.
    2. Grant the instance service account the `storage.objectAdmin` role on the provided Cloud Storage bucket, using a command like `gcloud storage buckets add-iam-policy-binding` or a request to the Cloud Storage API. It can take from two to seven minutes or more for the role to be granted and the permissions to propagate to the service account in Cloud Storage. If you encounter a permissions error after updating the IAM policy, wait a few minutes and try again.
    After permissions are granted, you can import the data. We recommend leaving optional parameters empty and using the system defaults. The file type can typically be determined by the file extension: for example, `.sql` for a SQL file or `.csv` for a CSV file.
    The following is a sample SQL `importContext` for MySQL. There is no `database` parameter for MySQL, since the database name is expected to be present in the SQL file. Specify only one URI; no other fields are required outside of `importContext`.
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL"
    }
    ```
    For PostgreSQL, the `database` field is required. The following is a sample PostgreSQL `importContext` with the `database` field specified:
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL",
      "database": "sample-db"
    }
    ```
    The `import_data` tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes (see the polling sketch below).
    Connector
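A minimal polling sketch for the long-running operations that `import_data` (and `update_user`) return. `call_tool` is a hypothetical MCP-client helper; the `DONE` terminal status matches the Cloud SQL Admin API's operation states:

```python
import time

def wait_for_operation(call_tool, operation_name, poll_seconds=10):
    """Poll get_operation until the operation completes."""
    while True:
        op = call_tool("get_operation", operation=operation_name)
        if op.get("status") == "DONE":
            if "error" in op:  # surface partial failures instead of ignoring them
                raise RuntimeError(op["error"])
            return op
        time.sleep(poll_seconds)
```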
  • Returns VoiceFlip MCP server health and version metadata. No authentication required. Use this first to verify the server is reachable from your MCP client.
    Connector
  • Upload a dataset file and return a file reference for use with discovery_analyze. Call this before discovery_analyze, and pass the returned result directly to discovery_analyze as the file_ref argument. Provide exactly one of file_url, file_path, or file_content (see the sketch below).
    Args:
    * file_url: a publicly accessible http/https URL. The server downloads it directly. Best option for remote datasets.
    * file_path: absolute path to a local file. Only works when running the MCP server locally (not the hosted version). Streams the file directly; no size limit.
    * file_content: file contents, base64-encoded. For small files when a URL or path isn't available. Limited by the model's context window.
    * file_name: filename with extension (e.g. "data.csv"), for format detection. Only used with file_content. Default: "data.csv".
    * api_key: Disco API key (disco_...). Optional if the DISCOVERY_API_KEY env var is set.
    Connector
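A hedged sketch of the upload-then-analyze handoff. `call_tool` and the upload tool's name (`discovery_upload` here) are assumptions, since the description names only discovery_analyze; the argument names come from the Args list:

```python
import base64

def analyze_remote(call_tool, url):
    # Remote dataset: let the server download it directly.
    file_ref = call_tool("discovery_upload", file_url=url)
    return call_tool("discovery_analyze", file_ref=file_ref)

def analyze_small_inline(call_tool, path):
    # Small local file against the hosted server: base64 the contents.
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    file_ref = call_tool("discovery_upload", file_content=encoded,
                         file_name="data.csv")  # name drives format detection
    return call_tool("discovery_analyze", file_ref=file_ref)
```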
  • Check server connectivity, authentication status, and database size. When to use: First tool call to verify MCP connection and auth state before collection operations. Examples: - `status()` - check if server is operational, see quote_count, and current auth state
    Connector
  • Authenticate with TronSave and create a server session. Returns `{ sessionId, walletAddress?, expiresAt }`; pass `sessionId` as the `mcp-session-id` header on every subsequent MCP request. `walletAddress` is set only for signature-mode logins. Two modes (see the sketch below):
    1. Wallet signature (preferred for platform tools): call this tool with `signature_timestamp` formatted as `<signature>_<timestamp>`, where `<signature>` must be produced client-side by signing the timestamp message. You may optionally call `tronsave_get_sign_message` to obtain a helper message/timestamp pair.
    2. API key (internal tools): pass `apiKey` (raw key, no prefix).
    Side effect: creates a new session on the server. Wallet signing must happen client-side; never send private keys to the server.
    Connector
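A hedged sketch of assembling the signature-mode argument. The `sign_message` callback is a stand-in for your wallet library (signing stays client-side; the private key never leaves your machine), and the auth tool's own name is not given in the description; only the `<signature>_<timestamp>` format and the `tronsave_get_sign_message` helper come from it:

```python
def build_signature_timestamp(call_tool, sign_message):
    helper = call_tool("tronsave_get_sign_message")  # -> {message, timestamp}
    signature = sign_message(helper["message"])      # client-side wallet signing
    return f"{signature}_{helper['timestamp']}"

# Pass the result as `signature_timestamp`, then send the returned
# sessionId as the `mcp-session-id` header on subsequent requests.
```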
  • Composite server-side investigation tool. Pass a question and the server automatically:
    1. detects intent (aggregation/temporal/ordering/knowledge-update/recall),
    2. queries the entity index for structured facts,
    3. builds a timeline for temporal questions,
    4. retrieves memory chunks with the right scoring profile,
    5. expands context around sparse hits,
    6. derives counts/sums for aggregation,
    7. assesses answerability, and
    8. returns a recommendation.
    Use this as your FIRST tool for any non-trivial question: it does the multi-step investigation that would otherwise take 4-6 individual tool calls. The response includes structured facts, timeline, retrieved chunks, derived results, answerability assessment, and a recommendation for how to answer.
    Connector
  • Connectivity check — returns server version and current timestamp. Use to verify MCP server is reachable before calling other tools.
    Connector
  • Fetch HTTP response headers for a URL. Use when inspecting server configuration, security headers, or caching policies.
    Connector