Glama
127,390 tools. Last updated 2026-05-05 15:21

"How to connect to a SQL Server database" matching MCP tools:

  • Rollback a project to a previous version. ⚠️ WARNING: This reverts schema AND code to the specified commit. Database data is NOT rolled back. Use get_version_history to find the commit SHA of the version you want to rollback to. After rollback, use get_job_status to monitor the redeployment. Rollback is useful when a schema change breaks deployment.
    Connector
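
For the rollback entry above, a minimal sketch of the described get_version_history → rollback → get_job_status flow, assuming the MCP client exposes tools through a hypothetical `call_tool(name, args)` helper; the rollback tool's name (`rollback_project`), the argument keys, and the response fields are illustrative assumptions.

```python
import time

def call_tool(name: str, args: dict) -> dict:
    """Hypothetical MCP tool-call helper; replace with your client's API."""
    raise NotImplementedError

# 1. Find the commit SHA of the version to roll back to.
history = call_tool("get_version_history", {"project_id": "my-project"})
target_sha = history["versions"][-1]["commit_sha"]   # field names are illustrative

# 2. Roll back schema and code to that commit (database data is NOT rolled back).
job = call_tool("rollback_project", {"project_id": "my-project", "commit_sha": target_sha})

# 3. Monitor the redeployment triggered by the rollback.
while True:
    status = call_tool("get_job_status", {"job_id": job.get("job_id")})
    if status.get("state") in ("succeeded", "failed"):
        break
    time.sleep(5)
```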
  • Switch between local and remote DanNet servers on the fly. This tool allows you to change the DanNet server endpoint during runtime without restarting the MCP server. Useful for switching between development (local) and production (remote) servers. Args: server: Server to switch to. Options: - "local": Use localhost:3456 (development server) - "remote": Use wordnet.dk (production server) - Custom URL: Any valid URL starting with http:// or https:// Returns: Dict with status information: - status: "success" or "error" - message: Description of the operation - previous_url: The URL that was previously active - current_url: The URL that is now active Example: # Switch to local development server result = switch_dannet_server("local") # Switch to production server result = switch_dannet_server("remote") # Switch to custom server result = switch_dannet_server("https://my-custom-dannet.example.com")
    Connector
  • Wait for the user to securely connect their cloud account and subscribe to Luther Systems. Polls until credentials appear on the session. 🎯 USE THIS TOOL WHEN: tfdeploy returns an 'auth_required', 'no_credentials', or 'credentials_expired' error. The user needs to visit the connect URL to: 1. Connect their cloud credentials (AWS or GCP) 2. Sign up and subscribe to a Luther Systems plan (required for deployment) This secure connection allows InsideOut to deploy and manage infrastructure in the user's cloud account on their behalf. Credentials are handled securely and only used for deployment and management sessions. WORKFLOW: 1. FIRST: Present the connect URL and explanation to the user (from the tfdeploy error response) 2. THEN: Call this tool to begin polling for credentials 3. The user opens the URL in their browser to subscribe and add credentials 4. When credentials are found, inform the user and call tfdeploy to deploy IMPORTANT: Do NOT call this tool without first showing the connect URL to the user. The user needs to see the URL to complete the process. REQUIRES: session_id from convoopen response (format: sess_v2_...). OPTIONAL: cloud ('aws' or 'gcp'), timeout (integer, seconds to wait, default 300, max 600).
    Connector
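
For the credential-connection entry above, a minimal sketch of the described workflow, assuming a hypothetical `call_tool(name, args)` helper; the polling tool's name (`wait_for_credentials`) and the error/URL field names are illustrative assumptions, while `session_id`, `cloud`, and `timeout` come from the entry.

```python
def call_tool(name: str, args: dict) -> dict:
    """Hypothetical MCP tool-call helper; replace with your client's API."""
    raise NotImplementedError

session = "sess_v2_example"   # session_id from the convoopen response

# 1. tfdeploy fails with an auth error that carries a connect URL.
deploy = call_tool("tfdeploy", {"session_id": session})
if deploy.get("error") in ("auth_required", "no_credentials", "credentials_expired"):
    # 2. FIRST show the connect URL to the user, then start polling.
    print("Open this URL to connect your cloud account:", deploy.get("connect_url"))
    result = call_tool("wait_for_credentials", {   # assumed tool name
        "session_id": session,
        "cloud": "aws",     # optional: 'aws' or 'gcp'
        "timeout": 300,     # seconds; default 300, max 600
    })
    # 3. When credentials are found, retry the deployment.
    if result.get("status") == "success":
        deploy = call_tool("tfdeploy", {"session_id": session})
```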
  • Returns VoiceFlip MCP server health and version metadata. No authentication required. Use this first to verify the server is reachable from your MCP client.
    Connector
  • Import data into a Cloud SQL instance. If the file doesn't start with `gs://`, then the assumption is that the file is stored locally. If the file is local, then it must be uploaded to Cloud Storage before you can make the actual `import_data` call. To upload the file to Cloud Storage, you can use the `gcloud` or `gsutil` commands. Before you upload the file to Cloud Storage, consider whether you want to use an existing bucket or create a new bucket in the provided project. After the file is uploaded to Cloud Storage, the instance service account must have sufficient permissions to read the uploaded file from the Cloud Storage bucket. This can be accomplished as follows: 1. Use the `get_instance` tool to get the email address of the instance service account. From the output of the tool, get the value of the `serviceAccountEmailAddress` field. 2. Grant the instance service account the `storage.objectAdmin` role on the provided Cloud Storage bucket. Use a command like `gcloud storage buckets add-iam-policy-binding` or a request to the Cloud Storage API. It can take from two to seven minutes or more for the role to be granted and the permissions to be propagated to the service account in Cloud Storage. If you encounter a permissions error after updating the IAM policy, then wait a few minutes and try again. After permissions are granted, you can import the data. We recommend that you leave optional parameters empty and use the system defaults. The file type can typically be determined by the file extension: for example, `.sql` indicates a SQL file and `.csv` indicates a CSV file. The following is a sample SQL `importContext` for MySQL. ``` { "uri": "gs://sample-gcs-bucket/sample-file.sql", "kind": "sql#importContext", "fileType": "SQL" } ``` There is no `database` parameter present for MySQL since the database name is expected to be present in the SQL file. Specify only one URI. No other fields are required outside of `importContext`. For PostgreSQL, the `database` field is required. The following is a sample PostgreSQL `importContext` with the `database` field specified. ``` { "uri": "gs://sample-gcs-bucket/sample-file.sql", "kind": "sql#importContext", "fileType": "SQL", "database": "sample-db" } ``` The `import_data` tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
    Connector
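
For the `import_data` entry above, a minimal sketch of the described workflow, assuming the MCP client exposes tools through a hypothetical `call_tool(name, args)` helper; the `importContext` payload is taken from the sample above, while the other argument names and the operation-polling fields are assumptions.

```python
import time

def call_tool(name: str, args: dict) -> dict:
    """Hypothetical MCP tool-call helper; replace with your client's API."""
    raise NotImplementedError

# Upload the local file first, e.g. `gcloud storage cp ./sample-file.sql gs://sample-gcs-bucket/`,
# and grant the instance service account storage.objectAdmin on the bucket as described above.

# Sample PostgreSQL importContext (the `database` field is required for PostgreSQL).
import_context = {
    "uri": "gs://sample-gcs-bucket/sample-file.sql",
    "kind": "sql#importContext",
    "fileType": "SQL",
    "database": "sample-db",
}

# Start the import; the `instance` argument name is illustrative.
operation = call_tool("import_data", {"instance": "my-instance", "importContext": import_context})

# import_data returns a long-running operation: poll it with get_operation until it completes.
while True:
    op = call_tool("get_operation", {"operation": operation.get("name")})
    if op.get("status") == "DONE":   # status value is an assumption
        break
    time.sleep(10)
```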

Matching MCP Servers

  • A secure database query service built on the FastMCP framework that allows users to query MySQL databases using natural language, with comprehensive permission management and security controls.
    License: F · Quality: - · Maintenance: D · Last updated: 14

Matching MCP Connectors

  • Transform any blog post or article URL into ready-to-post social media content for Twitter/X threads, LinkedIn posts, Instagram captions, Facebook posts, and email newsletters. Pay-per-event: $0.07 for all 5 platforms, $0.03 for single platform.

  • Daily world briefing that tells AI assistants what's actually happening right now. Leaders, conflicts, deaths, economic data, holidays. Updated daily so they stop getting current events wrong.

  • Return a ~500-word educational explainer of M/M/c queueing theory: Little's Law, utilization, why averages mislead, how simulation relates to Erlang-C. No inputs. Use this when the user asks a conceptual 'why' or 'how does this work' question rather than asking for a number.
    Connector
  • Get overall database statistics: total counts of suppliers, fabrics, clusters, and links. USE WHEN the user asks: - "how big is your database" / "what's the coverage" / "data overview" - "how many suppliers / fabrics / clusters do you have" - "database size / scale / freshness" - "is the data up to date" - "live counts for MRC data" - "first-time onboarding: 'what can MRC data do for me'" - "数据库多大 / 有多少数据 / 覆盖多少供应商" ("how big is the database / how much data is there / how many suppliers are covered") - "你们的数据规模 / 数据量 / 新鲜度" ("your data scale / data volume / freshness") WORKFLOW: Standalone discovery tool — call this first when a user asks about data scale or freshness. Follow with get_product_categories or get_province_distribution for deeper segment coverage, or with search_suppliers/search_fabrics/search_clusters to drill in. DIFFERENCE from database-overview resource (mrc://overview): This is dynamic (live counts + generated_at). The resource is static (geographic scope, top provinces, data standards). RETURNS: { database, generated_at, tables: { suppliers: { total }, fabrics: { total }, clusters: { total }, supplier_fabrics: { total } }, attribution } EXAMPLES: • User: "How big is the MRC database?" → get_stats({}) • User: "Give me the latest data scale numbers" → get_stats({}) • User: "MRC 数据库有多少供应商和面料" ("How many suppliers and fabrics are in the MRC database?") → get_stats({}) ERRORS & SELF-CORRECTION: • All counts 0 → database query failed or D1 binding lost. Retry once after 5 seconds. If still 0, surface a transport error to the user. • Rate limit 429 → wait 60 seconds; do not retry immediately. AVOID: Do not call this before every tool — only when the user explicitly asks about scale. Do not call it to get per-category counts — use get_product_categories. Do not call it to get geographic scope metadata — use the database-overview resource (mrc://overview), which is static. NOTE: Only reports verified + partially_verified records. Unverified reserve data is excluded from counts. Source: MRC Data (meacheal.ai). Chinese summary: Get overall database statistics (total suppliers, fabrics, industrial clusters, and link records); a dynamic snapshot that includes a generation timestamp.
    Connector
  • Connect to the user's catalogue using a pairing code. IMPORTANT: Most users connect via OAuth (sign-in popup) — if get_profile already works, the user is connected and you do NOT need this tool. Only use this tool when: (1) get_profile returns an authentication error, AND (2) the user shares a code matching the pattern WORD-1234 (e.g., TULIP-3657). Never proactively ask for a pairing code — try get_profile first. If the user does share a code, call this tool immediately without asking for confirmation. Never say "pairing code" to the user — just say "your code" or refer to it naturally.
    Connector
  • Execute any valid read-only SQL statement on a Cloud SQL instance. To support the `execute_sql_readonly` tool, a Cloud SQL instance must meet the following requirements: * The value of `data_api_access` must be set to `ALLOW_DATA_API`. * For a MySQL instance, the database flag `cloudsql_iam_authentication` must be set to `on`. For a PostgreSQL instance, the database flag `cloudsql.iam_authentication` must be set to `on`. * An IAM user account or IAM service account (`CLOUD_IAM_USER` or `CLOUD_IAM_SERVICE_ACCOUNT`) is required to call the `execute_sql_readonly` tool. The tool executes the SQL statements using the privileges of the database user logged in with IAM database authentication. After you use the `create_instance` tool to create an instance, you can use the `create_user` tool to create an IAM user account for the user currently logged in to the project. The `execute_sql_readonly` tool has the following limitations: * If a SQL statement returns a response larger than 10 MB, then the response will be truncated. * The tool has a default timeout of 30 seconds. If a query runs longer than 30 seconds, then the tool returns a `DEADLINE_EXCEEDED` error. * The tool isn't supported for SQL Server. If you receive errors similar to "IAM authentication is not enabled for the instance", then you can use the `get_instance` tool to check the value of the IAM database authentication flag for the instance. If you receive errors like "The instance doesn't allow using executeSql to access this instance", then you can use the `get_instance` tool to check the `data_api_access` setting. When you receive authentication errors: 1. Check if the currently logged-in user account exists as an IAM user on the instance using the `list_users` tool. 2. If the IAM user account doesn't exist, then use the `create_user` tool to create the IAM user account for the logged-in user. 3. If the currently logged-in user doesn't have the proper database user roles, then you can use the `update_user` tool to grant database roles to the user. For example, the `cloudsqlsuperuser` role can provide an IAM user with many required permissions. 4. Check if the currently logged-in user has the correct IAM permissions assigned for the project. You can use the `gcloud projects get-iam-policy [PROJECT_ID]` command to check if the user has the proper IAM roles or permissions assigned for the project. * The user must have the `cloudsql.instances.login` permission to do automatic IAM database authentication. * The user must have the `cloudsql.instances.executeSql` permission to execute SQL statements using the `execute_sql` tool or the `executeSql` API. * Common IAM roles that contain the required permissions: Cloud SQL Instance User (`roles/cloudsql.instanceUser`) or Cloud SQL Admin (`roles/cloudsql.admin`) When receiving an `ExecuteSqlResponse`, always check the `message` and `status` fields within the response body. A successful HTTP status code doesn't guarantee full success of all SQL statements. The `message` and `status` fields will indicate if there were any partial errors or warnings during SQL statement execution.
    Connector
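
A minimal sketch of calling the read-only tool above and checking the response, assuming a hypothetical `call_tool(name, args)` helper; the argument names and the "OK" status value are illustrative assumptions, while the advice to inspect `message` and `status` comes from the entry above.

```python
def call_tool(name: str, args: dict) -> dict:
    """Hypothetical MCP tool-call helper; replace with your client's API."""
    raise NotImplementedError

# Run a read-only statement; the argument names are illustrative.
resp = call_tool("execute_sql_readonly", {
    "instance": "my-instance",
    "database": "mydb",
    "sql": "SELECT schemaname, relname FROM pg_stat_user_tables LIMIT 20",
})

# A successful HTTP status code does not guarantee every statement succeeded:
# always inspect the message and status fields of the ExecuteSqlResponse.
if resp.get("message") or resp.get("status") not in (None, "OK"):
    print("Partial error or warning:", resp.get("status"), resp.get("message"))
```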
  • Check server connectivity, authentication status, and database size. When to use: First tool call to verify MCP connection and auth state before collection operations. Examples: - `status()` - check if server is operational, see quote_count, and current auth state
    Connector
  • Run a read-only SQL query against the database. Call list_tables() and describe_table() first to see available tables and columns. SELECT only, 5s timeout, 1000 row limit, JSON results. Examples: query("SELECT full_name, stars FROM ai_repos ORDER BY stars DESC LIMIT 10") query("SELECT domain, COUNT(*) FROM ai_repos GROUP BY domain ORDER BY 2 DESC")
    Connector
  • Connectivity check — returns server version and current timestamp. Use to verify MCP server is reachable before calling other tools.
    Connector
  • Describe a specific table. ⚠️ WORKFLOW: ALWAYS call this before writing queries that reference a table. Understanding the schema is essential for writing correct SQL queries. 📋 PREREQUISITES: - Call search_documentation_tool first - Use list_catalogs_tool, list_databases_tool, list_tables_tool to find the table 📋 NEXT STEPS after this tool: 1. Use generate_spatial_query_tool to create SQL using the schema 2. Use execute_query_tool to test the query This tool retrieves the schema of a specified table, including column names and types. It is used to understand the structure of a table before querying or analysis. Parameters ---------- catalog : str The name of the catalog. database : str The name of the database. table : str The name of the table. ctx : Context FastMCP context (injected automatically) Returns ------- TableDescriptionOutput A structured object containing the table schema information. - 'schema': The schema of the table, which may include column names, types, and other metadata. Example Usage for LLM: - When user asks for the schema of a specific table. - Example User Queries and corresponding Tool Calls: - User: "What is the schema of the 'users' table in the 'default' database of the 'wherobots' catalog?" - Tool Call: describe_table('wherobots', 'default', 'users') - User: "Describe the buildings table structure" - Tool Call: describe_table('wherobots_open_data', 'overture', 'buildings')
    Connector
  • WHEN: checking server status, loaded D365 version, or custom model path. Triggers: 'status', 'statut', 'is the server ready', 'how many chunks', 'index loaded'. Returns JSON with: status, indexed chunk count, loaded version, custom model path.
    Connector
  • Creates and saves a new use case (reusable analysis). **When to use this tool:** - When the user asks to "save this analysis", "create a use case", "remember this query" - After building a SQL query the user wants to reuse - To capitalize on a recurring business analysis **Available scopes:** - 'member' (default): Personal use case, visible only to you - 'project': Shared with the entire project team (requires project_id) **Best practices:** - Slug: technical identifier in snake_case (e.g., weekly_campaign_performance) - Name: human-readable name (e.g., "Weekly Campaign Performance") - Description: explain the business context and when to use this analysis - SQL template: include the SQL query if it's generic and reusable
    Connector
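
A minimal sketch of saving a use case as described above, assuming a hypothetical `call_tool(name, args)` helper; the tool name `create_use_case` and the exact argument keys are assumptions based on the fields listed in the entry (slug, name, description, SQL template, scope, project_id).

```python
def call_tool(name: str, args: dict) -> dict:
    """Hypothetical MCP tool-call helper; replace with your client's API."""
    raise NotImplementedError

# Save a reusable analysis shared with the whole project team.
use_case = call_tool("create_use_case", {              # assumed tool name
    "slug": "weekly_campaign_performance",              # technical identifier in snake_case
    "name": "Weekly Campaign Performance",              # human-readable name
    "description": "Weekly spend and conversions per campaign; run every Monday.",
    "sql_template": "SELECT campaign_id, SUM(spend) AS spend FROM campaign_stats GROUP BY campaign_id",
    "scope": "project",                                  # 'member' (default) or 'project'
    "project_id": "proj_123",                            # required when scope is 'project'
})
```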
  • Execute any valid SQL statement, including data definition language (DDL), data control language (DCL), data query language (DQL), or data manipulation language (DML) statements, on a Cloud SQL instance. To support the `execute_sql` tool, a Cloud SQL instance must meet the following requirements: * The value of `data_api_access` must be set to `ALLOW_DATA_API`. * For built-in users, `password_secret_version` must be set. * Otherwise, for IAM users: for a MySQL instance, the database flag `cloudsql_iam_authentication` must be set to `on`; for a PostgreSQL instance, the database flag `cloudsql.iam_authentication` must be set to `on`. * After you use the `create_instance` tool to create an instance, you can use the `create_user` tool to create an IAM user account for the user currently logged in to the project. The `execute_sql` tool has the following limitations: * If a SQL statement returns a response larger than 10 MB, then the response will be truncated. * The `execute_sql` tool has a default timeout of 30 seconds. If a query runs longer than 30 seconds, then the tool returns a `DEADLINE_EXCEEDED` error. * The `execute_sql` tool isn't supported for SQL Server. If you receive errors similar to "IAM authentication is not enabled for the instance", then you can use the `get_instance` tool to check the value of the IAM database authentication flag for the instance. If you receive errors like "The instance doesn't allow using executeSql to access this instance", then you can use the `get_instance` tool to check the `data_api_access` setting. When you receive authentication errors: 1. Check if the currently logged-in user account exists as an IAM user on the instance using the `list_users` tool. 2. If the IAM user account doesn't exist, then use the `create_user` tool to create the IAM user account for the logged-in user. 3. If the currently logged-in user doesn't have the proper database user roles, then you can use the `update_user` tool to grant database roles to the user. For example, the `cloudsqlsuperuser` role can provide an IAM user with many required permissions. 4. Check if the currently logged-in user has the correct IAM permissions assigned for the project. You can use the `gcloud projects get-iam-policy [PROJECT_ID]` command to check if the user has the proper IAM roles or permissions assigned for the project. * The user must have the `cloudsql.instances.login` permission to do automatic IAM database authentication. * The user must have the `cloudsql.instances.executeSql` permission to execute SQL statements using the `execute_sql` tool or the `executeSql` API. * Common IAM roles that contain the required permissions: Cloud SQL Instance User (`roles/cloudsql.instanceUser`) or Cloud SQL Admin (`roles/cloudsql.admin`) When receiving an `ExecuteSqlResponse`, always check the `message` and `status` fields within the response body. A successful HTTP status code doesn't guarantee full success of all SQL statements. The `message` and `status` fields will indicate if there were any partial errors or warnings during SQL statement execution.
    Connector
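
A minimal sketch of the authentication-recovery steps listed above, assuming a hypothetical `call_tool(name, args)` helper; the tool names (`list_users`, `create_user`, `update_user`, `execute_sql`) come from the entry, while the argument names and response fields are illustrative assumptions.

```python
def call_tool(name: str, args: dict) -> dict:
    """Hypothetical MCP tool-call helper; replace with your client's API."""
    raise NotImplementedError

me = "alice@example.com"  # the currently logged-in principal

# 1. Check whether the logged-in account exists as an IAM database user.
users = call_tool("list_users", {"instance": "my-instance"})
if me not in [u.get("name") for u in users.get("items", [])]:
    # 2. Create the IAM user account for the logged-in user.
    call_tool("create_user", {"instance": "my-instance", "name": me, "type": "CLOUD_IAM_USER"})

# 3. Grant database roles if needed, e.g. cloudsqlsuperuser.
call_tool("update_user", {"instance": "my-instance", "name": me, "roles": ["cloudsqlsuperuser"]})

# 4. Retry the statement and inspect message/status in the ExecuteSqlResponse.
resp = call_tool("execute_sql", {
    "instance": "my-instance",
    "database": "mydb",
    "sql": "CREATE TABLE IF NOT EXISTS audit_log (id SERIAL PRIMARY KEY, note TEXT)",
})
print(resp.get("status"), resp.get("message"))
```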
  • Execute a SQL query on a site's database. Supports SELECT, INSERT, UPDATE, DELETE, and DDL statements. Results are limited to 1000 rows for SELECT queries. Requires: API key with write scope. Args: slug: Site identifier database: Database name query: SQL query string Returns: {"columns": ["id", "title"], "rows": [[1, "Hello"], ...], "affected_rows": 0, "query_time_ms": 12}
    Connector
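
A minimal sketch of calling the site-database query tool above, assuming a hypothetical `call_tool(name, args)` helper and an assumed tool name `execute_query`; the `slug`, `database`, and `query` arguments and the returned `columns`/`rows`/`affected_rows`/`query_time_ms` fields follow the entry.

```python
def call_tool(name: str, args: dict) -> dict:
    """Hypothetical MCP tool-call helper; replace with your client's API."""
    raise NotImplementedError

# Run a SELECT against a site's database (requires an API key with write scope).
result = call_tool("execute_query", {   # assumed tool name
    "slug": "my-site",
    "database": "content",
    "query": "SELECT id, title FROM posts ORDER BY id DESC LIMIT 10",
})

# Reassemble rows into dicts using the returned column names.
for row in result["rows"]:
    print(dict(zip(result["columns"], row)))

print("affected rows:", result["affected_rows"], "query time (ms):", result["query_time_ms"])
```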
  • Compile ExoQuery Kotlin code and EXECUTE it against a SQLite database with the provided schema. ExoQuery is a compile-time SQL query builder that translates Kotlin DSL expressions into SQL. WHEN TO USE: When you need to verify ExoQuery produces correct results against actual data. INPUT REQUIREMENTS: - Complete Kotlin code (same requirements as validateExoquery) - SQL schema with CREATE TABLE and INSERT statements for test data - Data classes MUST exactly match the schema table structure - Column names in data classes must match schema (use @SerialName for snake_case columns) - Must include one or more .runSample() calls in main() to trigger SQL generation and execution (note that .runSample() is NOT for real production use; use .runOn(database) instead) OUTPUT FORMAT: Returns one or more JSON objects, each on its own line. Each object can be: 1. SQL with output (query executed successfully): {"sql": "SELECT u.name FROM \"User\" u", "output": "[(name=Alice), (name=Bob)]"} 2. Output only (e.g., print statements, intermediate results): {"output": "Before: [(id=1, title=Ion Blend Beans)]"} 3. Error output (runtime errors, exceptions): {"outputErr": "java.sql.SQLException: Table \"USERS\" not found"} Multiple results appear when code has multiple queries or print statements: {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans, unit_price=32.00, in_stock=25)]"} {"output": "Before:"} {"sql": "INSERT INTO \"InventoryItem\" (title, unit_price, in_stock) VALUES (?, ?, ?)", "output": "Rows affected: 1"} {"output": "After:"} {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans, unit_price=32.00, in_stock=25), (id=2, title=Luna Fuel Flask, unit_price=89.50, in_stock=6)]"} Compilation errors return the same format as validateExoquery: { "errors": { "File.kt": [ { "interval": {"start": {"line": 12, "ch": 10}, "end": {"line": 12, "ch": 15}}, "message": "Type mismatch: inferred type is String but Int was expected", "severity": "ERROR", "className": "ERROR" } ] } } Runtime errors can have the following format: { "errors" : { "File.kt" : [ ] }, "exception" : { "message" : "[SQLITE_ERROR] SQL error or missing database (no such table: User)", "fullName" : "org.sqlite.SQLiteException", "stackTrace" : [ { "className" : "org.sqlite.core.DB", "methodName" : "newSQLException", "fileName" : "DB.java", "lineNumber" : 1179 }, ...] }, "text" : "<outStream><outputObject>\n{\"sql\": \"SELECT x.id, x.name, x.age FROM User x\"}\n</outputObject>\n</outStream>" } If a SQL query was generated before the error, it will appear in the "text" field of the output stream.
EXAMPLE INPUT CODE: ```kotlin import io.exoquery.* import kotlinx.serialization.Serializable import kotlinx.serialization.SerialName @Serializable data class User(val id: Int, val name: String, val age: Int) @Serializable data class Order(val id: Int, @SerialName("user_id") val userId: Int, val total: Int) val userOrders = sql.select { val u = from(Table<User>()) val o = join(Table<Order>()) { o -> o.userId == u.id } Triple(u.name, o.total, u.age) } fun main() = userOrders.buildPrettyFor.Sqlite().runSample() ``` EXAMPLE INPUT SCHEMA: ```sql CREATE TABLE "User" (id INT, name VARCHAR(100), age INT); CREATE TABLE "Order" (id INT, user_id INT, total INT); INSERT INTO "User" (id, name, age) VALUES (1, 'Alice', 30), (2, 'Bob', 25); INSERT INTO "Order" (id, user_id, total) VALUES (1, 1, 100), (2, 1, 200), (3, 2, 150); ``` EXAMPLE SUCCESS OUTPUT: {"sql": "SELECT u.name AS first, o.total AS second, u.age AS third FROM \"User\" u INNER JOIN \"Order\" o ON o.user_id = u.id", "output": "[(first=Alice, second=100, third=30), (first=Alice, second=200, third=30), (first=Bob, second=150, third=25)]"} EXAMPLE WITH MULTIPLE OPERATIONS (insert with before/after check): {"output": "Before:"} {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans)]"} {"sql": "INSERT INTO \"InventoryItem\" (title, unit_price, in_stock) VALUES (?, ?, ?)", "output": ""} {"output": "After:"} {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans), (id=2, title=Luna Fuel Flask)]"} EXAMPLE RUNTIME ERROR (if a user divided by zero): {"outputErr": "Exception in thread "main" java.lang.ArithmeticException: / by zero"} KEY PATTERNS: (See validateExoquery for complete pattern reference) Summary of most common patterns: - Filter: sql { Table<T>().filter { x -> x.field == value } } - Select: sql.select { val x = from(Table<T>()); where { ... }; x } - Join: sql.select { val a = from(Table<A>()); val b = join(Table<B>()) { b -> b.aId == a.id }; Pair(a, b) } - Left join: joinLeft(Table<T>()) { ... } returns nullable - Insert: sql { insert<T> { setParams(obj).excluding(id) } } - Update: sql { update<T>().set { it.field to value }.where { it.id == x } } - Delete: sql { delete<T>().where { it.id == x } } SCHEMA RULES: - Table names should match data class names (case-sensitive, use quotes for exact match) - Column names must match @SerialName values or property names - Include realistic test data to verify query logic - Sqlite database syntax (mostly compatible with standard SQL) COMMON PATTERNS: - JSON columns: Use VARCHAR for storage, @SqlJsonValue on the nested data class - Auto-increment IDs: Use INTEGER PRIMARY KEY - Nullable columns: Use Type? in Kotlin, allow NULL in schema
    Connector