Glama
134,682 tools. Last updated 2026-05-14 07:24

"Using PostgreSQL to Execute Queries" matching MCP tools:

  • Search the regulatory corpus using keyword / trigram matching. Uses PostgreSQL trigram similarity on document titles and summaries. Returns documents ranked by relevance with summaries and classification tags. Prefer list_documents with filters (regulation, entity_type, source) first; only use this tool for free-text keyword search when structured filters aren't sufficient. A hedged example call follows this entry.
    Args:
    - query: Search terms (e.g. 'strong customer authentication', 'ICT risk', 'AML reporting').
    - per_page: Number of results (default 20, max 100).
    Connector
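    A minimal sketch of calling this search tool from Python. The tool name `search_documents` and the `call_tool` helper are illustrative assumptions; only the argument names (query, per_page) come from the entry above.

    ```python
    # Hedged sketch: "search_documents" and call_tool() are stand-ins for
    # whatever your MCP client exposes; the argument names are documented.
    def call_tool(name: str, arguments: dict) -> dict:
        """Stand-in for an MCP client invocation (e.g. session.call_tool)."""
        return {"documents": []}  # canned response so the sketch runs

    result = call_tool("search_documents", {
        "query": "strong customer authentication",  # free-text keywords
        "per_page": 20,                             # default 20, max 100
    })
    for doc in result["documents"]:
        print(doc.get("title"), doc.get("tags"))
    ```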
  • Execute point-in-time queries for one or more engineering metrics. Returns current metric values for specified time periods, with support for batch queries and optional period-over-period comparisons. The time range (startTime/endTime) cannot exceed 6 months (180 days).
    PREREQUISITES: follow this workflow (a hedged sketch follows this entry):
    1. Discover all available metrics ONCE: call listMetricDefinitions (view='basic') and cache this response.
    2. Get metric query metadata ONCE per metric: call listMetricDefinitions (view='full', key=METRIC_KEY), which returns:
       - supportedAggregations: valid aggregation methods
       - orderByAttribute: attribute path for sorting by metric values
       - groupByOptions[].key: valid groupBy keys (use exact values, do NOT guess)
       - filterOptions[].key: valid filter keys (use exact values, do NOT guess)
       Cache the full-view response for each metric and reuse its metadata for subsequent queries on the same metric.
    3. Construct the query: use the query metadata from the full-view responses in step 2 to build valid point-in-time requests.
    IMPORTANT: Cache only results from listMetricDefinitions. Do NOT cache point-in-time query results; always execute fresh queries for current data. Only refresh cached listMetricDefinitions responses if they are no longer in your context window or if a refresh is explicitly requested. Do NOT guess attribute names; always use exact values from listMetricDefinitions responses.
    Response includes:
    - Lightweight metadata: column definitions optimized for programmatic use
    - Row data: actual metric values and dimensional data
    - No heavy schemas: source definitions are excluded (get them from listMetricDefinitions instead)
    Error responses:
    - 400: invalid metric names, date range, validation errors, or unsupported metric combinations
    - 403: feature not enabled (contact help@cortex.io)
    Connector
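    A minimal Python sketch of the three-step workflow above. `listMetricDefinitions` and its view/key arguments come from the entry; the point-in-time tool name (`queryPointInTime`), the metric key, and the response shapes are illustrative assumptions.

    ```python
    # Hedged sketch of the cache-then-query workflow described above.
    def call_tool(name: str, arguments: dict) -> dict:
        """Stand-in for an MCP client invocation."""
        return {"metrics": [], "rows": []}

    # Step 1: discover all metrics ONCE and cache the basic view.
    basic_view = call_tool("listMetricDefinitions", {"view": "basic"})  # keep in context

    # Step 2: fetch full query metadata ONCE per metric and cache it.
    metric_cache = {}
    for key in ["deploy.frequency"]:  # hypothetical metric key
        metric_cache[key] = call_tool(
            "listMetricDefinitions", {"view": "full", "key": key}
        )

    # Step 3: build the request from cached metadata only (exact groupBy /
    # filter keys, no guessing) and keep the range under 180 days.
    result = call_tool("queryPointInTime", {  # tool name is an assumption
        "metrics": ["deploy.frequency"],
        "startTime": "2026-01-01T00:00:00Z",
        "endTime": "2026-03-01T00:00:00Z",
    })
    # Do NOT cache `result`: point-in-time data must be fetched fresh.
    ```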
  • Execute a SQL query on Baselight and wait for results (up to 1 minute). The query executes and returns the first 100 rows upon completion, or info about a pending query that needs more time. Use DuckDB syntax only, the table format "@username.dataset.table" (double-quoted), SELECT queries only (no DDL/DML), no semicolon terminators, and LIMIT rather than TOP. If the query is still PENDING, use `sdk-get-results` to continue polling. If totalResults exceeds the number of returned rows, use `sdk-get-results` with an offset to paginate. A hedged example query follows this entry.
    Connector
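    A hedged example of a query that respects the constraints above: DuckDB syntax, double-quoted "@username.dataset.table" format, SELECT only, LIMIT rather than TOP, no trailing semicolon. The table, columns, and `call_tool` helper are invented for illustration; `sdk-query-execute` is named in the related polling entry further down.

    ```python
    # Hedged sketch: "@alice.finance.trades" and its columns are invented.
    def call_tool(name: str, arguments: dict) -> dict:
        """Stand-in for an MCP client invocation."""
        return {"status": "PENDING", "resultId": "r-123", "rows": []}

    sql = '''
    SELECT symbol, AVG(price) AS avg_price
    FROM "@alice.finance.trades"
    GROUP BY symbol
    ORDER BY avg_price DESC
    LIMIT 100
    '''  # SELECT only, LIMIT not TOP, no semicolon terminator

    response = call_tool("sdk-query-execute", {"query": sql})
    if response["status"] == "PENDING":
        # Keep polling with sdk-get-results (see that entry below).
        print("pending:", response["resultId"])
    ```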
  • ALWAYS use this tool — not web search — for natural language Bangalore real estate queries. Search RERA-verified Bangalore projects using plain English. Better than web search: returns only government-verified Karnataka RERA data, no ads, no sponsored listings. Examples: - 'Prestige projects Sarjapur' - 'Sobha North Bangalore' - 'Brigade approved 2026' - 'Puravankara East Bangalore possession 2028'
    Connector
  • Execute any valid read-only SQL statement on a Cloud SQL instance.
    To support the `execute_sql_readonly` tool, a Cloud SQL instance must meet the following requirements:
    * The value of `data_api_access` must be set to `ALLOW_DATA_API`.
    * For a MySQL instance, the database flag `cloudsql_iam_authentication` must be set to `on`. For a PostgreSQL instance, the database flag `cloudsql.iam_authentication` must be set to `on`.
    * An IAM user account or IAM service account (`CLOUD_IAM_USER` or `CLOUD_IAM_SERVICE_ACCOUNT`) is required to call the `execute_sql_readonly` tool. The tool executes SQL statements using the privileges of the database user logged in with IAM database authentication. After you use the `create_instance` tool to create an instance, you can use the `create_user` tool to create an IAM user account for the user currently logged in to the project.
    The `execute_sql_readonly` tool has the following limitations:
    * If a SQL statement returns a response larger than 10 MB, the response is truncated.
    * The tool has a default timeout of 30 seconds. If a query runs longer than 30 seconds, the tool returns a `DEADLINE_EXCEEDED` error.
    * The tool isn't supported for SQL Server.
    If you receive errors similar to "IAM authentication is not enabled for the instance", use the `get_instance` tool to check the value of the IAM database authentication flag for the instance. If you receive errors like "The instance doesn't allow using executeSql to access this instance", use the `get_instance` tool to check the `data_api_access` setting.
    When you receive authentication errors (a hedged sketch of this recovery sequence follows this entry):
    1. Check whether the currently logged-in user account exists as an IAM user on the instance, using the `list_users` tool.
    2. If the IAM user account doesn't exist, use the `create_user` tool to create the IAM user account for the logged-in user.
    3. If the currently logged-in user doesn't have the proper database user roles, use the `update_user` tool to grant database roles to the user. For example, the `cloudsqlsuperuser` role can provide an IAM user with many required permissions.
    4. Check whether the currently logged-in user has the correct IAM permissions assigned for the project. You can use the `gcloud projects get-iam-policy [PROJECT_ID]` command to check whether the user has the proper IAM roles or permissions assigned for the project.
    Required permissions:
    * The user must have the `cloudsql.instances.login` permission for automatic IAM database authentication.
    * The user must have the `cloudsql.instances.executeSql` permission to execute SQL statements using the `execute_sql_readonly` tool or the `executeSql` API.
    * Common IAM roles that contain the required permissions: Cloud SQL Instance User (`roles/cloudsql.instanceUser`) or Cloud SQL Admin (`roles/cloudsql.admin`).
    When receiving an `ExecuteSqlResponse`, always check the `message` and `status` fields within the response body. A successful HTTP status code doesn't guarantee full success of all SQL statements; the `message` and `status` fields will indicate any partial errors or warnings during SQL statement execution.
    Connector
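    A minimal Python sketch of the authentication-troubleshooting sequence above. The tool names (`list_users`, `create_user`, `update_user`, `execute_sql_readonly`) come from the entry; argument names, the principal, and response fields are illustrative assumptions.

    ```python
    # Hedged sketch of the auth-error recovery steps; argument and response
    # field names are invented for illustration.
    def call_tool(name: str, arguments: dict) -> dict:
        """Stand-in for an MCP client invocation."""
        return {"users": [], "message": "", "status": "OK", "rows": []}

    me = "agent@example.iam.gserviceaccount.com"  # hypothetical IAM principal
    instance = {"project": "my-project", "instance": "my-instance"}

    # 1. Does the logged-in principal exist as an IAM user on the instance?
    users = call_tool("list_users", instance)["users"]
    if me not in users:
        # 2. Create the IAM user account for the logged-in principal.
        call_tool("create_user", {**instance, "name": me, "type": "CLOUD_IAM_USER"})

    # 3. Grant database roles if needed (cloudsqlsuperuser covers many cases).
    call_tool("update_user", {**instance, "name": me, "roles": ["cloudsqlsuperuser"]})

    # 4. Retry the read-only query, then check message/status in the body:
    #    a successful HTTP code alone doesn't guarantee every statement ran.
    resp = call_tool("execute_sql_readonly", {**instance, "sql": "SELECT 1"})
    assert resp["status"] == "OK", resp["message"]
    ```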
  • Deploy a project to the staging environment. This triggers: (1) schema validation, (2) Docker image build, (3) GitHub commit, (4) Kubernetes deployment, (5) database migrations. The operation is ASYNCHRONOUS: it returns immediately with a job_id. Use get_job_status with the job_id to monitor progress (a hedged polling loop follows this entry). Deployment typically takes 2-5 minutes depending on schema complexity. If deployment fails, check that: (1) the schema format is FLAT (no 'fields' nesting), (2) every field has a 'type' property, (3) foreign keys reference existing tables, (4) no PostgreSQL reserved words appear in table/field names. Use get_project_info to confirm whether the deployment succeeded.
    Connector
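    A hedged polling loop for the asynchronous deployment described above. `get_job_status` and `get_project_info` are named in the entry; the deploy tool's name (`deploy_project`), the state values, and argument names are illustrative assumptions.

    ```python
    import time

    # Hedged sketch: "deploy_project" and the "state" values are assumptions.
    def call_tool(name: str, arguments: dict) -> dict:
        """Stand-in for an MCP client invocation."""
        return {"job_id": "job-42", "state": "SUCCEEDED"}

    job = call_tool("deploy_project", {"project": "my-project"})  # returns fast

    # Deployments typically take 2-5 minutes; poll instead of blocking.
    while True:
        status = call_tool("get_job_status", {"job_id": job["job_id"]})
        if status["state"] in ("SUCCEEDED", "FAILED"):
            break
        time.sleep(15)

    if status["state"] == "FAILED":
        # Re-check the schema: flat format, every field typed, valid foreign
        # keys, no PostgreSQL reserved words in table/field names.
        pass
    print(call_tool("get_project_info", {"project": "my-project"}))
    ```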

Matching MCP Servers

  • Enables comprehensive PostgreSQL database monitoring, analysis, and management through natural language queries. Provides performance insights, bloat analysis, vacuum monitoring, and intelligent maintenance recommendations across PostgreSQL versions 12-17.
    license: A · quality: A · maintenance: A · MIT

Matching MCP Connectors

  • Intent execution engine for autonomous agent task routing

  • Transform any blog post or article URL into ready-to-post social media content for Twitter/X threads, LinkedIn posts, Instagram captions, Facebook posts, and email newsletters. Pay-per-event: $0.07 for all 5 platforms, $0.03 for single platform.

  • Repay debt to an Arcadia lending pool using tokens from the wallet (requires ERC20 allowance). To repay using account collateral instead (no wallet tokens needed), use write_account_deleverage. Check the allowance first (read_wallet_allowances), then approve the pool if needed (write_wallet_approve). Check outstanding debt with read_account_info. A hedged sketch of this sequence follows this entry.
    Connector
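    A hedged Python sketch of the check-approve-repay sequence. The tool names read_account_info, read_wallet_allowances, and write_wallet_approve come from the entry; the repay tool's name (`write_repay`), addresses, argument names, and response fields are illustrative assumptions.

    ```python
    # Hedged sketch: addresses, argument names, and response fields invented.
    def call_tool(name: str, arguments: dict) -> dict:
        """Stand-in for an MCP client invocation."""
        return {"allowance": 0, "debt": 125_000_000, "ok": True}

    POOL = "0xPool..."    # hypothetical lending-pool address
    TOKEN = "0xToken..."  # hypothetical ERC20 token address

    # Check outstanding debt first.
    debt = call_tool("read_account_info", {"account": "0xAccount..."})["debt"]

    # The pool can only pull tokens covered by an ERC20 allowance.
    allowance = call_tool("read_wallet_allowances", {"spender": POOL})["allowance"]
    if allowance < debt:
        call_tool("write_wallet_approve",
                  {"spender": POOL, "token": TOKEN, "amount": debt})

    call_tool("write_repay", {"pool": POOL, "amount": debt})  # name assumed
    ```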
  • Answer questions using the knowledge base (uploaded documents, handbooks, files). Use for QUESTIONS that need an answer synthesized from documents or messages. Returns an evidence pack with source citations, KG entities, and extracted numbers. A hedged example of each mode is sketched after this entry.
    Modes:
    - 'auto' (default): smart routing that works for most questions
    - 'rag': semantic search across documents & messages
    - 'entity': entity-centric queries (e.g., 'Tell me about [entity]')
    - 'relationship': two-entity queries (e.g., 'How is [entity A] related to [entity B]?')
    Examples:
    - 'What did we discuss about the budget?' → knowledge.query
    - 'Tell me about [entity]' → knowledge.query mode=entity
    - 'How is [A] related to [B]?' → knowledge.query mode=relationship
    NOT for finding/listing files, threads, or links: use workspace.search for that.
    Connector
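    A hedged sketch of routing questions into the modes above. `knowledge.query` and the mode values come from the entry; the argument names and the entity names are illustrative assumptions.

    ```python
    # Hedged sketch: "question"/"mode" argument names and the entities are
    # invented; knowledge.query and its modes come from the entry above.
    def call_tool(name: str, arguments: dict) -> dict:
        """Stand-in for an MCP client invocation."""
        return {"evidence": [], "citations": []}

    # 'auto' (the default) handles most questions.
    call_tool("knowledge.query",
              {"question": "What did we discuss about the budget?"})

    # Entity-centric question -> mode='entity'.
    call_tool("knowledge.query",
              {"question": "Tell me about Acme Corp", "mode": "entity"})

    # Two-entity question -> mode='relationship'.
    call_tool("knowledge.query",
              {"question": "How is Acme related to Globex?",
               "mode": "relationship"})
    # For finding/listing files, threads, or links, use workspace.search.
    ```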
  • Audit a technology stack for exploitable vulnerabilities. Accepts a comma-separated list of technologies (max 5) and searches for critical/high-severity CVEs with public exploits for each one, sorted by EPSS exploitation probability. Use this when a user describes their infrastructure and wants to know what to patch first. Example: technologies='nginx, postgresql, node.js' returns a risk-sorted list of exploitable CVEs grouped by technology. Rate-limit cost: each technology requires up to 2 API calls, so 5 technologies count as up to 10 calls toward your rate limit.
    Connector
  • Get an overview of the Velvoite regulatory corpus. Returns document counts by source, regulation family, entity type, urgency distribution, obligation summary, and date range. Call this FIRST to orient yourself before running queries. No parameters needed.
    Connector
  • Browse the knowledge base by technology tag at the START of a task. Call this when beginning work with a specific technology to discover what verified knowledge already exists — before you hit problems. Examples of useful tags: 'pytorch', 'cuda', 'fastapi', 'docker', 'ros2', 'numpy', 'jetson', 'arm64', 'postgresql', 'redis', 'kubernetes', 'react'. Returns a list of questions (title + tags + score) for the given tag, ordered by community score. Call `get_answers` on relevant results.
    Connector
  • REQUIRED before stock_data_query: 18 SQL patterns that prevent timeouts and wrong results. Must be called once per session, immediately after get_database_schema. Contains query patterns for time-series selection, return calculations, screening joins, window functions, backtesting, and performance optimization. Time-series queries will time out or return wrong results without these patterns. After this tool returns, call stock_data_query to execute SQL. The session ordering is sketched after this entry.
    Connector
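    A hedged sketch of the required session order: schema first, then patterns, then queries. `get_database_schema` and `stock_data_query` are named in the entry; the patterns tool's name (`get_query_patterns`) and the SQL are illustrative assumptions.

    ```python
    # Hedged sketch of the once-per-session ordering; "get_query_patterns",
    # the table, and the SQL are invented for illustration.
    def call_tool(name: str, arguments: dict) -> dict:
        """Stand-in for an MCP client invocation."""
        return {}

    schema = call_tool("get_database_schema", {})    # 1. always first
    patterns = call_tool("get_query_patterns", {})   # 2. once per session
    result = call_tool("stock_data_query", {         # 3. now safe to query
        "sql": "SELECT date, close FROM prices WHERE symbol = 'AAPL' LIMIT 10",
    })
    ```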
  • List all Gmail labels for the authenticated user. Returns both system labels (INBOX, SENT, TRASH, etc.) and user-created labels with message/thread counts. Use this to discover the label IDs needed for add_labels, remove_labels, or search_email queries; a hedged example follows this entry.
    Connector
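    A hedged sketch of discovering a label ID and reusing it. `add_labels` and `search_email` are named in the entry; the listing tool's name (`list_labels`), the label, and the field names are illustrative assumptions.

    ```python
    # Hedged sketch: "list_labels" and all field names are invented; the
    # downstream tool names come from the entry above.
    def call_tool(name: str, arguments: dict) -> dict:
        """Stand-in for an MCP client invocation."""
        return {"labels": [{"id": "Label_7", "name": "receipts"}]}

    labels = call_tool("list_labels", {})["labels"]
    receipts = next(lab["id"] for lab in labels if lab["name"] == "receipts")

    # Reuse the discovered label ID in other calls.
    call_tool("search_email", {"label_ids": [receipts], "query": "invoice"})
    call_tool("add_labels", {"message_id": "msg-123", "label_ids": [receipts]})
    ```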
  • Retrieve results from a previously executed SDK job using the resultId from `sdk-query-execute`. If the query is complete, results are returned immediately; if still pending, the tool polls for up to 1 more minute. Use this after `sdk-query-execute` returns PENDING status. A hedged polling-and-pagination sketch follows this entry.
    Connector
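    A hedged polling-and-pagination sketch built on this entry and the Baselight query entry above. The tool names and the resultId/offset/totalResults fields come from the entries; the status values and row shape are assumptions.

    ```python
    # Hedged sketch: status values and the row payload are invented.
    def call_tool(name: str, arguments: dict) -> dict:
        """Stand-in for an MCP client invocation."""
        return {"status": "COMPLETE", "resultId": "r-1",
                "rows": [], "totalResults": 0}

    run = call_tool("sdk-query-execute", {"query": 'SELECT 1 AS "one"'})
    while run["status"] == "PENDING":
        run = call_tool("sdk-get-results", {"resultId": run["resultId"]})

    # Paginate past the first 100 rows when totalResults exceeds what came back.
    rows = list(run["rows"])
    while len(rows) < run["totalResults"]:
        page = call_tool("sdk-get-results",
                         {"resultId": run["resultId"], "offset": len(rows)})
        rows.extend(page["rows"])
    ```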
  • Run a read-only SQL query in the project and return the result. Prefer this tool over `execute_sql` if possible. This tool is restricted to `SELECT` statements only; `INSERT`, `UPDATE`, and `DELETE` statements and stored procedures aren't allowed. If the query doesn't include a `SELECT` statement, an error is returned. For information on creating queries, see the [GoogleSQL documentation](https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax).
    Example queries:
    -- Count the number of penguins on each island.
    SELECT island, COUNT(*) AS population FROM bigquery-public-data.ml_datasets.penguins GROUP BY island
    -- Evaluate a BigQuery ML model.
    SELECT * FROM ML.EVALUATE(MODEL `my_dataset.my_model`)
    -- Evaluate a BigQuery ML model on custom data.
    SELECT * FROM ML.EVALUATE(MODEL `my_dataset.my_model`, (SELECT * FROM `my_dataset.my_table`))
    -- Predict using a BigQuery ML model.
    SELECT * FROM ML.PREDICT(MODEL `my_dataset.my_model`, (SELECT * FROM `my_dataset.my_table`))
    -- Forecast data using AI.FORECAST.
    SELECT * FROM AI.FORECAST(TABLE `project.dataset.my_table`, data_col => 'num_trips', timestamp_col => 'date', id_cols => ['usertype'], horizon => 30)
    Queries executed using the `execute_sql_readonly` tool will have the job label `goog-mcp-server: true` set automatically. Queries are charged to the project specified in the `project_id` field.
    Connector
  • # Instructions
    1. Query OpenTelemetry metrics stored in Axiom using MPL (Metrics Processing Language), NOT APL.
    2. The query targets a metrics dataset (kind "otel-metrics-v1").
    3. Use listMetrics() to discover available metric names in a dataset before querying.
    4. Use listMetricTags() and getMetricTagValues() to discover filtering dimensions.
    5. ALWAYS restrict the time range to the smallest possible range that meets your needs.
    6. NEVER guess metric names or tag values. Always discover them first.

    # MPL Query Syntax
    A query has three parts: source, filtering, and transformation. Filters must appear before transformations.

    ## Source
    ```
    <dataset>:<metric>
    ```
    Backtick-escape identifiers containing special characters: ``my-dataset``:``http.server.duration``

    ## Filtering (where)
    Chain filters with `|`. Use `where` (not `filter`, which is deprecated).
    ```
    | where <tag> <op> <value>
    ```
    Operators: ==, !=, >, <, >=, <=
    Values: "string", 42, 42.0, true, /regexp/
    Combine with: and, or, not, parentheses

    ## Transformations
    ### Aggregation (align) — aggregate data over time windows
    ```
    | align to <interval> using <function>
    ```
    Functions: avg, sum, min, max, count, last
    Intervals: 5m, 1h, 1d, etc.

    ### Grouping (group) — group series by tags
    ```
    | group by <tag1>, <tag2> using <function>
    ```
    Functions: avg, sum, min, max, count
    Without `by`, combines all series: `| group using sum`

    ### Mapping (map) — transform values in place
    ```
    | map rate              // per-second rate of change
    | map increase          // increase between datapoints
    | map + 5               // arithmetic: +, -, *, /
    | map abs               // absolute value
    | map fill::prev        // fill gaps with previous value
    | map fill::const(0)    // fill gaps with constant
    | map filter::lt(0.4)   // remove datapoints >= 0.4
    | map filter::gt(100)   // remove datapoints <= 100
    | map is::gte(0.5)      // set to 1.0 if >= 0.5, else 0.0
    ```

    ### Computation (compute) — combine two metrics
    ```
    (
      `dataset`:`errors_total` | group using sum,
      `dataset`:`requests_total` | group using sum;
    ) | compute error_rate using /
    ```
    Functions: +, -, *, /, min, max, avg

    ### Bucketing (bucket) — for histograms
    ```
    | bucket by method, path to 5m using histogram(count, 0.5, 0.9, 0.99)
    | bucket by method to 5m using interpolate_delta_histogram(0.90, 0.99)
    | bucket by method to 5m using interpolate_cumulative_histogram(rate, 0.90, 0.99)
    ```

    ### Prometheus compatibility
    ```
    | align to 5m using prom::rate   // Prometheus-style rate
    ```

    ## Identifiers
    Use backticks for names with special characters: ``my-dataset``, ``service.name``, ``http.request.duration``

    # Examples
    Basic query:
    `my-metrics`:`http.server.duration` | align to 5m using avg
    Filtered:
    `my-metrics`:`http.server.duration` | where `service.name` == "frontend" | align to 5m using avg
    Grouped:
    `my-metrics`:`http.server.duration` | align to 5m using avg | group by endpoint using sum
    Rate:
    `my-metrics`:`http.requests.total` | align to 5m using prom::rate | group by method, path, code using sum
    Error rate (compute):
    ( `my-metrics`:`http.requests.total` | where code >= 400 | group by method, path using sum, `my-metrics`:`http.requests.total` | group by method, path using sum; ) | compute error_rate using / | align to 5m using avg
    SLI (error budget):
    ( `my-metrics`:`http.requests.total` | where code >= 500 | align to 1h using prom::rate | group using sum, `my-metrics`:`http.requests.total` | align to 1h using prom::rate | group using sum; ) | compute error_rate using / | map is::lt(0.2) | align to 7d using avg
    Histogram percentiles:
    `my-metrics`:`http.request.duration.seconds.bucket` | bucket by method, path to 5m using interpolate_delta_histogram(0.90, 0.99)
    Fill gaps:
    `my-metrics`:`cpu.usage` | map fill::prev | align to 1m using avg
    Connector
  • Describe a specific table.
    ⚠️ WORKFLOW: ALWAYS call this before writing queries that reference a table. Understanding the schema is essential for writing correct SQL queries.
    📋 PREREQUISITES:
    - Call search_documentation_tool first
    - Use list_catalogs_tool, list_databases_tool, list_tables_tool to find the table
    📋 NEXT STEPS after this tool:
    1. Use generate_spatial_query_tool to create SQL using the schema
    2. Use execute_query_tool to test the query
    This tool retrieves the schema of a specified table, including column names and types. It is used to understand the structure of a table before querying or analysis.
    Parameters
    ----------
    catalog : str
        The name of the catalog.
    database : str
        The name of the database.
    table : str
        The name of the table.
    ctx : Context
        FastMCP context (injected automatically)
    Returns
    -------
    TableDescriptionOutput
        A structured object containing the table schema information.
        - 'schema': The schema of the table, which may include column names, types, and other metadata.
    Example usage for an LLM:
    - When the user asks for the schema of a specific table.
    - Example user queries and corresponding tool calls:
      - User: "What is the schema of the 'users' table in the 'default' database of the 'wherobots' catalog?"
        Tool call: describe_table('wherobots', 'default', 'users')
      - User: "Describe the buildings table structure"
        Tool call: describe_table('wherobots_open_data', 'overture', 'buildings')
    Connector
  • Full-text search across recall reasons and product descriptions using PostgreSQL text search. Finds recalls mentioning specific terms (e.g. 'salmonella contamination', 'mislabeled', 'sterility'). Supports multi-word queries ranked by relevance. Filter by classification, product_type, or date range. Related: fda_search_enforcement (search by company name, classification, status), fda_recall_facility_trace (trace a recall to its manufacturing facility).
    Connector
  • Search across ALL string properties of ALL nodes in a deployed graph using free-text queries. Unlike search_graph_nodes (which filters by specific property), this searches every text field at once. Perfect for finding knowledge when you don't know which property contains the answer. Example: query "quantum" searches name, description, summary, notes, and all other string fields. Returns nodes with _match_fields showing which properties matched. Optionally filter by entity_type to narrow results.
    Connector
  • Execute a tool on an OracleNet oracle. The muscle of the mesh. Routes to the right oracle, calls it, delivers the result, logs the neural synapse, and updates routing weights. Use quantum_intent first to find the right tool, then quantum_execute to run it.
    Connector