Glama
127,309 tools. Last updated 2026-05-05 14:03

"How to import Shopify sales data into a database and create entries in Craft CMS" matching MCP tools:

  • Create up to 50 tags in a project in one call. Returns per-item results (created / skipped). Duplicates are matched case-insensitively on name. Confirm with the user before calling — this mutates project data.
    Connector
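    A sketch of a batch call, assuming a hypothetical tool name create_tags and an illustrative payload shape (the listing documents the behavior, not the schema):
    ```
    create_tags({                // hypothetical tool name (not given in the listing)
      project_id: "proj_123",    // assumed parameter
      tags: [{ name: "backend" }, { name: "Backend" }, { name: "urgent" }]
    })
    // Expected per-item results: "backend" created, "Backend" skipped
    // (duplicates are matched case-insensitively on name), "urgent" created.
    ```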
  • Describe a single API operation, including its parameters, response shape, and error codes.
    WHEN TO USE:
    - Inspecting an endpoint's full contract before calling it.
    - Discovering which error codes an endpoint can return and how to recover.
    RETURNS:
    - operation: Full discovery record for the endpoint.
    - parameters: Raw OpenAPI parameter definitions.
    - request_body: Body schema (when applicable).
    - responses: Map of status code → description/schema.
    - linked_error_codes: Error catalog entries the endpoint can emit.
    EXAMPLE: Agent: "How do I call the screen audience endpoint?"
    describe_endpoint({ path: "/v1/data/screens/{screenId}/audience", method: "GET" })
    Connector
  • Bulk-create subnames under a parent ENS name in a single transaction. Designed for agent fleet deployment — create identities like agent001.company.eth, agent002.company.eth, etc. Each subname can have its own owner and records (addresses, text records). All N subnames bundle into ONE NameWrapper.multicall transaction (all-or-nothing). All record updates across all subnames bundle into ONE Resolver.multicall transaction. If the parent is unwrapped, the recipe prepends a one-time wrap setup (approve + wrapETH2LD) — after that, every subsequent batch on the same parent is a single signature. Returns a flat steps[] array — each step is one wallet signature, in order. Subnames are free to create; only gas costs apply.
    Connector
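    A sketch of a fleet batch, assuming a hypothetical tool name bulk_create_subnames; the parameter names are illustrative, but the parent/owner/records structure follows the description above:
    ```
    bulk_create_subnames({       // hypothetical tool name (not given in the listing)
      parent: "company.eth",
      subnames: [                // all N bundle into ONE NameWrapper.multicall (all-or-nothing)
        { label: "agent001", owner: "0x1111111111111111111111111111111111111111",
          records: { addresses: { eth: "0x1111111111111111111111111111111111111111" } } },
        { label: "agent002", owner: "0x2222222222222222222222222222222222222222",
          records: { text: { url: "https://example.com" } } }
      ]
    })
    // Returns a flat steps[] array — each step is one wallet signature, in order.
    // If company.eth is unwrapped, the first batch prepends approve + wrapETH2LD steps.
    ```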
  • Query the Trillboards API changelog for recent changes, breaking changes, deprecations, and fixes.
    WHEN TO USE:
    - Check what has changed in the API before upgrading an integration.
    - Find breaking changes since a specific date.
    - Discover new features added to a specific API surface.
    PARAMETERS:
    - since (YYYY-MM-DD, optional): Only entries dated on or after this date. Unreleased entries are always included.
    - type (string, optional): Filter by change category. Accepts: "breaking" → changed + removed entries; "additive" → added entries; "deprecation" → deprecated entries; "fix" → fixed entries. Can be comma-separated: "breaking,deprecation".
    RETURNS:
    - object: "list"
    - data: Array of { version, date, type, surface, description }
    - total: Number of matching entries.
    EXAMPLE: Agent: "What broke since April 1st?"
    query_changelog({ since: "2026-04-01", type: "breaking" })
    Connector
  • Execute arbitrary JS in the project's isolate runtime. The SDK is pre-imported into local scope — `db`, `auth`, `email`, `storage`, `ai`, `agent`, `cache`, `vector`, `memory`, `tasks`, `scheduler`, `browser`, `images`, `run`, `approval`, `mcp` are ready to use without import. `process.env` and global `fetch` also work. Use `return` to produce the `result` field. Top-level `import` and dynamic `import('hatchable')` are NOT supported in this REPL — the bindings above are how you reach the SDK. Use this as a REPL: probe the database, verify a computation, test an API shape before committing it to a file. Nothing is persisted — the snippet runs once and disappears. Caps: 5s default timeout (max 30s), 256 KB max source length.
    Example:
    run_code({ project_id, code: `
      const { rows } = await db.query("SELECT count(*) FROM users");
      return rows[0];
    `})
    Connector
  • Get overall database statistics: total counts of suppliers, fabrics, clusters, and links.
    USE WHEN user asks:
    - "how big is your database" / "what's the coverage" / "data overview"
    - "how many suppliers / fabrics / clusters do you have"
    - "database size / scale / freshness"
    - "is the data up to date" / "live counts for MRC data"
    - first-time onboarding: "what can MRC data do for me"
    - Chinese equivalents: "how big is the database / how much data / how many suppliers are covered", "your data scale / data volume / freshness"
    WORKFLOW: Standalone discovery tool — call this first when a user asks about data scale or freshness. Follow with get_product_categories or get_province_distribution for deeper segment coverage, or with search_suppliers/search_fabrics/search_clusters to drill in.
    DIFFERENCE from the database-overview resource (mrc://overview): this tool is dynamic (live counts + generated_at); the resource is static (geographic scope, top provinces, data standards).
    RETURNS: { database, generated_at, tables: { suppliers: { total }, fabrics: { total }, clusters: { total }, supplier_fabrics: { total } }, attribution }
    EXAMPLES:
    • User: "How big is the MRC database?" → get_stats({})
    • User: "Give me the latest data scale numbers" → get_stats({})
    • User: "How many suppliers and fabrics does the MRC database have?" → get_stats({})
    ERRORS & SELF-CORRECTION:
    • All counts 0 → database query failed or D1 binding lost. Retry once after 5 seconds. If still 0, surface a transport error to the user.
    • Rate limit 429 → wait 60 seconds; do not retry immediately.
    AVOID: Do not call this before every tool — only when the user explicitly asks about scale. Do not call it for per-category counts — use get_product_categories. Do not call it for geographic scope metadata — use the static database-overview resource (mrc://overview).
    NOTE: Only verified + partially_verified records are counted; unverified reserve data is excluded. Dynamic snapshot with a generation timestamp. Source: MRC Data (meacheal.ai).
    Connector

Matching MCP Servers

Matching MCP Connectors

  • Shopify MCP Pack — wraps the Shopify Admin REST API (2024-01)

  • Medicare spending, chronic conditions, hospital quality, readmissions, and enrollment

  • Search FDA import refusals (Compliance Dashboard data, not available in openFDA API). Import refusals indicate products detained at the US border. Filter by company name, FEI number, country code (e.g., CN, IN for major API source countries), or date range. Critical for evaluating international manufacturing sites and supply chain risk. Related: fda_get_facility (facility details by FEI), fda_inspections (inspection history by FEI).
    Connector
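    A sketch of a filtered search, assuming a hypothetical tool name fda_import_refusals and illustrative parameter names derived from the documented filters:
    ```
    fda_import_refusals({        // hypothetical tool name (not given in the listing)
      country_code: "CN",        // major API source country, per the listing
      from_date: "2025-01-01",   // parameter names assumed; the listing documents the
      to_date: "2025-12-31"      // filters (company, FEI, country, dates), not the schema
    })
    ```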
  • Import data into a Cloud SQL instance. If the file doesn't start with `gs://`, the assumption is that the file is stored locally. A local file must be uploaded to Cloud Storage before you can make the actual `import_data` call; to upload it, you can use the `gcloud` or `gsutil` commands. Before uploading, consider whether to use an existing bucket or create a new bucket in the provided project.
    After the file is uploaded to Cloud Storage, the instance service account must have sufficient permissions to read the uploaded file from the Cloud Storage bucket. This can be accomplished as follows:
    1. Use the `get_instance` tool to get the email address of the instance service account. From the output of the tool, get the value of the `serviceAccountEmailAddress` field.
    2. Grant the instance service account the `storage.objectAdmin` role on the provided Cloud Storage bucket, using a command like `gcloud storage buckets add-iam-policy-binding` or a request to the Cloud Storage API. It can take from two to seven minutes or more for the role to be granted and the permissions to propagate to the service account in Cloud Storage. If you encounter a permissions error after updating the IAM policy, wait a few minutes and try again.
    After permissions are granted, you can import the data. We recommend leaving optional parameters empty and using the system defaults. The file type can typically be determined by the file extension: `.sql` for a SQL file, `.csv` for a CSV file.
    The following is a sample SQL `importContext` for MySQL. There is no `database` parameter for MySQL, since the database name is expected to be present in the SQL file. Specify only one URI; no other fields are required outside of `importContext`.
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL"
    }
    ```
    For PostgreSQL, the `database` field is required. The following is a sample PostgreSQL `importContext` with the `database` field specified.
    ```
    {
      "uri": "gs://sample-gcs-bucket/sample-file.sql",
      "kind": "sql#importContext",
      "fileType": "SQL",
      "database": "sample-db"
    }
    ```
    The `import_data` tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes. A sketch of the full call follows below.
    Connector
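    A minimal sketch of the PostgreSQL flow described above; the importContext fields are from the listing, while the top-level project/instance parameter names are assumptions:
    ```
    import_data({
      project: "test-project",     // assumed parameter name
      instance: "test-instance",   // assumed parameter name
      importContext: {
        "uri": "gs://sample-gcs-bucket/sample-file.sql",
        "kind": "sql#importContext",
        "fileType": "SQL",
        "database": "sample-db"    // required for PostgreSQL; omit for MySQL
      }
    })
    // import_data returns a long-running operation; poll it:
    get_operation({ operation: "<operation-name-from-import_data>" })  // parameter name assumed
    ```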
  • Return the full list of currently unreliable Czech VAT payers from ADIS. WARNING: response can be 50–100 MB (tens of thousands of entries). Intended for daily mirroring into a local database, not for ad-hoc inspection. For "is this specific company unreliable?" use check_dph_payer instead.
    Connector
  • update_bonus_entries: Update the bonus entries value for a participant in a sweepstakes. This overwrites the current value. Use get_participant first to check current bonus entries.
    Pre-calls required:
    1. fetch_sweepstakes, if the user gave you a sweepstakes name instead of a token.
    Parameters to validate before calling:
    - sweepstakes_token (string, required) — The sweepstakes token (UUID format).
    - participant_token (string, required) — The participant token (UUID format).
    - bonus_entries (integer, required, range 0–1000000) — New bonus entries value. This overwrites the current value.
    A sample call follows below.
    Connector
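    A sketch of a valid call; the parameter names and range are from the listing, the token values are placeholders:
    ```
    update_bonus_entries({
      sweepstakes_token: "11111111-2222-3333-4444-555555555555",  // placeholder UUID
      participant_token: "66666666-7777-8888-9999-000000000000",  // placeholder UUID
      bonus_entries: 150   // 0–1000000; overwrites the current value, does not add to it
    })
    ```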
  • Full data pull for a UK property in one call. Returns sale history, area comps, EPC rating, rental market listings, current sales market listings, rental yield calculation, and price range from area median. Requires a street address + postcode for subject property identification. Postcode-only (e.g. "NG1 2NS") returns area-level data without a subject property — use property_comps or property_yield for postcode-only queries.
    Connector
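    A sketch of a full-pull call, assuming a hypothetical tool name property_report (the listing doesn't name the tool) and the documented street address + postcode inputs:
    ```
    property_report({               // hypothetical tool name (not given in the listing)
      address: "1 Example Street",  // street address is required to identify the subject property
      postcode: "NG1 2NS"           // postcode-only queries belong to property_comps / property_yield
    })
    ```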
  • Create multiple works at once (up to 50). TRIGGER: User pastes a spreadsheet, list, CSV, or describes multiple works. "I have a bunch of works," "here's my inventory." Extract all data you can — titles, media, dates, dimensions, series. Present a summary and wait for confirmation. If the user has a CSV or spreadsheet file, direct them to raisonn.ai/import instead. artist_id from get_profile — never ask the user. After success, ask if they'd like to see any of the works — then call get_work to show the visual card.
    Connector
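    A sketch of a batch call, assuming a hypothetical tool name create_works and illustrative field names (the listing names the data to extract, not the schema):
    ```
    create_works({               // hypothetical tool name (not given in the listing)
      artist_id: "<from get_profile — never ask the user>",
      works: [                   // field names assumed for illustration
        { title: "Untitled I", medium: "Oil on canvas", year: 2021, dimensions: "60 x 80 cm" },
        { title: "Untitled II", medium: "Acrylic on panel", year: 2022, series: "Night Studies" }
      ]
    })
    // Present a summary and wait for confirmation before calling; after success,
    // offer get_work to show the visual card for any created work.
    ```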
  • Parse unstructured provenance text into structured entries. Read-only — does NOT save.
    TRIGGER: User pastes provenance text, gallery records, auction history, or says "here's the provenance," "this passed through Gagosian."
    Flow: parse_provenance → present results for review → save_provenance. Never skip the review step. After confirmation, call save_provenance to write entries to the catalogue.
    YOU (the connected AI) do the parsing — this tool does not call a server. Parse the text into structured entries and pass them as the `entries` parameter. Each entry needs at minimum: holder_name. Also extract: holder_type (individual/gallery/museum/auction_house/institution/private_collection/foundation/unknown), holder_city, holder_country, event_type (sold/gifted/bequeathed/consigned/loaned/deposited/returned/exchanged/inherited/seized/found/transferred/created/unknown), date_start_year, date_start_display (human-readable, e.g. "ca. 1960", "by 1965"), date_end_year, date_end_display, date_start_approximate/date_end_approximate (boolean), certainty_level (certain/probable/possible), gap_before (boolean if gap in chain), and notes. Return entries in chronological order (earliest first). An example `entries` payload follows below.
    Connector
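    A sketch of an entries payload for hypothetical provenance text ("Gagosian, New York, ca. 1998; sold to a private collection, 2005"); the field names are from the listing, the values are illustrative:
    ```
    parse_provenance({
      entries: [
        { holder_name: "Gagosian", holder_type: "gallery",
          holder_city: "New York", holder_country: "US",
          event_type: "sold", date_start_year: 1998,
          date_start_display: "ca. 1998", date_start_approximate: true,
          certainty_level: "probable", gap_before: false },
        { holder_name: "Private collection", holder_type: "private_collection",
          event_type: "sold", date_start_year: 2005, date_start_display: "2005",
          certainty_level: "certain", gap_before: false }
      ]  // chronological order, earliest first
    })
    ```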
  • Find recent comparable property sales and rental comps near a property. USE WHEN: user asks 'what are comps in this area', 'recent sales near here', 'what did similar houses sell for', 'price per square foot', 'market value estimate', 'rental comps', or needs comparable sales data. RETURNS: subject property AVM, list of recent sales with price, sqft, price/sqft, distance, beds/baths, and rental comps with rent amounts. Also includes local market stats. Useful for investor deal evaluation, CMA, and market analysis.
    Connector
  • Create a database user for a Cloud SQL instance.
    * This tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
    * When you use the `create_user` tool, specify the type of user: `CLOUD_IAM_USER` or `CLOUD_IAM_SERVICE_ACCOUNT`.
    * By default the newly created user is assigned the `cloudsqlsuperuser` role, unless you specify other database roles explicitly in the request.
    * You can use a newly created user with the `execute_sql` tool if the user is a currently logged-in IAM user. The `execute_sql` tool executes SQL statements using the privileges of the database user logged in via IAM database authentication.
    The `create_user` tool has the following limitations:
    * To create a built-in user with a password, use the `password_secret_version` field to provide the password via Google Cloud Secret Manager. The value of `password_secret_version` should be the resource name of the secret version, like `projects/12345/locations/us-central1/secrets/my-password-secret/versions/1` or `projects/12345/locations/us-central1/secrets/my-password-secret/versions/latest`. The caller needs the `secretmanager.secretVersions.access` permission on the secret version. This feature is available only to projects on an allowlist.
    * The `create_user` tool doesn't support creating a user for SQL Server.
    To create an IAM user in PostgreSQL:
    * The database username must be the IAM user's email address, all lowercase. For example, to create a user for the PostgreSQL IAM user `example-user@example.com`, you can use the following request:
    ```
    {
      "name": "example-user@example.com",
      "type": "CLOUD_IAM_USER",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM user is `example-user@example.com`.
    To create an IAM service account in PostgreSQL:
    * The database username must be created without the `.gserviceaccount.com` suffix, even though the full email address for the account is `service-account-name@project-id.iam.gserviceaccount.com`. For example, to create an IAM service account for PostgreSQL, you can use the following request format:
    ```
    {
      "name": "test@test-project.iam",
      "type": "CLOUD_IAM_SERVICE_ACCOUNT",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM service account is `test@test-project.iam`.
    To create an IAM user or IAM service account in MySQL:
    * When Cloud SQL for MySQL stores a username, it truncates the @ and the domain name from the user or service account's email address. For example, `example-user@example.com` becomes `example-user`.
    * For this reason, you can't add two IAM users or service accounts with the same username but different domain names to the same Cloud SQL instance.
    * For example, to create a user for the MySQL IAM user `example-user@example.com`, use the following request:
    ```
    {
      "name": "example-user@example.com",
      "type": "CLOUD_IAM_USER",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM user is `example-user`.
    * For example, to create the MySQL IAM service account `service-account-name@project-id.iam.gserviceaccount.com`, use the following request:
    ```
    {
      "name": "service-account-name@project-id.iam.gserviceaccount.com",
      "type": "CLOUD_IAM_SERVICE_ACCOUNT",
      "instance": "test-instance",
      "project": "test-project"
    }
    ```
    The created database username for the IAM service account is `service-account-name`.
    Connector
  • Compile ExoQuery Kotlin code and EXECUTE it against an SQLite database with a provided schema. ExoQuery is a compile-time SQL query builder that translates Kotlin DSL expressions into SQL.
    WHEN TO USE: When you need to verify ExoQuery produces correct results against actual data.
    INPUT REQUIREMENTS:
    - Complete Kotlin code (same requirements as validateExoquery)
    - SQL schema with CREATE TABLE and INSERT statements for test data
    - Data classes MUST exactly match the schema table structure
    - Column names in data classes must match the schema (use @SerialName for snake_case columns)
    - Must include one or more .runSample() calls in main() to trigger SQL generation and execution (note that .runSample() is NOT for real production use; use .runOn(database) instead)
    OUTPUT FORMAT: Returns one or more JSON objects, each on its own line. Each object can be:
    1. SQL with output (query executed successfully): {"sql": "SELECT u.name FROM \"User\" u", "output": "[(name=Alice), (name=Bob)]"}
    2. Output only (e.g., print statements, intermediate results): {"output": "Before: [(id=1, title=Ion Blend Beans)]"}
    3. Error output (runtime errors, exceptions): {"outputErr": "java.sql.SQLException: Table \"USERS\" not found"}
    Multiple results appear when code has multiple queries or print statements:
    {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans, unit_price=32.00, in_stock=25)]"}
    {"output": "Before:"}
    {"sql": "INSERT INTO \"InventoryItem\" (title, unit_price, in_stock) VALUES (?, ?, ?)", "output": "Rows affected: 1"}
    {"output": "After:"}
    {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans, unit_price=32.00, in_stock=25), (id=2, title=Luna Fuel Flask, unit_price=89.50, in_stock=6)]"}
    Compilation errors return the same format as validateExoquery:
    {
      "errors": {
        "File.kt": [
          {
            "interval": {"start": {"line": 12, "ch": 10}, "end": {"line": 12, "ch": 15}},
            "message": "Type mismatch: inferred type is String but Int was expected",
            "severity": "ERROR",
            "className": "ERROR"
          }
        ]
      }
    }
    Runtime errors can have the following format:
    {
      "errors": { "File.kt": [] },
      "exception": {
        "message": "[SQLITE_ERROR] SQL error or missing database (no such table: User)",
        "fullName": "org.sqlite.SQLiteException",
        "stackTrace": [
          { "className": "org.sqlite.core.DB", "methodName": "newSQLException", "fileName": "DB.java", "lineNumber": 1179 }, ...]
      },
      "text": "<outStream><outputObject>\n{\"sql\": \"SELECT x.id, x.name, x.age FROM User x\"}\n</outputObject>\n</outStream>"
    }
    If a SQL query was generated before the error, it will appear in the "text" field output stream.
    EXAMPLE INPUT CODE:
    ```kotlin
    import io.exoquery.*
    import kotlinx.serialization.Serializable
    import kotlinx.serialization.SerialName

    @Serializable
    data class User(val id: Int, val name: String, val age: Int)

    @Serializable
    data class Order(val id: Int, @SerialName("user_id") val userId: Int, val total: Int)

    val userOrders = sql.select {
        val u = from(Table<User>())
        val o = join(Table<Order>()) { o -> o.userId == u.id }
        Triple(u.name, o.total, u.age)
    }

    fun main() = userOrders.buildPrettyFor.Sqlite().runSample()
    ```
    EXAMPLE INPUT SCHEMA:
    ```sql
    CREATE TABLE "User" (id INT, name VARCHAR(100), age INT);
    CREATE TABLE "Order" (id INT, user_id INT, total INT);
    INSERT INTO "User" (id, name, age) VALUES (1, 'Alice', 30), (2, 'Bob', 25);
    INSERT INTO "Order" (id, user_id, total) VALUES (1, 1, 100), (2, 1, 200), (3, 2, 150);
    ```
    EXAMPLE SUCCESS OUTPUT:
    {"sql": "SELECT u.name AS first, o.total AS second, u.age AS third FROM \"User\" u INNER JOIN \"Order\" o ON o.user_id = u.id", "output": "[(first=Alice, second=100, third=30), (first=Alice, second=200, third=30), (first=Bob, second=150, third=25)]"}
    EXAMPLE WITH MULTIPLE OPERATIONS (insert with before/after check):
    {"output": "Before:"}
    {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans)]"}
    {"sql": "INSERT INTO \"InventoryItem\" (title, unit_price, in_stock) VALUES (?, ?, ?)", "output": ""}
    {"output": "After:"}
    {"sql": "SELECT * FROM \"InventoryItem\"", "output": "[(id=1, title=Ion Blend Beans), (id=2, title=Luna Fuel Flask)]"}
    EXAMPLE RUNTIME ERROR (if a user divided by zero):
    {"outputErr": "Exception in thread \"main\" java.lang.ArithmeticException: / by zero"}
    KEY PATTERNS (see validateExoquery for the complete pattern reference). Summary of the most common patterns:
    - Filter: sql { Table<T>().filter { x -> x.field == value } }
    - Select: sql.select { val x = from(Table<T>()); where { ... }; x }
    - Join: sql.select { val a = from(Table<A>()); val b = join(Table<B>()) { b -> b.aId == a.id }; Pair(a, b) }
    - Left join: joinLeft(Table<T>()) { ... } returns nullable
    - Insert: sql { insert<T> { setParams(obj).excluding(id) } }
    - Update: sql { update<T>().set { it.field to value }.where { it.id == x } }
    - Delete: sql { delete<T>().where { it.id == x } }
    SCHEMA RULES:
    - Table names should match data class names (case-sensitive; use quotes for an exact match)
    - Column names must match @SerialName values or property names
    - Include realistic test data to verify query logic
    - SQLite database syntax (mostly compatible with standard SQL)
    COMMON PATTERNS:
    - JSON columns: use VARCHAR for storage, @SqlJsonValue on the nested data class
    - Auto-increment IDs: use INTEGER PRIMARY KEY
    - Nullable columns: use Type? in Kotlin, allow NULL in the schema
    Connector
  • Retrieve shipment volume by sales channel (e.g. Shopify, WooCommerce, API, manual). Returns `total_shipments_count` and a `channels` array (each with `name`, `shipments_count`, `percentage`). Use for: "Which sales channel has the most shipments?", "Show me the channel breakdown."
    **Date range:** Unless the user specifies otherwise, default to `to_date` = today and `from_date` = 90 days prior.
    Required authorization scope: `public.analytics:read`
    Args:
    - from_date: Start date in YYYY-MM-DD format. Defaults to 90 days before to_date if the user doesn't specify.
    - to_date: End date in YYYY-MM-DD format. Defaults to today if the user doesn't specify.
    Returns: Sales channels with shipment counts and percentage of total volume. An example call follows below.
    Connector
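    A sketch of a call using the documented defaults (to_date = today, from_date = 90 days prior), assuming a hypothetical tool name get_shipment_channels (the listing doesn't name it):
    ```
    get_shipment_channels({      // hypothetical tool name (not given in the listing)
      from_date: "2026-02-04",   // 90 days before to_date, the documented default
      to_date: "2026-05-05"
    })
    ```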
  • Get the Slidev syntax guide: how to write slides in markdown. Returns the official Slidev syntax reference (frontmatter, slide separators, speaker notes, layouts, code blocks) plus built-in layout documentation and an example deck. Call this once to learn how to write Slidev presentations.
    Connector
  • Create a checkout URL for one or more products. Pass variant IDs (items) and/or product URLs (product_urls). When a product URL is provided (e.g. https://laluer.com/products/mira), the tool resolves it to a variant ID automatically — no catalog import needed. Supports discount codes, cart notes, and selling plans. Do not use unless the user wants to buy — use search_products or skincare_recommend first. Returns a direct Shopify checkout link the user can click to buy.
    Connector
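    A sketch of a checkout call, assuming a hypothetical tool name create_checkout; items and product_urls are documented parameters, while the item shape and discount field name are illustrative:
    ```
    create_checkout({                    // hypothetical tool name (not given in the listing)
      items: [{ variant_id: 45678901234, quantity: 1 }],   // item shape assumed
      product_urls: ["https://laluer.com/products/mira"],  // resolved to a variant ID automatically
      discount_code: "WELCOME10"                           // field name assumed; discounts are supported
    })
    // Returns a direct Shopify checkout link; only call when the user wants to buy.
    ```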