Glama
127,309 tools. Last updated 2026-05-05 13:48

"How to create, read from, and write to a database" matching MCP tools:

  • Atomically rotate an API key. Old key is immediately invalidated. Creates a new key with the same name, scopes, and rate limits. The new key is returned once — store it immediately. Requires: API key with write scope. Args: key_id: UUID of the API key to rotate (get from whoami()) Returns: {"api_key": "bh_...", "key_id": "uuid", "prefix": "bh_...", "scopes": ["read", "write"], "message": "Key rotated. Store securely."} Note: The old key stops working immediately. Update BOREALHOST_API_KEY right away.
    Connector
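Since the old key stops working the moment rotation returns, a client should capture the response and swap BOREALHOST_API_KEY in the same step. A minimal sketch of that pattern, assuming a hypothetical `call_tool` helper that performs the MCP call and returns the parsed JSON shown in the description above:

```python
import os

def rotate_and_store(call_tool, key_id: str) -> str:
    """Rotate an API key and immediately persist the replacement.

    `call_tool` is a hypothetical helper that invokes the MCP tool and
    returns the response dict shown in the tool description.
    """
    resp = call_tool("rotate_api_key", {"key_id": key_id})
    # The new key is returned exactly once: store it before anything else.
    os.environ["BOREALHOST_API_KEY"] = resp["api_key"]
    return resp["key_id"]
```

The helper name and transport are placeholders; the only load-bearing idea is that nothing happens between receiving the response and persisting the key.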
  • Read an agent's STRAT config (the parameters its tower floor runs on). WHAT IT DOES: GETs /v1/agents/:agent_wallet/config. Public read — anyone can audit any agent's strategy. The returned `version` is the CAS token you pass to agent_equip_set as `expected_version` on the next write. WHEN TO USE: before agent_equip_set (to compute the next expected_version), or just to inspect what a competitor's floor is configured to do. RETURNS: AgentConfig — { agent_wallet, version, updated_at, updated_by, config: { strategy, max_bid_raw, cooldown_sec, aggression_bps, custom } }. FAILURE MODES: equip_get_failed (404) — agent has never written a config; treat the version baseline as 0 on the first write. RELATED: agent_equip_set (write), agent_operators_list (who can write).
    Connector
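The read-then-write contract above is a compare-and-swap loop: read the config, use its `version` as `expected_version`, and treat the 404 `equip_get_failed` case as version baseline 0. A minimal sketch of that version bookkeeping, where `fetch_config` is a hypothetical stand-in for the agent_equip_get call (returning None on the 404):

```python
def next_expected_version(fetch_config, agent_wallet: str) -> int:
    """Return the CAS token to pass as expected_version on the next write.

    `fetch_config` is a hypothetical wrapper around agent_equip_get; it
    returns the AgentConfig dict, or None when the agent has never
    written a config (the equip_get_failed 404 case).
    """
    config = fetch_config(agent_wallet)
    if config is None:
        return 0  # first write: the version baseline is 0
    return config["version"]
```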
  • DESTRUCTIVE — IRREVERSIBLE. Permanently delete a file from the user's Drive. Removes the file from S3 storage and the database. Storage quota is freed immediately. ALWAYS ask for explicit user confirmation before calling this tool, and explain what will be lost. Parameters to validate before calling: file_token (string, required) — the file token (UUID) of the file to delete; get it via fetch_files.
    Connector
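Because delete_file is irreversible, an agent-side guard that refuses to fire without an explicit confirmation flag is cheap insurance. A minimal sketch, where `delete` is a hypothetical callable wrapping the tool:

```python
def confirmed_delete(delete, file_token: str, user_confirmed: bool) -> bool:
    """Issue the destructive delete only after explicit user confirmation.

    `delete` is a hypothetical callable wrapping the delete_file tool.
    Returns True if the delete was issued, False if it was refused.
    """
    if not user_confirmed:
        # Never guess: surface the consequence and ask the user first.
        return False
    delete(file_token)
    return True
```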
  • Get a human's FULL profile including contact info (email, Telegram, Signal), crypto wallets, fiat payment methods (PayPal, Venmo, etc.), and social links. Requires agent_key from register_agent. Rate limited: PRO = 50/day. Alternative: $0.05 via x402. Use this before create_job_offer to see how to pay the human. The human_id comes from search_humans results.
    Connector
  • Retrieves authoritative documentation directly from the framework's official repository. When to use: called during i18n_checklist Steps 1-13 — the checklist tool coordinates when you need framework documentation, and each step tells you whether to fetch docs and which sections to read. If you're implementing i18n, let the checklist guide you; don't call this independently. Why this matters: your training data is a snapshot, framework APIs evolve, and the fetched documentation reflects the current state of the framework the user is actually running. Following official docs ensures you're working with the framework, not against it. How to use (two-phase workflow): (1) Discovery — call with action="index" to see available sections; (2) Reading — call with action="read" and section_id to get full content. Parameters: framework — use the exact value from get_project_context output; version — use "latest" unless you need version-specific docs; action — "index" or "read"; section_id — required for action="read", format "fileIndex:headingIndex" (from the index). Example flow: get_framework_docs(framework="nextjs-app-router", action="index") to see what's available, then get_framework_docs(framework="nextjs-app-router", action="read", section_id="0:2") to read a specific section. What you get: index — a table of contents with section IDs; read — the full section with explanations and code examples. Use these patterns directly in your implementation.
    Connector
  • 👤 Get full profile for a contact: all channel identities, notes, role, capabilities, birthday. When to use: - After contacts.find to get complete info about a specific person - To see all channels a contact is reachable on - To read notes, role, or capabilities for a contact Requires contact_id (entity_id) from contacts.find.
    Connector

Matching MCP Servers

  • License: A · Quality: A · Maintenance: B
    Converts AI Skills (following Claude Skills format) into MCP server resources, enabling LLM applications to discover, access, and utilize self-contained skill directories through the Model Context Protocol. Provides tools to list available skills, retrieve skill details and content, and read supporting files with security protections.
    Last updated · 3 · 24 · Apache 2.0

Matching MCP Connectors

  • Transform any blog post or article URL into ready-to-post social media content for Twitter/X threads, LinkedIn posts, Instagram captions, Facebook posts, and email newsletters. Pay-per-event: $0.07 for all 5 platforms, $0.03 for single platform.

  • Daily world briefing that tells AI assistants what's actually happening right now. Leaders, conflicts, deaths, economic data, holidays. Updated daily so they stop getting current events wrong.

  • Register a new agent account and get an API key. No authentication needed. The returned API key grants read+write access to all BorealHost API endpoints. Store it securely — it cannot be retrieved again. The key is automatically activated for this session — all subsequent tool calls will use it. No extra configuration needed. If no email is provided, a synthetic agent identity is created (agent-{uuid}@api.borealhost.ai). If an email is provided, it links to an existing or new human account. Args: name: Human-readable name for this API key (default: "Agent Key") email: Optional email to link to a human account Returns: {"api_key": "bh_...", "key_id": "uuid", "prefix": "bh_...", "scopes": ["read", "write"], "account_id": "uuid", "message": "Store this API key securely..."} Errors: RATE_LIMITED: Max 5 registrations per IP per hour VALIDATION_ERROR: Invalid email format
    Connector
  • Claim an API key using a claim token from the container. After calling request_api_key(), read the claim token from ~/.borealhost/.claim_token on your container and pass it here. The token is single-use — once claimed, it cannot be used again. The API key is automatically activated for this MCP session. Args: claim_token: The claim token string read from the container file Returns: {"api_key": "bh_...", "key_prefix": "bh_...", "site_slug": "my-site", "scopes": ["read", "write"], "message": "API key created and activated..."} Errors: VALIDATION_ERROR: Invalid, expired, or already-claimed token
    Connector
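The claim flow reads a single-use token from a well-known file and exchanges it exactly once. A minimal sketch of the read step, using the path named in the description (~/.borealhost/.claim_token); passing the token to the claim tool itself is left to a separate call:

```python
from pathlib import Path

def read_claim_token(path: str = "~/.borealhost/.claim_token") -> str:
    """Read and strip the single-use claim token written by request_api_key()."""
    token = Path(path).expanduser().read_text(encoding="utf-8").strip()
    if not token:
        raise ValueError("claim token file is empty")
    return token
```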
  • ⚡ CALL THIS TOOL FIRST IN EVERY NEW CONVERSATION ⚡ Loads your personality configuration and user preferences for this session. This is how you learn WHO you are and HOW the user wants you to behave. Returns your awakening briefing containing: - Your persona identity (who you are) - Your voice style (how to communicate) - Custom instructions from the user - Quirks and boundaries to follow IMPORTANT: Call this at the START of every conversation before doing anything else. This ensures you have context about the user and their preferences before responding. Example: >>> await awaken() {'success': True, 'briefing': '=== AWAKENING BRIEFING ===...'}
    Connector
  • Delete a cron job by line number. Get line numbers from list_cron(). Requires: API key with write scope. Args: slug: Site identifier line_number: Line number of the cron entry to delete Returns: {"deleted": true}
    Connector
  • Get overall database statistics: total counts of suppliers, fabrics, clusters, and links. USE WHEN user asks: - "how big is your database" / "what's the coverage" / "data overview" - "how many suppliers / fabrics / clusters do you have" - "database size / scale / freshness" - "is the data up to date" - "live counts for MRC data" - "first-time onboarding: 'what can MRC data do for me'" - "how big is the database / how much data do you have / how many suppliers are covered" (Chinese) - "your data scale / data volume / data freshness" (Chinese) WORKFLOW: Standalone discovery tool — call this first when a user asks about data scale or freshness. Follow with get_product_categories or get_province_distribution for deeper segment coverage, or with search_suppliers/search_fabrics/search_clusters to drill in. DIFFERENCE from database-overview resource (mrc://overview): This is dynamic (live counts + generated_at). The resource is static (geographic scope, top provinces, data standards). RETURNS: { database, generated_at, tables: { suppliers: { total }, fabrics: { total }, clusters: { total }, supplier_fabrics: { total } }, attribution } EXAMPLES: • User: "How big is the MRC database?" → get_stats({}) • User: "Give me the latest data scale numbers" → get_stats({}) • User: "How many suppliers and fabrics does the MRC database have?" (Chinese) → get_stats({}) ERRORS & SELF-CORRECTION: • All counts 0 → database query failed or D1 binding lost. Retry once after 5 seconds. If still 0, surface a transport error to the user. • Rate limit 429 → wait 60 seconds; do not retry immediately. AVOID: Do not call this before every tool — only when the user explicitly asks about scale. Do not call it for per-category counts — use get_product_categories. Do not call it for geographic scope metadata — use the static database-overview resource (mrc://overview). NOTE: Only reports verified + partially_verified records. Unverified reserve data is excluded from counts. Source: MRC Data (meacheal.ai). In Chinese: retrieves overall database statistics (total suppliers, total fabrics, total industry clusters, total link records); a dynamic snapshot with a generation timestamp.
    Connector
  • Check the current API key's account info, scopes, and site count. Requires: BOREALHOST_API_KEY env var (read scope). Returns: {"user": {"id": "uuid", "email": "...", "date_joined": "iso8601"}, "api_key": {"id": "uuid", "name": "...", "prefix": "bh_...", "scopes": ["read", "write"], "created_at": "iso8601"}, "account": {"sites": 2, "active_subscriptions": 1}} Errors: UNAUTHORIZED: Missing or invalid API key
    Connector
  • Import data into a Cloud SQL instance. If the file doesn't start with `gs://`, the file is assumed to be stored locally. A local file must be uploaded to Cloud Storage before you can make the actual `import_data` call; use the `gcloud` or `gsutil` commands to upload it. Before you upload the file, consider whether you want to use an existing bucket or create a new bucket in the provided project. After the file is uploaded to Cloud Storage, the instance service account must have sufficient permissions to read the uploaded file from the Cloud Storage bucket. This can be accomplished as follows: 1. Use the `get_instance` tool to get the email address of the instance service account. From the output of the tool, get the value of the `serviceAccountEmailAddress` field. 2. Grant the instance service account the `storage.objectAdmin` role on the provided Cloud Storage bucket, using a command like `gcloud storage buckets add-iam-policy-binding` or a request to the Cloud Storage API. It can take from two to seven minutes or more for the role to be granted and the permissions to propagate to the service account in Cloud Storage. If you encounter a permissions error after updating the IAM policy, wait a few minutes and try again. After permissions are granted, you can import the data. We recommend leaving optional parameters empty and using the system defaults. The file type can typically be determined by the file extension: `.sql` for a SQL file, `.csv` for a CSV file. The following is a sample SQL `importContext` for MySQL. ``` { "uri": "gs://sample-gcs-bucket/sample-file.sql", "kind": "sql#importContext", "fileType": "SQL" } ``` There is no `database` parameter for MySQL since the database name is expected to be present in the SQL file. Specify only one URI. No other fields are required outside of `importContext`.
For PostgreSQL, the `database` field is required. The following is a sample PostgreSQL `importContext` with the `database` field specified. ``` { "uri": "gs://sample-gcs-bucket/sample-file.sql", "kind": "sql#importContext", "fileType": "SQL", "database": "sample-db" } ``` The `import_data` tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
    Connector
  • Search and replace in WordPress database (e.g. URL migration). Handles serialized data safely. Use dry_run=true first to preview changes. Requires: API key with write scope. Args: slug: Site identifier old: String to search for (e.g. "http://old-domain.com") new: Replacement string (e.g. "https://new-domain.com") dry_run: Preview only without making changes (default: true) Returns: {"replacements": 42, "tables_affected": 5, "dry_run": true}
    Connector
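"Handles serialized data safely" matters because WordPress stores PHP-serialized strings such as s:19:"http://old-site.com"; a naive replace leaves the byte-length prefix stale and corrupts the value. A minimal Python illustration of the length-fixing idea (the server-side tool presumably does something equivalent; this regex sketch ignores escaped quotes inside values):

```python
import re

def serialized_safe_replace(payload: str, old: str, new: str) -> str:
    """Replace `old` with `new` inside PHP-serialized strings,
    recomputing the s:<length> prefix so the data stays valid."""
    def fix(match: re.Match) -> str:
        value = match.group(2).replace(old, new)
        # PHP counts bytes, not characters, in the length prefix.
        return f's:{len(value.encode("utf-8"))}:"{value}"'
    return re.sub(r's:(\d+):"(.*?)"', fix, payload)
```

This is why the tool's dry_run=true default is worth using first: a length mismatch anywhere in wp_options can break the whole site.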
  • Create a new API key with specified scopes. Cannot create keys with higher scopes than the current key. Site-scoped keys restrict access to a single site. Requires: API key with write scope. Args: name: Human-readable name for the key (1-100 chars) scopes: Comma-separated scopes. Options: "read", "read,write", "read,write,admin". Default: "read" site_slug: Optional — restrict the key to a single site. Omit for account-wide access. Returns: {"api_key": "bh_...", "key_id": "uuid", "prefix": "bh_...", "name": "My Key", "scopes": ["read", "write"], "message": "Store this API key securely — it will not be shown again."} Errors: VALIDATION_ERROR: Invalid name, scopes, or max 25 active keys FORBIDDEN: Cannot create keys with higher scopes than current key
    Connector
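"Cannot create keys with higher scopes than the current key" is a subset check over the comma-separated scopes string. A minimal sketch of that validation; the function name is hypothetical:

```python
def validate_scopes(current: list[str], requested: str) -> list[str]:
    """Parse comma-separated scopes and refuse any scope the current key lacks."""
    wanted = [s.strip() for s in requested.split(",") if s.strip()]
    escalated = [s for s in wanted if s not in current]
    if escalated:
        raise PermissionError(f"cannot grant scopes beyond current key: {escalated}")
    return wanted
```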
  • Return a table surface's column definitions so an agent knows what keys create_row/update_row will accept. Each column has `key` (the field name in row.data), `label` (human-readable), `type` (text | longtext | url | status | owner | date | number), `position`, and, for status/owner columns, the allowed `options`. Empty array on doc-only workspaces; callers should still be able to write rows (columns auto-seed on first write). Multi-surface workspaces accept `surface_slug` to scope to a specific table sheet (use `list_surfaces` to enumerate); omit to fall through to the workspace's primary table surface.
    Connector
  • Read an agent's operator whitelist (who can write configs on its behalf). WHAT IT DOES: GETs /v1/agents/:agent_wallet/operators. Public read. WHEN TO USE: before agent_equip_set (confirm the signer wallet is on the list), or to audit who else has write access to a competitor's config. RETURNS: { agent_wallet, owner, operators: [{ wallet, role: 'owner'|'operator', added_at, added_by }], count }. RELATED: agent_operators_set (mutate — owner-only), agent_equip_set (operators may write configs but not modify this list).
    Connector
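The "confirm the signer wallet is on the list" step before agent_equip_set is a membership test over the response shape above. A minimal sketch:

```python
def can_write_config(operators_resp: dict, signer_wallet: str) -> bool:
    """Check whether a signer wallet appears on the agent's operator whitelist.

    `operators_resp` is the response shape from agent_operators_list:
    { agent_wallet, owner, operators: [{wallet, role, ...}], count }.
    """
    return any(op["wallet"] == signer_wallet for op in operators_resp["operators"])
```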
  • Create a manual backup (runs asynchronously). The backup starts in the background. Poll list_backups() to check status. Requires: API key with write scope. Args: slug: Site identifier Returns: {"id": "uuid", "status": "pending", "message": "Backup started. Poll list_backups() to check status."}
    Connector
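Backups run asynchronously, so callers poll list_backups() until the status leaves "pending". A minimal generic polling sketch with an injectable sleep, where `get_status` is a hypothetical wrapper that extracts this backup's status from list_backups():

```python
import time

def poll_until_done(get_status, interval_sec: float = 5.0,
                    max_attempts: int = 60, sleep=time.sleep) -> str:
    """Poll `get_status` until it leaves the 'pending' state, then return it."""
    for _ in range(max_attempts):
        status = get_status()
        if status != "pending":
            return status
        sleep(interval_sec)
    raise TimeoutError("backup did not finish within the polling budget")
```

The same loop works for the Cloud SQL `get_operation` polling mentioned elsewhere in this listing, since both expose a status that eventually flips.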
  • Return the primary image URL and current metadata for a work, so you can visually analyze the image yourself and propose structured catalogue fields. Use this when the artist asks you to read a work you uploaded, or when beat 2 of the add-work flow surfaced thin hints. The image URL is publicly accessible (Supabase Storage public bucket); fetch it and inspect the image directly with your vision capabilities. Fields you can honestly improve from a visual read: medium (paint vs. print vs. sculpture material vs. digital), classification (painting / sculpture / drawing / photography / time-based / software / installation / performance), visible signature or inscription (transcribe verbatim, note position), date visible in the work itself (distinct from EXIF), description (brief factual read of subject matter), dimensions if a scale reference is in frame. Fields to leave alone unless visible: dimensions without scale (cannot be honestly estimated from a flat photo), attribution, provenance, exhibition history — those come from records, not the image. Flow: (1) call this tool; (2) fetch + read the image; (3) present your proposals to the artist with per-field reasoning; (4) on confirmation, call update_work with the accepted patches. Do not write without confirmation. Resolve the work by workId (UUID) or uwi (e.g. "RAI-2026-00417"). Use search_natural_language to find workId — never ask the user.
    Connector