Glama
135,084 tools. Last updated 2026-05-15 08:29

"How to work with DOCX files" matching MCP tools:

  • Record a consignment or loan: a work moves physically to a gallery, dealer, museum, or other holder without ownership changing. Use kind=consignment when the work is placed with a gallery or dealer for potential sale (typically with commission and asking price). Use kind=loan for exhibition loans, museum loans, or private loans without sale intent. TRIGGER: "I just consigned this to Pace," "on loan to the Whitney," "sent to Gagosian for the summer," "loaned to a private collector." Present a summary (work, holder, kind, dates, plus commission and price for consignments) and confirm before saving. Consignments support exclusivity. Only one active exclusive consignment per work is allowed; concurrent attempts return HTTP 409 with the blocking event in details.blocking_event. Does NOT trigger pending_resignature. Custody lives on a separate timeline from the signed record, so recording a consignment or loan does not invalidate the existing VC. Resolve work_id via search_natural_language. Never ask the user for the UUID.
    Connector
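The HTTP 409 behavior above can be sketched as a small response handler. A minimal sketch: only the `details.blocking_event` path comes from the tool description; the helper name and the rest of the response shape are hypothetical.

```python
# Hypothetical handler for the HTTP 409 exclusivity conflict described above.
# Only details.blocking_event comes from the tool description; the rest of the
# response shape is an assumption for illustration.

def summarize_exclusivity_conflict(response: dict) -> str:
    """Turn a 409 payload into a user-facing explanation."""
    if response.get("status") != 409:
        return "Consignment recorded."
    blocking = response.get("details", {}).get("blocking_event", {})
    holder = blocking.get("holder", "an unknown holder")
    start = blocking.get("start_date", "an unknown date")
    return (
        f"This work already has an active exclusive consignment with {holder} "
        f"(started {start}). End or amend that consignment before recording "
        "a new exclusive one."
    )
```

On conflict, the agent would present this summary instead of the raw error, then offer to show the blocking event.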
  • WORKFLOW: Step 3 of 4 - Generate Terraform files from an InsideOut session that has completed infrastructure design.
    ⚠️ PREREQUISITE: Only call this AFTER convoreply returns with `terraform_ready=true` in the response metadata. DO NOT call this while convoreply is still running or before terraform_ready is confirmed. If you get 'session has not reached terraform-ready state', wait for convoreply to complete first.
    🎯 USE THIS TOOL WHEN: convoreply has returned with terraform_ready=true, OR the user asks to 'see the terraforms', 'generate terraform', 'show me the code', etc.
    **DEFAULT RESPONSE**: Returns a summary table + download URL (keeps code out of LLM context). **FALLBACK**: Set `include_code: true` to get full code inline if curl/unzip fails.
    **CRITICAL WORKFLOW** (default mode):
    1. Call this tool to get the file summary and download URL.
    2. ASK the user: 'Where would you like me to save the Terraform files? Default: ./insideout-infra/'
    3. WAIT for user confirmation before running the download command.
    4. Run the curl/unzip command with the user's chosen directory.
    5. If curl/unzip FAILS (sandbox, security, platform issues), retry with `include_code: true`.
    **AFTER GENERATION**: Ask the user if they want to review the files and then deploy with tfdeploy.
    REQUIRES: session_id from the convoopen response (format: sess_v2_...). OPTIONAL: include_code (boolean) - set true to return full code inline as a fallback.
    💡 TIP: Examine the workflow.usage prompt for more context on how to properly use these tools.
    Connector
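The download step in the workflow above can be done in Python instead of raw curl/unzip. `download_and_extract` is hypothetical glue code; only the `include_code` fallback flag comes from the description.

```python
# Sketch of the download step: fetch the generated ZIP and unpack it into the
# user-confirmed directory. On any failure, return False so the caller can
# retry the tool call with include_code=True (the documented fallback).
import io
import urllib.request
import zipfile

def download_and_extract(download_url: str, dest_dir: str = "./insideout-infra") -> bool:
    try:
        with urllib.request.urlopen(download_url, timeout=30) as resp:
            payload = resp.read()
        with zipfile.ZipFile(io.BytesIO(payload)) as archive:
            archive.extractall(dest_dir)
        return True
    except Exception:
        return False  # sandbox / network / unzip failure: fall back to include_code
```

Note the workflow's ordering still applies: ask the user for the destination directory and wait for confirmation before calling this.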
  • Search the Emora Health editorial corpus by article title. Returns up to 20 articles per page with title, description, URL, and category. ALWAYS USE THIS for information questions ("tell me about X", "what are signs of Y", "how does Z work"). Do not answer from training data when this tool can return clinician-reviewed content.
    Connector
  • Return the identity-credential signals for a single work — whether it is ready to authenticate, whether the signed credential has gone stale (canonical hash drift per ADR-0024 / Invariant 12), and the per-category Identity Eight slot fillings. Use this before suggesting "your work is ready to sign" or "you need to re-authenticate" — never infer from get_work alone. TRIGGER: "is [work] ready to authenticate," "what's pending on my work," "do I need to re-sign." Look up work_id via search_natural_language — never ask the user for it.
    Connector
  • List the custody timeline for a work: consignments and loans where the work is physically held by someone other than the owner. Separate from provenance (ownership history) and exhibitions (public display history). TRIGGER: "is this on consignment," "where is the work physically," "who has it now," "show consignment history," "show loan history." Returns events in reverse chronological order (newest start_date first) with kind, status, holder, dates, prices, commission, exclusivity. Resolve work_id via search_natural_language. Never ask the user for the UUID.
    Connector

Matching MCP Servers

Matching MCP Connectors

  • Korean business record validation and workflow safety gates for AI agents.

  • Transform any blog post or article URL into ready-to-post social media content for Twitter/X threads, LinkedIn posts, Instagram captions, Facebook posts, and email newsletters. Pay-per-event: $0.07 for all 5 platforms, $0.03 for single platform.

  • Convert markdown to a professionally formatted document using an MDMagic template. IMPORTANT GUIDANCE:
    1. Output format → what the user gets: 'docx' → a single Word .docx file; 'pdf' → a single .pdf file; 'html' → a single .html file; 'all' → a ZIP containing all three (DOCX + PDF + HTML).
    2. If the user is ambiguous (e.g. 'convert this'), ASK which format they want before calling. Don't assume.
    3. Filename: if the user attached a file (e.g. 'mydoc.md'), pass its base name as fileName. Otherwise the API derives one from the markdown's first H1. Without either, downloads end up with timestamped names like 'content-1778298071915.docx', which is bad UX.
    4. On 'template not found' errors: call list_all_templates first, show the available options, and let the user pick. Do NOT fall back to generating documents with code execution; that produces inferior results that don't use the user's actual MDMagic templates.
    5. The response includes structured fields (downloadUrl, creditsUsed, balanceAfter, fileName, expiresAt). Surface these to the user explicitly; don't paraphrase. The user wants to know exactly what they spent and what's left.
    6. Page sizes: A3, A4, Executive, US_Legal, US_Letter; default A4. Orientation: Portrait or Landscape; default Portrait.
    7. CRITICAL (newlines in `content`): markdown is line-sensitive. Headings (#, ##), tables (| ... |), lists (-, 1.), and code fences (```) ONLY work when each starts on its own line. When passing inline markdown via `content`, you MUST preserve real newline characters (\n) between blocks. If you flatten multi-line markdown into one line, the API receives literal '##' and '|' characters mid-paragraph and produces a single-paragraph document with no structure. Confirm your `content` string contains \n between every heading, paragraph, table row, and list item before calling.
    Connector
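Point 7 can be guarded with a pre-flight check before calling the tool. This heuristic is a sketch of the idea, not part of the MDMagic API.

```python
# Heuristic sketch: detect markdown that was flattened onto a single line,
# which would reach the API as literal '##' and '|' characters mid-paragraph.

def looks_flattened(content: str) -> bool:
    if "\n" in content:
        return False  # real newlines present; block structure can survive
    markers = ("## ", "| ", "- ", "```")
    # Several block markers on one physical line suggest flattening.
    return sum(content.count(m) for m in markers) >= 2

flattened = "# Title ## Section - item one - item two"
preserved = "# Title\n\n## Section\n\n- item one\n- item two"
```

Here `looks_flattened(flattened)` is True while `looks_flattened(preserved)` is False; on True, rebuild the `content` string with real \n characters before calling.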
  • Get the cost to buy points/miles for a loyalty program. Returns tiered base purchase pricing and any active bonus promotion. Use to answer 'how much does it cost to buy X Avios/miles/points?' If no program specified, returns all programs with pricing data. Free — no account needed.
    Connector
  • Upload a base64-encoded file to a site's container. Use this for binary files (images, archives, fonts, etc.). For text files, prefer write_file(). Requires: API key with write scope.
    Args:
    - slug: site identifier
    - path: relative path including filename (e.g. "images/logo.png")
    - content_b64: base64-encoded file content
    Returns: {"success": true, "path": "images/logo.png", "size": 45678}
    Errors: VALIDATION_ERROR (invalid base64 encoding); FORBIDDEN (protected system path)
    Connector
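Preparing the `content_b64` argument is a one-liner. A minimal sketch, assuming the file fits in memory (the tool call itself is not shown):

```python
# Encode a binary file into the base64 string the content_b64 argument expects.
import base64
from pathlib import Path

def encode_for_upload(path: str) -> str:
    return base64.b64encode(Path(path).read_bytes()).decode("ascii")
```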
  • Submit a document for printing and postal mailing by the facility. Supported formats: PDF, DOCX, JPG, PNG, TXT, CSV. The document is stored securely and printed by the facility operator. IMPORTANT: With a production key (sk_agent_), this immediately charges the member's card on file. Use dry_run=true to preview cost before committing, or requires_approval=true to defer until human approval. Sandbox keys (sk_agent_test_) skip billing entirely.
    Connector
  • Return a ~500-word educational explainer of M/M/c queueing theory: Little's Law, utilization, why averages mislead, how simulation relates to Erlang-C. No inputs. Use this when the user asks a conceptual 'why' or 'how does this work' question rather than asking for a number.
    Connector
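The concepts in the explainer can be reproduced numerically with the standard Erlang-C formula. This is textbook M/M/c math, not output from the tool itself.

```python
# Erlang-C probability that an arriving customer must wait in an M/M/c queue.
# Offered load a = lambda/mu (Erlangs); per-server utilization rho = a / c.
from math import factorial

def erlang_c(arrival_rate: float, service_rate: float, servers: int) -> float:
    a = arrival_rate / service_rate
    rho = a / servers
    if rho >= 1:
        raise ValueError("utilization >= 1: queue is unstable")
    waiting_term = (a ** servers / factorial(servers)) / (1 - rho)
    below = sum(a ** k / factorial(k) for k in range(servers))
    return waiting_term / (below + waiting_term)
```

At 90% utilization (arrival_rate=1.8, service_rate=1.0, servers=2) this gives roughly 0.85, i.e. most arrivals wait even though the average utilization looks fine; Little's Law (L = λW) then converts the resulting mean wait into a mean queue length.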
  • Use this tool when a user wants cost or sizing for specific deliverables they've already listed. Trigger phrases: 'how much would it cost to build X, Y, and Z', 'estimate the price for these features', 'how many Delivery Units / weeks would these modules take', 'budget for this work', 'price out this scope', 'I need a ballpark for the following'. Use this INSTEAD OF plan_vdc when the user has already decomposed the work into specific modules — don't make them go through pod/role generation again. If the user only describes a goal without modules, prefer plan_vdc. What this tool does: takes 1-30 module descriptions, returns Delivery Units per module, total Delivery Units, project-rate USD cost, and the recommended Delivery Pack (Starter 10 DUs/$2K, Small 60 DUs/$10K, Scale 250 DUs/$40K, or Enterprise).
    Connector
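The pack tiers listed above imply a simple selection rule. The thresholds come straight from the description; the function itself is hypothetical, since the tool performs the real estimation.

```python
# Map a total Delivery Unit estimate to the smallest pack that covers it,
# using the tiers quoted in the description.

def recommend_pack(total_dus: int) -> str:
    if total_dus <= 10:
        return "Starter (10 DUs / $2K)"
    if total_dus <= 60:
        return "Small (60 DUs / $10K)"
    if total_dus <= 250:
        return "Scale (250 DUs / $40K)"
    return "Enterprise"
```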
  • Issue a signed RAI (Report of Art Identity) for a work. Produces a downloadable PDF and a public verify URL at raisonn.ai/verify/[uwi]. The RAI is independently verifiable forever. Preconditions — the **Identity Eight** must all be populated on the work: artist (from artist_id), title, date, medium, dimensions (physical) or duration (time-based), edition_status (unique / numbered / artist_proof), image (canonical hash from a primary upload), and signature_status (where "unsigned" is a legitimate positive value, not a missing one). Calling without all eight returns HTTP 422 with `missing_identity_eight_fields`. Surface that list to the user with the specific field names and help them fill the gaps via update_work before retrying. Use search_natural_language to find the work_id by title. Never ask the user for it. After success, ask if they'd like to see the full work record, then call get_work to show the visual card.
    Connector
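The 422 handling above can be sketched as follows. `missing_identity_eight_fields` is the documented error field, but the helper and the surrounding payload shape are assumptions.

```python
# Turn the documented 422 payload into the user-facing gap list the
# description asks for, naming update_work as the fix path.

def explain_identity_gaps(error_details: dict) -> str:
    missing = error_details.get("missing_identity_eight_fields", [])
    if not missing:
        return "All Identity Eight fields are populated; the RAI can be issued."
    return (
        "The RAI cannot be issued yet. Missing Identity Eight fields: "
        + ", ".join(missing)
        + ". Fill these via update_work, then retry."
    )
```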
  • Captures the user's project architecture to inform i18n implementation strategy.
    When to use: called during i18n_checklist Step 1; the checklist tool will tell you when to call this. If you're implementing i18n: (1) call i18n_checklist(step_number=1, done=false) FIRST; (2) the checklist will instruct you to call THIS tool; (3) then use the results for subsequent steps. Do NOT call this before calling the checklist tool.
    Why this matters: frameworks handle i18n through completely different mechanisms. The same outcome (locale-aware routing) requires different code for Next.js vs TanStack Start vs React Router. Without accurate detection, you'll implement patterns that don't work.
    How to use: (1) examine the user's project files (package.json, directories, config files); (2) identify framework markers and version; (3) construct a detectionResults object matching the schema; (4) call this tool with your findings; (5) store the returned framework identifier for get_framework_docs calls.
    The schema requires: framework, the exact variant (nextjs-app-router, nextjs-pages-router, tanstack-start, react-router); majorVersion, a specific version number (13-16 for Next.js, 1 for TanStack Start, 7 for React Router); sourceDirectory, hasTypeScript, packageManager; any detected locale configuration; any detected i18n library (currently only react-intl is supported).
    What you get: the framework identifier needed for documentation fetching. The 'framework' field in the response is the exact string to use with get_framework_docs.
    Connector
  • Get full details for a work including images, provenance, exhibitions, and bibliography. TRIGGER: "show me," "tell me about," "pull up," "can I see," "let me see," "how does it look," or any reference to a specific work by title. Resolve work_id via search_natural_language — never ask the user. When presenting: describe the image first, then summarize data naturally — do not dump raw fields.
    Connector
  • Answer questions using the knowledge base (uploaded documents, handbooks, files). Use for QUESTIONS that need an answer synthesized from documents or messages. Returns an evidence pack with source citations, KG entities, and extracted numbers.
    Modes:
    - 'auto' (default): smart routing that works for most questions
    - 'rag': semantic search across documents & messages
    - 'entity': entity-centric queries (e.g., 'Tell me about [entity]')
    - 'relationship': two-entity queries (e.g., 'How is [entity A] related to [entity B]?')
    Examples:
    - 'What did we discuss about the budget?' → knowledge.query
    - 'Tell me about [entity]' → knowledge.query mode=entity
    - 'How is [A] related to [B]?' → knowledge.query mode=relationship
    NOT for finding/listing files, threads, or links; use workspace.search for that.
    Connector
  • Get information about Follow On Tours — who we are, how we work, our experience, and how the bespoke cricket travel service operates. Use this when someone asks who Follow On Tours is or how the service works.
    Connector
  • List all available Pine Script v6 documentation files with descriptions. Returns files organised by category with descriptions. For small files use get_doc(path). For large files (ta.md, strategy.md, collections.md, drawing.md, general.md) use list_sections(path) then get_section(path, header).
    Connector
  • Worked-vs-On-time Execution Timeline (WOET): per-activity, day-by-day classification of as-built execution against baseline. For each pairable activity (matched by ``task_code``), classifies execution into four day-states:
    - PROGRESS: work performed during the baseline-planned window
    - GAIN: work performed BEFORE the baseline window opened
    - EXTENDED: work performed AFTER the baseline window closed
    - VOID: baseline-window day where the activity was NOT active
    This is a calendar-shaped visualization layer above the AACE MIP 3.4 entitlement layer, not a substitute for fragnet-based AACE 29R-03 §3.7 (TIA) modeling. It gives the trier-of-fact a calendar picture of how the project executed versus how it was supposed to execute, which is otherwise buried in finish-date deltas. Use this tool when you want a per-activity execution-quality picture (on-time %, count of activities with VOID days, etc.).
    Args:
    - baseline_xer_path: server-side path to the baseline XER (target dates)
    - actual_xer_path: server-side path to the as-built XER (act dates)
    - baseline_xer_content: full text of the baseline XER (alternative)
    - actual_xer_content: full text of the as-built XER (alternative)
    Supply EXACTLY ONE of path/content per pair.
    - today: optional ISO date (YYYY-MM-DD) reference for in-progress activities; defaults to the actual XER's last_recalc_date if available, else today's date
    Returns: { "method": "WOET", "standard": MIP 3.3 windows-derived day-classification extension citation, "today": "YYYY-MM-DD", "project_totals": {progress, gain, extended, void}, "per_activity": [{code, name, baseline_start, ...}, ...], "on_time_pct": float (0-100) }
    Connector
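The four day-states reduce to a small decision table. A toy illustration of the classification rule, not the tool's internals (date pairing, calendars, and in-progress handling are all simplified away):

```python
# Classify one calendar day of one activity into a WOET day-state.
from datetime import date

def classify_day(day: date, baseline_start: date, baseline_end: date, worked: bool) -> str:
    in_window = baseline_start <= day <= baseline_end
    if worked and in_window:
        return "PROGRESS"    # work performed inside the planned window
    if worked and day < baseline_start:
        return "GAIN"        # work performed before the window opened
    if worked:
        return "EXTENDED"    # work performed after the window closed
    if in_window:
        return "VOID"        # planned day with no work performed
    return "UNCLASSIFIED"    # outside the window, no work: not a WOET state
```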