Glama
127,390 tools. Last updated 2026-05-05 15:44

"Chinese greeting or conversation starter" matching MCP tools:

  • Update an existing site. All fields are optional — only provided fields are changed. Links replace the entire array (omit to keep existing). Before updating, always call unulu_get_state first to read the current links and their ids — do not guess link ids.
    Authorization depends on site lifecycle: X-Claim-Token header for ephemeral (pre-claim) sites, X-Edit-Token header for claimed (post-claim) sites. If neither token is available and the site is claimed, use requestEditAccess to obtain an edit_token. If you created the site in this conversation, you already have the claim_token — use it directly.
    Returns the full updated site state. Keep iteration fast — apply changes immediately without re-confirming minor edits unless ambiguous.
    Accepts a site ID, a full URL (e.g. https://abc123.unu.lu), or a bare hostname (e.g. abc123.unu.lu).
    Connector
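
A minimal sketch of what an update payload could look like, inferred from the description above. The parameter names (`site`, `title`, `links`) and the link id shown are assumptions for illustration; only the token headers, the links-replace semantics, and the accepted site identifiers are documented:

```json
{
  "site": "abc123.unu.lu",
  "title": "My landing page",
  "links": [
    { "id": "lnk_1", "label": "Docs", "url": "https://example.com/docs" }
  ]
}
```

Because `links` replaces the entire array, include every link you want to keep, with ids read from a prior unulu_get_state call, and authenticate with X-Claim-Token (pre-claim) or X-Edit-Token (post-claim).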
  • Explicitly close a sncro session — "Finished With Engines". Call this when you are done debugging and will not need the sncro tools again in this conversation. After this returns, all sncro tool calls on this key will refuse with a SESSION_CLOSED message — that is your signal to stop trying to use them and not apologise about it.
    Use it when:
    - The original problem is solved and the conversation has moved on
    - The user explicitly says "we're done with sncro for now"
    - You're entering a long stretch of work that won't need browser visibility
    The session can't be reopened. If you need browser visibility later, ask the user whether to start a new one with create_session.
    Connector
  • Creates an automation on a perspective. Triggers: per_interview (fires on every completed conversation) or scheduled (daily/weekly digest). Channels: webhook, email, slack, hubspot. Execution modes: direct (fast, deterministic) or agent (LLM-powered).
    Behavior:
    - Each call creates a new automation — even if name/config matches an existing one.
    - Once enabled, the automation starts firing on real events: per_interview sends on every completed conversation going forward; scheduled sends a real message on the configured cadence (daily/weekly).
    - Webhook URLs are validated. For HubSpot, the workspace's HubSpot connection is required — errors with "Could not resolve HubSpot portal ID — please reconnect HubSpot" if not connected.
    - Errors when the perspective is not found or you do not have access.
    When to use this tool:
    - The user wants ongoing notifications on every completed conversation (per_interview).
    - Building a daily/weekly digest delivered to Slack, email, HubSpot, or a webhook (scheduled).
    When NOT to use this tool:
    - Trying a one-off send before going live — create the automation, then use automation_test (use override_email / override_webhook to avoid hitting real recipients).
    - Editing or toggling an existing automation — use automation_update.
    - Connecting Slack or HubSpot — use integration_manage first; the provider must be connected before slack/hubspot channels work.
    Example — per-conversation Slack notify:
    ```
    {
      "perspective_id": "...",
      "automation": {
        "name": "Notify Slack",
        "trigger": { "type": "per_interview" },
        "execution_mode": "agent",
        "channel": {
          "type": "composio",
          "delivery_config": {
            "provider": "slackbot",
            "tool_slug": "SLACKBOT_SEND_MESSAGE",
            "params": { "channel": "#research" },
            "resource_id": "...",
            "resource_name": "..."
          }
        }
      }
    }
    ```
    Typical flow:
    1. integration_manage (operation: "list"/"connect") → ensure Slack / HubSpot is connected (only needed for those channels)
    2. automation_create → create the automation
    3. automation_test (with overrides) → verify delivery before relying on it
    Connector
  • Runs a single end-to-end execution of an existing automation against a mock conversation, returning success/failure plus the channel target and duration. Mirrors a real production firing.
    Behavior:
    - Sends REAL messages by default: posts the configured webhook, sends the configured email, posts the Slack message, or writes the HubSpot record. Use override_email (email channels) or override_webhook (webhook channels) to redirect delivery to a safe test target.
    - Each call fires another real delivery.
    - Errors when the perspective or automation is not found, or you do not have access. Webhook URLs (configured or override) are validated.
    - Mock conversation defaults: trust score 85, status complete, "Test Participant" / test@example.com. Override participant_name, summary, and tags via test_data.
    - Returns success: true also when the automation's condition skips delivery (e.g., tag/trust filter doesn't match the mock). The error field is populated only on real delivery failures.
    When to use this tool:
    - Verifying a freshly-created automation actually delivers before relying on it. PREFER override_email/override_webhook to avoid spamming real recipients.
    - Reproducing a delivery failure surfaced in automation_list (last_error).
    When NOT to use this tool:
    - Listing what's configured — use automation_list.
    - Changing config — use automation_update.
    - Removing the automation — use automation_delete.
    Connector
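
A hedged example of a safe test call, using the documented `override_webhook` and `test_data` fields; the id parameter names are assumptions, and the webhook URL is a placeholder:

```json
{
  "perspective_id": "...",
  "automation_id": "...",
  "override_webhook": "https://example.com/test-hook",
  "test_data": {
    "participant_name": "Dry Run",
    "summary": "Synthetic conversation for delivery verification",
    "tags": ["test"]
  }
}
```

Without the override, this fires a real delivery to the configured target.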
  • WORKFLOW: Step 2 of 4 - Continue infrastructure design conversation. Send a user message to the active InsideOut session and receive the assistant reply. The response contains a clean message from Riley - display it to the user.
    ⚠️ CRITICAL: DO NOT answer Riley's questions yourself! Forward questions to the user and wait for their response. NEVER fabricate or assume the user's answer, even if you think you know what they would say. Examples of questions Riley asks that YOU MUST forward to the user:
    - 'Any questions or tweaks to these details?'
    - 'Ready for the cost estimate?'
    - 'Do you want to change the stack/config?'
    - 'Ready to proceed to Terraform?'
    When Riley asks ANY question, STOP and wait for the user's answer!
    📋 WORKFLOW PHASES: The typical flow is conversation → tfgenerate → tfdeploy. When terraform_ready=true appears in THIS tool's response, THEN you can call tfgenerate. ⚠️ DO NOT call tfgenerate until this tool returns! Wait for the response first.
    🎯 KEY SIGNALS IN RESPONSE:
    - `[TERRAFORM_READY: true]` → NOW you can call tfgenerate
    - `[[BUTTON_TF_APPLY: ...]]` → Deployment is ready! Ask user if they want to deploy, then use tfdeploy
    - `[[BUTTON_TF_DESTROY: ...]]` → User confirmed destroy intent! Ask user to confirm, then use tfdestroy
    - `[[BUTTON_TF_PLAN: ...]]` → User wants to preview changes! Use tfplan to run a plan, then tfdeploy with plan_id to apply
    REQUIRES: session_id from convoopen response (format: sess_v2_...).
    OPTIONAL: timeout (integer) - seconds to wait for response. For Cursor, use 50 (default). Max 55.
    OPTIONAL: project_context (string) - Only pass genuinely NEW project details the user shares after convoopen. Do NOT resend context already provided in convoopen — Riley remembers it. Do NOT scan files or directories to gather this — only use what the user explicitly tells you. Example: user reveals a new constraint like 'we also need HIPAA compliance' mid-conversation.
    OPTIONAL: auto_accept (boolean) - When true, Riley skips setup confirmation gates and proceeds with defaults. Only affects the setup phase (not management, deploy, or destroy). Default: true for MCP.
    💡 TIP: Use convostatus to check progress anytime. Examine workflow.usage prompt for more guidance.
    Connector
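
A sketch of a typical call, under the assumption that the user's text is passed in a `message` field (not confirmed by this listing); `session_id`, `timeout`, and `auto_accept` are documented parameters:

```json
{
  "session_id": "sess_v2_...",
  "message": "We also need HIPAA compliance for the patient records store.",
  "timeout": 50,
  "auto_accept": true
}
```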
  • Returns a token-efficient batch of conversations for bulk analysis. Default output is summaries only (id, summary, trust_score, status, created_at) plus the perspective outline; opt in to full XML transcripts via include_transcripts=true. Default format is TOON (compact); JSON available.
    Behavior:
    - Read-only.
    - Errors when the perspective is not found or you do not have access.
    - Filters: period (7d/30d/90d/all, default 30d), status, trust_score range. Page size up to 50, default 10. Pass nextCursor back as cursor for the next batch.
    - Response includes total_matching, returned_count, has_more, nextCursor for sizing.
    - Citation format when transcripts are included: "conversation_id:message_index".
    When to use this tool:
    - Thematic analysis, sentiment distribution, or pattern detection across many conversations.
    - Building a research summary from many summaries cheaply, then drilling into specific transcripts.
    - Bulk export with filters.
    When NOT to use this tool:
    - Need one conversation in full detail (voice snippets, trust dimensions) — use perspective_get_conversation.
    - Just need a browsable list with metadata — use perspective_list_conversations.
    - Aggregate counts only — use perspective_get_stats (call first to size the dataset before batching).
    Connector
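
A hedged example of a filtered batch request using the documented filters; the exact parameter names for the page size and trust-score bounds are assumptions:

```json
{
  "perspective_id": "...",
  "period": "30d",
  "status": "complete",
  "include_transcripts": false,
  "limit": 50
}
```

Pass the returned `nextCursor` back as `cursor` to fetch the next batch.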

Matching MCP Servers

Matching MCP Connectors

  • Chinese law via Ansvar Gateway. Cited, OAuth + paid tier. Free npm: chinese-law-mcp.

  • Search Chinese TV drama scenes with second-level timestamps by character, emotion, or scene type.

  • Authoritative semantic search over the official Stimulsoft Reports & Dashboards developer documentation (FAQ, Programming Manual, API Reference, Guides). Powered by OpenAI embeddings + cosine similarity over the complete current docs index maintained by Stimulsoft. Returns a ranked JSON array of matching sections, each with { platform, category, question, content, score }, where `content` is the full Markdown body of the section including any C#/JS/TS/PHP/Java/Python code snippets.
    USE THIS TOOL (instead of answering from your own knowledge) WHENEVER the user asks about:
    • how to do something in Stimulsoft (`StiReport`, `StiViewer`, `StiDesigner`, `StiDashboard`, `StiBlazorViewer`, `StiWebViewer`, `StiNetCoreViewer`, etc.);
    • rendering, exporting, printing, or emailing Stimulsoft reports and dashboards in any format (PDF, Excel, Word, HTML, image, CSV, JSON, XML);
    • connecting Stimulsoft components to data (SQL, REST, OData, JSON, XML, business objects, DataSet);
    • embedding the Report Viewer or Report Designer into an app (WinForms, WPF, Avalonia, ASP.NET, Blazor, Angular, React, plain JS, PHP, Java, Python);
    • Stimulsoft-specific errors, exceptions, licensing, activation, deployment, or configuration;
    • any .mrt / .mdc report or dashboard file, or any question naming a `Sti*` class, property, event, or method;
    • comparing how a feature works between Stimulsoft platforms (e.g. "WinForms vs Blazor viewer options").
    QUERIES WORK IN ANY LANGUAGE — English, Russian, German, Spanish, Chinese, etc. Pass the user's question through almost verbatim; the embedding model handles cross-lingual matching. Do NOT translate queries yourself.
    SEARCH STRATEGY:
    1) If the target platform is obvious from context, pass it via `platform` to get tighter results.
    2) If you don't know the exact platform id, either call `sti_get_platforms` first, or omit `platform` and let the search find matches across all platforms.
    3) If the first search returns low scores (<0.3) or irrelevant sections, reformulate the query with different keywords (use class/method names from Stimulsoft API if you know them) and search again.
    4) Prefer multiple focused searches over one broad search.
    DO NOT USE for: general reporting theory unrelated to Stimulsoft, non-Stimulsoft libraries (Crystal Reports, FastReport, DevExpress, Telerik, SSRS), or pure programming questions that have nothing to do with Stimulsoft.
    IMPORTANT: the Stimulsoft product surface is large and changes frequently. Your training data is almost certainly out of date. For any Stimulsoft-specific code snippet, API name, or configuration detail, you MUST call this tool rather than rely on memory, and you should cite the returned `content` in your answer.
    Connector
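
A minimal search call, assuming the question is passed in a `query` field; the platform id shown is a guess, so call `sti_get_platforms` first if you need the exact ids:

```json
{
  "query": "How do I export an StiReport to PDF from a Blazor app?",
  "platform": "blazor"
}
```

If the top scores come back below 0.3, reformulate with Stimulsoft class or method names and search again.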
  • INSPECTION: View a session's conversation transcript and metadata. Returns the full message history (user / assistant / tool turns) plus the session's meta — workflow step, cloud, deployment status, drift state. This is the transcript-reader companion to the other read tools — combine it with:
    • `convostatus` for the live stack / config / pricing
    • `tfruns` for deployment history (apply / destroy / plan / drift)
    • `stackversions` for the stack-version ladder
    Use it when a user asks 'what did I say earlier?' or you need to retrace why the session ended up where it did. Read-only; never mutates session state. REQUIRES: session_id (format: sess_v2_...).
    Connector
  • ⚡ CALL THIS TOOL FIRST IN EVERY NEW CONVERSATION ⚡ Loads your personality configuration and user preferences for this session. This is how you learn WHO you are and HOW the user wants you to behave. Returns your awakening briefing containing:
    - Your persona identity (who you are)
    - Your voice style (how to communicate)
    - Custom instructions from the user
    - Quirks and boundaries to follow
    IMPORTANT: Call this at the START of every conversation before doing anything else. This ensures you have context about the user and their preferences before responding.
    Example:
    >>> await awaken()
    {'success': True, 'briefing': '=== AWAKENING BRIEFING ===...'}
    Connector
  • Lists perspectives — either browsing one workspace or searching by title across every workspace the user can access. Items include perspective_id, title, status, conversation count, and workspace info.
    Behavior:
    - Read-only.
    - Browse mode (workspace_id, no query): lists every perspective in that workspace.
    - Search mode (query): matches against the perspective title across accessible workspaces. Optional workspace_id narrows the search. Query must be non-empty and ≤200 chars.
    - Errors with "Please provide workspace_id to list perspectives or query to search." if neither is given.
    - Pass nextCursor back as cursor; has_more indicates further results.
    When to use this tool:
    - Resolving a perspective_id from a name the user mentioned (search mode).
    - Browsing a workspace's perspectives to pick or summarize.
    When NOT to use this tool:
    - Inspecting one known perspective in detail — use perspective_get.
    - Aggregate counts or rates — use perspective_get_stats.
    - Fetching conversation data — use perspective_list_conversations or perspective_get_conversations.
    Examples:
    - List all in a workspace: `{ workspace_id: "ws_..." }`
    - Search by name across all workspaces: `{ query: "welcome" }`
    - Search within a workspace: `{ query: "welcome", workspace_id: "ws_..." }`
    Connector
  • Derives a Lo Shu three-by-three frequency grid from birth-date digits and annotates planes, missing or repeated digits, and per-digit traits.
    WHAT THIS TOOL COVERS: Chinese Lo Shu analysis: counts how often each digit one through nine appears in the date string, lays counts into the classical magic-square positions, and adds plane_analysis plus number_analysis entries keyed by digit strings '1'..'9'. Zero digits are ignored for placement. It does not compute Pythagorean Life Path (asterwise_get_numerology_profile) or Chaldean compounds.
    WORKFLOW: BEFORE: None — standalone. AFTER: None.
    INPUT CONTRACT: date string only; validated upstream.
    OUTPUT CONTRACT:
    - data.birth_date (string)
    - data.grid — three-by-three nested int array (row-major): row positions map to numbers [4,9,2], [3,5,7], [8,1,6] respectively; cell value = count of that digit in the date (0 if absent)
    - data.present_numbers[] (int array)
    - data.missing_numbers[] (int array)
    - data.repeated_numbers[] (int array — digits appearing at least twice)
    - data.plane_analysis: thought_plane — { numbers[] (int array), description (string), complete (bool) }; will_plane, action_plane, golden_yod, silver_yod — same shape
    - data.number_analysis{} — keys '1' through '9' (string keys): count (int), plane (string), trait (string), status (string — 'missing', 'present', or 'strong'), note (string)
    RESPONSE FORMAT: response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.
    COMPUTE CLASS: MEDIUM_COMPUTE
    ERROR CONTRACT:
    - INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.
    - INVALID_PARAMS (upstream): None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.
    - INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR
    Edge cases: Zeros in ISO dates are skipped — only digits one through nine populate the grid.
    DO NOT CONFUSE WITH:
    - asterwise_get_numerology_profile — letter-based Western numbers, not digit-frequency Lo Shu.
    - asterwise_get_name_correction — spelling harmonics, not birth-date grids.
    Connector
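
The digit-frequency grid described above can be sketched in a few lines of Python. This is an illustrative reimplementation, not the tool's actual code; the field names simply mirror the documented output contract:

```python
from collections import Counter

# Classical Lo Shu magic-square layout (row-major), as documented:
# rows map to digits [4,9,2], [3,5,7], [8,1,6].
LO_SHU_LAYOUT = [[4, 9, 2], [3, 5, 7], [8, 1, 6]]

def lo_shu_grid(birth_date: str) -> dict:
    # Count digits 1-9 in the date string; zeros are ignored for placement.
    counts = Counter(int(ch) for ch in birth_date if ch.isdigit() and ch != "0")
    # Each cell holds the count of its layout digit (0 if absent).
    grid = [[counts.get(n, 0) for n in row] for row in LO_SHU_LAYOUT]
    return {
        "birth_date": birth_date,
        "grid": grid,
        "present_numbers": sorted(n for n in range(1, 10) if counts.get(n, 0) >= 1),
        "missing_numbers": sorted(n for n in range(1, 10) if counts.get(n, 0) == 0),
        "repeated_numbers": sorted(n for n in range(1, 10) if counts.get(n, 0) >= 2),
    }

result = lo_shu_grid("1990-05-23")
print(result["grid"])             # [[0, 2, 1], [1, 1, 0], [0, 1, 0]]
print(result["missing_numbers"])  # [4, 6, 7, 8]
```

The plane and per-digit trait annotations would layer on top of these counts; they are omitted here because the description does not specify which digits form each plane.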
  • ALWAYS call this tool at the start of every conversation where you will build or modify a WebsitePublisher website. Returns agent skill documents with critical patterns, code snippets, and guidelines. Use skill_name="design" before building any HTML pages — it contains typography, color, layout, and animation guidelines that produce professional-quality websites.
    Connector
  • Update an existing AI agent's configuration. All parameters are optional — only provided fields will be updated. Use this to:
    - Enable or disable an agent
    - Change agent name or description
    - Assign or detach a prompt
    - Change default send mode
    - Replace knowledge collections
    - Update agent status
    - Change agent priority for trigger matching (lower number = higher priority)
    - Override which tools the agent can/can't call on triggered runs
    - Override which context sections (situation, communication style, job state, conversation history, thread summary) the agent receives
    - Opt into boilerplate prompt sections (safety guidelines, data confidentiality, factual accuracy) — all default OFF
    Connector
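
An illustrative payload for a partial update; none of these field names are confirmed by the listing above, they only mirror the capabilities it describes:

```json
{
  "agent_id": "...",
  "enabled": true,
  "name": "Support triage agent",
  "priority": 10
}
```

Since all parameters are optional, send only the fields you want changed.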
  • Applies natural-language feedback to an existing perspective's outline (e.g., "make it shorter", "add a budget question", "warmer tone"). Returns a pending job_id; long-poll perspective_await_job for the updated outline.
    Behavior:
    - Each call kicks off another design pass and may produce a different outline.
    - ONLY valid for perspectives that already have an outline. Errors with "This perspective is still in draft. Use the respond tool to continue the setup conversation." for DRAFT perspectives.
    - Errors when the perspective is not found or you do not have access.
    - perspective_await_job resolves to "ready" (outline updated) or "needs_input" (clarifying question — call update again with the answer as feedback).
    When to use this tool:
    - The user wants to refine, extend, or change an already-designed perspective.
    - Iterating on tone, question set, or output fields after a preview test.
    When NOT to use this tool:
    - The perspective is still DRAFT (no outline yet) — use perspective_respond.
    - Creating a new perspective — use perspective_create.
    - Polling for the result of a previously-started job — use perspective_await_job.
    Connector
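
A sketch of the feedback loop; "feedback" is the description's own term, but the exact parameter names are assumptions:

```json
{
  "perspective_id": "...",
  "feedback": "Make it shorter and add a budget question."
}
```

Then long-poll perspective_await_job with the returned job_id; on "needs_input", call this tool again with the answer as the feedback.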
  • Returns the user's default workspace (id, uniqueName, name) so you can use it as the `workspace_id` argument for other tools without prompting.
    Behavior:
    - Read-only. Takes no parameters.
    - Picks the default by priority: explicit user default > first owned workspace with activity > invited workspace. Same logic the web app uses to auto-select.
    - If the user has no accessible workspaces, returns `{ workspace_id: null, uniqueName: null, name: null }` (does NOT error).
    When to use this tool:
    - Start of a conversation when the user hasn't named a workspace — avoids asking which one to use.
    - Whenever you need a `workspace_id` and the user implied "my workspace" or didn't specify.
    When NOT to use this tool:
    - The user names a specific workspace — use workspace_list to find it by name.
    - You already have a `workspace_id` and just want its details — use workspace_get.
    - Enumerating every accessible workspace — use workspace_list.
    If this returns nulls, the user has no accessible workspaces (owned or invited) — prompt them to create a new workspace or accept an outstanding invitation in the web app, rather than calling other workspace tools.
    Connector
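
The documented priority order can be sketched as follows; this is a minimal illustration, and the workspace dict shape here is invented for the example, not the actual API schema:

```python
def pick_default_workspace(workspaces: list) -> "dict | None":
    # Priority 1: an explicit user default wins outright.
    explicit = [w for w in workspaces if w.get("is_explicit_default")]
    if explicit:
        return explicit[0]
    # Priority 2: first owned workspace that has activity.
    owned_active = [w for w in workspaces if w.get("owned") and w.get("has_activity")]
    if owned_active:
        return owned_active[0]
    # Priority 3: fall back to an invited workspace; None if nothing is accessible.
    invited = [w for w in workspaces if w.get("invited")]
    return invited[0] if invited else None

ws = pick_default_workspace([
    {"workspace_id": "ws_a", "owned": True, "has_activity": True},
    {"workspace_id": "ws_b", "invited": True},
])
print(ws["workspace_id"])  # ws_a
```

When the function returns None, the tool's documented behavior is to return null fields rather than error.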
  • Connect memories to build knowledge graphs. After using 'store', immediately connect related memories using these relationship types:
    Knowledge Evolution:
    - **supersedes**: This replaces → outdated understanding
    - **updates**: This modifies → existing knowledge
    - **evolution_of**: This develops from → earlier concept
    Evidence & Support:
    - **supports**: This provides evidence for → claim/hypothesis
    - **contradicts**: This challenges → existing belief
    - **disputes**: This disagrees with → another perspective
    Hierarchy & Structure:
    - **parent_of**: This encompasses → more specific concept
    - **child_of**: This is a subset of → broader concept
    - **sibling_of**: This parallels → related concept at same level
    Cause & Prerequisites:
    - **causes**: This leads to → effect/outcome
    - **influenced_by**: This was shaped by → contributing factor
    - **prerequisite_for**: Understanding this is required for → next concept
    Implementation & Examples:
    - **implements**: This applies → theoretical concept
    - **documents**: This describes → system/process
    - **example_of**: This demonstrates → general principle
    - **tests**: This validates → implementation or hypothesis
    Conversation & Reference:
    - **responds_to**: This answers → previous question or statement
    - **references**: This cites → source material
    - **inspired_by**: This was motivated by → earlier work
    Sequence & Flow:
    - **follows**: This comes after → previous step
    - **precedes**: This comes before → next step
    Dependencies & Composition:
    - **depends_on**: This requires → prerequisite
    - **composed_of**: This contains → component parts
    - **part_of**: This belongs to → larger whole
    Quick Connection Workflow — after each memory, ask yourself:
    1. What previous memory does this update or contradict? → `supersedes` or `contradicts`
    2. What evidence does this provide? → `supports` or `disputes`
    3. What caused this or what will it cause? → `influenced_by` or `causes`
    4. What concrete example is this? → `example_of` or `implements`
    5. What sequence is this part of? → `follows` or `precedes`
    Example — Memory: "Found that batch processing fails at exactly 100 items". Connections:
    - `contradicts` → "hypothesis about memory limits"
    - `supports` → "theory about hardcoded thresholds"
    - `influenced_by` → "user report of timeout errors"
    - `sibling_of` → "previous pagination bug at 50 items"
    The richer the graph, the smarter the recall. No orphan memories!
    Args:
    - from_memory: Source memory UUID
    - to_memory: Target memory UUID
    - relationship_type: Type from the categories above
    - strength: Connection strength (0.0-1.0, default 0.5)
    - ctx: MCP context (automatically provided)
    Returns: Dict with success status, relationship_id, and connected memory IDs
    Connector
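
Using the documented Args, a single connection between two stored memories might look like this (the UUID values are placeholders):

```json
{
  "from_memory": "...",
  "to_memory": "...",
  "relationship_type": "contradicts",
  "strength": 0.8
}
```

Strength above the 0.5 default signals a high-confidence link in later recall.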
  • Create a new sncro session. Returns a session key and secret.
    Args:
    - project_key: The project key from CLAUDE.md (registered at sncro.net)
    - git_user: The current git username (for guest access control). If omitted or empty, the call is treated as a guest session — allowed only when the project owner has "Allow guest access" enabled.
    - brief: If True, skip the first-run briefing (tool list, tips, mobile notes) and return a compact response. Pass this on the second and subsequent create_session calls in the same conversation, once you already know how to use the tools.
    After calling this, tell the user to paste the enable_url in their browser. Then use the returned session_key and session_secret with all other sncro tools.
    If no project key is available: tell the user to go to https://www.sncro.net/projects to register their project and get a key. It takes 30 seconds — sign in with GitHub, click "+ Add project", enter the domain, and copy the project key into CLAUDE.md.
    Connector
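
Using the documented Args, a repeat call later in the same conversation might look like this (both values are placeholders; "octocat" stands in for the actual git username):

```json
{
  "project_key": "...",
  "git_user": "octocat",
  "brief": true
}
```

Omit `brief` (or pass false) on the first create_session of a conversation so you receive the full first-run briefing.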