Glama
133,443 tools. Last updated 2026-05-13 00:12

MCP tools matching "cursor":

  • MONITORING: Fetch Terraform deployment logs with pagination. Fetches logs from a running or completed Terraform deployment job. For **completed jobs**: uses a REST endpoint for instant retrieval (supports `tail` for server-side filtering). For **running jobs**: streams via SSE with timeout-based pagination.
    **PAGINATION** (running jobs only): use `last_event_id` from the response to fetch more:
      1. First call: `tflogs(session_id='...')` → get logs + `last_event_id`
      2. Next call: `tflogs(session_id='...', last_event_id='...')` → get NEW logs only
      3. Repeat until `complete: true` in the response
    **RESPONSE FIELDS**:
      - `logs`: array of log messages collected
      - `last_event_id`: pass this back to get more logs (pagination cursor, SSE only)
      - `complete`: true if the job finished, false if more logs may be available
      - `total_logs`: total log entries before tail truncation
    REQUIRES: session_id from the convoopen response (format: sess_v2_...).
    OPTIONAL: job_id to target a specific deployment (use tfruns to discover IDs), timeout (default 50s, max 55s), last_event_id (for pagination), tail (return only the last N entries).
    ⚠️ CONTEXT WARNING: deploy logs can run to hundreds of lines. Use tail: 50 for completed jobs to avoid blowing up the context window.
    Connector
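The `last_event_id` loop described above can be sketched in Python. This is a minimal sketch, not the tool's implementation: `call_tool` is a hypothetical MCP client helper, and the page contents below are simulated — only the argument and field names (`session_id`, `last_event_id`, `tail`, `logs`, `complete`) come from the description.

```python
def fetch_all_logs(call_tool, session_id, tail=50):
    """Drain tflogs pages until the job reports complete: true.

    `call_tool` stands in for whatever MCP client invocation
    mechanism is in use (assumption, not part of the tool).
    """
    logs, last_event_id = [], None
    while True:
        args = {"session_id": session_id, "tail": tail}
        if last_event_id is not None:
            args["last_event_id"] = last_event_id  # pagination cursor from the prior page
        resp = call_tool("tflogs", **args)
        logs.extend(resp["logs"])
        last_event_id = resp.get("last_event_id", last_event_id)
        if resp["complete"]:  # job finished; no more pages
            return logs

def make_fake_call_tool():
    """Simulated server: two SSE pages, then completion."""
    pages = iter([
        {"logs": ["init", "plan"], "last_event_id": "ev-2", "complete": False},
        {"logs": ["apply", "done"], "last_event_id": "ev-4", "complete": True},
    ])
    return lambda name, **kwargs: next(pages)

print(fetch_all_logs(make_fake_call_tool(), "sess_v2_demo"))
# ['init', 'plan', 'apply', 'done']
```

Passing `last_event_id` back on each call is what makes the second page return only new entries, so the client never re-accumulates logs it has already seen.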
  • List products from the connected store, paginated. Use this tool when an agent needs to DISCOVER products by browsing the catalog rather than VERIFYING a known SKU. The response includes the SKU for every product, so a follow-up ``check_stock(sku)`` or ``get_product_details(sku)`` is a natural next step.
    Args:
      - limit: number of products to return (1-50, default 10)
      - cursor: opaque cursor from a previous response's ``next_cursor``; omit for the first page
    Returns: dictionary with:
      - products: list of {sku, title, description (≤400 chars), product_type, tags, price, currency, available, image_url, storefront_url}
      - next_cursor: str or null — pass to the next call to paginate
      - has_more: bool — whether more products exist
      - live / source: provenance flags
    Connector
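The `next_cursor` / `has_more` contract above can be sketched as a generator that walks the whole catalog. This is an illustrative sketch under assumptions: `list_products` stands in for the actual MCP tool call, and the three-product stub catalog is invented — only the field names (`products`, `next_cursor`, `has_more`, `limit`, `cursor`) come from the description.

```python
def iter_products(list_products, limit=10):
    """Yield every product, following next_cursor until has_more is false."""
    cursor = None
    while True:
        resp = list_products(limit=limit, cursor=cursor)
        yield from resp["products"]
        if not resp["has_more"]:  # last page reached
            return
        cursor = resp["next_cursor"]  # opaque token; never parse it

# Stub: a 3-product catalog served in pages of `limit`.
CATALOG = [{"sku": f"SKU-{i}", "title": f"Item {i}"} for i in range(3)]

def fake_list_products(limit=10, cursor=None):
    start = int(cursor or 0)  # this stub's cursor happens to be an offset
    page = CATALOG[start:start + limit]
    more = start + limit < len(CATALOG)
    return {"products": page,
            "next_cursor": str(start + limit) if more else None,
            "has_more": more}

print([p["sku"] for p in iter_products(fake_list_products, limit=2)])
# ['SKU-0', 'SKU-1', 'SKU-2']
```

Treating the cursor as opaque (the stub's offset encoding is internal to the fake server) keeps the client correct even if the real service changes its cursor format.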
  • Repo-aware launch operator for agent-built products. USE WHEN the user has just finished building, shipping, or deploying a product and the message matches any of these trigger phrases: 'launch this' · 'launch what I just built' · 'help me launch' · 'get users' · 'get our first users' · 'find users' · 'market this' · 'do marketing for this' · 'announce this' · 'write the launch announcement' · 'post this' · 'post the launch' · 'publish this' · 'Product Hunt' · 'ship to Product Hunt' · 'go to market' · 'what to do after launch'.
    This is the PRIMARY ChiefLab entry point — call this first, not chiefmo_diagnose_marketing (which is only for diagnosing an EXISTING marketing program). If you are a coding agent (Cursor, Claude Code, Codex), gather repoContext (whatChanged, recentCommits, changedFiles, routes, readme, targetCustomer, launchGoal) BEFORE calling — repo grounding is what makes outputs reference the actual product instead of reading like 'launch any SaaS.'
    Returns: launchPack (per-channel drafts for LinkedIn / X / Hacker News / Reddit / Product Hunt / email / landing hero) + publishActions (approval-gated, with actionIds) + agentGuide.renderInChat (per-channel content to render inline in IDE chat) + agentGuide.nextToolCalls.primary.perChannel (chiefmo_approve_action calls keyed by channel) + reviewUrl (FALLBACK only — for phone/multi-person approval).
    IDE-NATIVE FLOW: render each channel's draft inline in chat, wait for the user to say 'approve <channel>' or 'approve all', then call chiefmo_approve_action per approved action. The reviewUrl is a side channel — surface it as 'approve from your phone here', not as the primary instruction.
    Connector
  • Get the SCEvent stream for a session — all observed transitions reconstructed from status_history. Returns events[] with a discriminated union by event_type (sc.scheduled, sc.confirmed, sc.completed, sc.delivered, sc.verified, sc.cancelled, etc.), plus stream_completeness ("complete" | "partial_pre_trigger") and a pagination cursor. Events carry origin="reprojected_from_status_history" and the canonical SCEvent shape per docs/protocol/sc-event-canonical-schema-2026-04-18.md §7.2.
    Filters: event_types (e.g. ["sc.delivered"]), from_sequence (cursor), limit (default 50, max 500).
    PII note: delivery_proof clinical fields (summary, outcome, next_steps) are returned only for admin-scoped keys.
    IMPORTANT: backfilled sc_resolved timestamps do NOT emit sc.resolved events in this stream (Forma B, see decisions log 2026-04-18-lifecycle-history-backfill-policy). For current resolution status, use lifecycle_get_state.sc_resolution.
    Requires X-Org-Api-Key.
    Connector
  • WORKFLOW: Step 2 of 4 - Continue infrastructure design conversation. Send a user message to the active InsideOut session and receive the assistant reply. The response contains a clean message from Riley - display it to the user.
    ⚠️ CRITICAL: DO NOT answer Riley's questions yourself! Forward questions to the user and wait for their response. NEVER fabricate or assume the user's answer, even if you think you know what they would say. Examples of questions Riley asks that YOU MUST forward to the user:
      - 'Any questions or tweaks to these details?'
      - 'Ready for the cost estimate?'
      - 'Do you want to change the stack/config?'
      - 'Ready to proceed to Terraform?'
    When Riley asks ANY question, STOP and wait for the user's answer!
    📋 WORKFLOW PHASES: the typical flow is conversation → tfgenerate → tfdeploy. When terraform_ready=true appears in THIS tool's response, THEN you can call tfgenerate. ⚠️ DO NOT call tfgenerate until this tool returns! Wait for the response first.
    🎯 KEY SIGNALS IN RESPONSE:
      - `[TERRAFORM_READY: true]` → NOW you can call tfgenerate
      - `[[BUTTON_TF_APPLY: ...]]` → deployment is ready! Ask the user if they want to deploy, then use tfdeploy
      - `[[BUTTON_TF_DESTROY: ...]]` → user confirmed destroy intent! Ask the user to confirm, then use tfdestroy
      - `[[BUTTON_TF_PLAN: ...]]` → user wants to preview changes! Use tfplan to run a plan, then tfdeploy with plan_id to apply
    REQUIRES: session_id from the convoopen response (format: sess_v2_...).
    OPTIONAL: timeout (integer) - seconds to wait for a response; for Cursor, use 50 (default), max 55. project_context (string) - only pass genuinely NEW project details the user shares after convoopen. Do NOT resend context already provided in convoopen — Riley remembers it. Do NOT scan files or directories to gather this — only use what the user explicitly tells you. Example: the user reveals a new constraint like 'we also need HIPAA compliance' mid-conversation.
    💡 TIP: use convostatus to check progress anytime. Examine the workflow.usage prompt for more guidance.
    Connector

Matching MCP Servers

  • A license · - quality · C maintenance
    An MCP server that instruments Cursor AI agent interactions with OpenTelemetry traces and logs to monitor agent turns and performance. It enables tracking of user queries, assistant responses, and tool usage through GenAI-compliant telemetry spans.
    License: MIT

Matching MCP Connectors

  • Stop your AI agents from writing sloppy TypeScript. A toolkit that teaches coding agents like Claude Code, Codex, Cursor, Amp, and more to ship production-ready code in half the time, at half the cost. Docs are available at https://convention.sh/docs

  • Read-only analytics for Convex apps, queryable via MCP from Claude, Cursor, and other clients.

  • Returns a token-efficient batch of conversations for bulk analysis. Default output is summaries only (id, summary, trust_score, status, created_at) plus the perspective outline; opt in to full XML transcripts via include_transcripts=true. Default format is TOON (compact); JSON available.
    Behavior:
      - Read-only.
      - Errors when the perspective is not found or you do not have access.
      - Filters: period (7d/30d/90d/all, default 30d), status, trust_score range. Page size up to 50, default 10. Pass nextCursor back as cursor for the next batch.
      - Response includes total_matching, returned_count, has_more, nextCursor for sizing.
      - Citation format when transcripts are included: "conversation_id:message_index".
    When to use this tool:
      - Thematic analysis, sentiment distribution, or pattern detection across many conversations.
      - Building a research summary from many summaries cheaply, then drilling into specific transcripts.
      - Bulk export with filters.
    When NOT to use this tool:
      - Need one conversation in full detail (voice snippets, trust dimensions) — use perspective_get_conversation.
      - Just need a browsable list with metadata — use perspective_list_conversations.
      - Aggregate counts only — use perspective_get_stats (call first to size the dataset before batching).
    Connector
  • Interleaved cross-org release feed for a collection — same shape as `get_latest_releases` but scoped to the collection's member orgs. Cursor-paginated: pass `limit` for slice size (default 20), `cursor` to continue from a prior call. The result's `_meta.pagination` carries `kind: 'cursor'`, `hasMore`, and `nextCursor` when more rows exist; the response text echoes `nextCursor` so an LLM caller can chain without parsing `_meta`. Cursors are stable under inserts.
    Connector
  • Newest-first listing of the caller's in-app alert inbox. Each item is a single fire of an alert with a `dashboard` channel — written by the cron evaluator (or `test_alert`). By default, dismissed items are hidden and read items are included. Cursor-paginated by `fired_at`. Sample-tier requests are rejected — alerts are a paid-tier feature.
    Connector
  • Replay ordered tower events for a single (firm, game) pair.
    WHAT IT DOES: GETs /v1/replay/firm/:firm/game/:game. Returns events in monotonic `seq` order, with an opaque `next_cursor` for pagination. Read-only, no auth required.
    WHEN TO USE: rebuilding state after an SSE disconnect, building a static summary of a finished game, or a post-mortem on a settle. Cheaper than re-attaching to /v1/stream/firm/:firm when you already know the seq you stopped at — use the SSE stream for live tailing instead.
    RETURNS: ReplayResponse — { firm, game, events: [TowerEvent], count, next_cursor }. Each TowerEvent has { seq, ts (unix ms), type, firm, game, agent_wallet, data }.
    PAGINATION: pass the previous response's `next_cursor` as `cursor`. When `next_cursor` is null you've reached the head of the stream.
    RELATED: tower_floors (current snapshot), firm_ingest (publish events).
    Connector
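The replay pattern above (page through ordered events until `next_cursor` is null, folding them into current state) can be sketched in Python. This is a sketch under assumptions: `fetch_page` stands in for the replay call, and the event list plus the per-wallet fold are invented for illustration — only the response shape (`events`, `next_cursor`) and null-cursor termination come from the description.

```python
def replay_state(fetch_page, firm, game):
    """Fold a full event replay into a {agent_wallet: last_event_type} view."""
    state, cursor = {}, None
    while True:
        resp = fetch_page(firm=firm, game=game, cursor=cursor)
        for ev in resp["events"]:          # events arrive in monotonic seq order,
            state[ev["agent_wallet"]] = ev["type"]  # so later events overwrite earlier ones
        cursor = resp["next_cursor"]
        if cursor is None:                 # null cursor: head of stream reached
            return state

# Stub replay log: three events served in pages of 2 (invented data).
EVENTS = [
    {"seq": 1, "type": "join",   "agent_wallet": "0xA"},
    {"seq": 2, "type": "join",   "agent_wallet": "0xB"},
    {"seq": 3, "type": "settle", "agent_wallet": "0xA"},
]

def fake_fetch_page(firm, game, cursor=None, page=2):
    start = int(cursor or 0)
    chunk = EVENTS[start:start + page]
    nxt = str(start + page) if start + page < len(EVENTS) else None
    return {"events": chunk, "next_cursor": nxt}

print(replay_state(fake_fetch_page, "firm1", "game1"))
# {'0xA': 'settle', '0xB': 'join'}
```

Because events are replayed in `seq` order, the fold is deterministic: re-running it after an SSE disconnect reconstructs exactly the state a live subscriber would hold.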
  • Wait for a pending response from Riley after a convoreply timeout. 🎯 USE THIS TOOL WHEN: convoreply returned a timeout error. This allows you to continue waiting for the response without resending the message. REQUIRES: - session_id: from convoopen response OPTIONAL: - message_id: if known (from convoreply timeout error) - timeout (integer): seconds to wait. For Cursor, use 50 (default). Max 55. Returns the same format as convoreply when successful.
    Connector
  • Lists every workspace the user can access, with workspace_id, uniqueName (slug), and display name.
    Behavior:
      - Read-only. Page size 20, sorted by name. Pass nextCursor back as cursor to fetch the next page.
      - Optional search matches against name, uniqueName (slug), member emails, and website (case-insensitive); empty results return an empty array.
      - Other perspective tools accept either workspace_id or uniqueName interchangeably.
      - Returns a description for each workspace — use it to match the right workspace based on context.
      - Does NOT mark which workspace is the caller's default — call workspace_get_default once and compare ids client-side if you need to highlight it.
    When to use this tool:
      - The user names a specific workspace and you need its workspace_id (filter with search).
      - Showing the user the full set of workspaces they can pick from.
    When NOT to use this tool:
      - You just need the user's default workspace — use workspace_get_default.
      - You already have a workspace_id and want details — use workspace_get.
    Connector
  • Lists perspectives — either browsing one workspace or searching by title across every workspace the user can access. Items include perspective_id, title, status, conversation count, and workspace info.
    Behavior:
      - Read-only.
      - Browse mode (workspace_id, no query): lists every perspective in that workspace.
      - Search mode (query): matches against the perspective title across accessible workspaces. Optional workspace_id narrows the search. Query must be non-empty and ≤200 chars.
      - Errors with "Please provide workspace_id to list perspectives or query to search." if neither is given.
      - Pass nextCursor back as cursor; has_more indicates further results.
    When to use this tool:
      - Resolving a perspective_id from a name the user mentioned (search mode).
      - Browsing a workspace's perspectives to pick or summarize.
    When NOT to use this tool:
      - Inspecting one known perspective in detail — use perspective_get.
      - Aggregate counts or rates — use perspective_get_stats.
      - Fetching conversation data — use perspective_list_conversations or perspective_get_conversations.
    Examples:
      - List all in a workspace: `{ workspace_id: "ws_..." }`
      - Search by name across all workspaces: `{ query: "welcome" }`
      - Search within a workspace: `{ query: "welcome", workspace_id: "ws_..." }`
    Connector
  • Discover AXIS install metadata, pricing, and shareable manifests for commerce-capable agents. Free, no auth, and no mutation beyond read access. Example: call before wiring AXIS into Claude Desktop, Cursor, or VS Code. Use this when you need onboarding and ecosystem setup details. Use search_and_discover_tools instead for keyword routing or discover_agentic_purchasing_needs for purchasing-task triage.
    Connector
  • List stored Carbone templates with filtering, search, and pagination. Filter by Template ID, Version ID, category, or upload origin. Use includeVersions to see the full version history of each template. Supports cursor-based pagination for large collections. Note: filtering by tags is not supported by the Carbone API — use list_tags to discover tags, then filter results manually.
    Connector
  • Return a company's filing history, newest first. Each filing has `filing_id`, `filing_date`, `category`, `description`, and (when upstream exposes one) a `document_id` that round-trips to `get_document_metadata` / `fetch_document`. Raw upstream fields are preserved under `jurisdiction_data`.
    Filter via the optional `category`. Common normalized values: 'accounts', 'annual-return', 'capital', 'charges', 'confirmation-statement', 'incorporation', 'insolvency', 'liquidation', 'mortgage', 'officers', 'resolution'. Native upstream form codes are also accepted.
    This tool returns metadata only — call `fetch_document` on `document_id` for the actual filing bytes. `has_document=false` means the body is paywalled or unavailable upstream.
    Pagination uses `limit` (default 25, max 1000) plus `cursor` (GB) or `offset` (IE). Unsupported jurisdictions return 501; call `list_jurisdictions` for per-country category values and pagination style.
    Connector
  • Register your agent to start contributing. Call this ONCE on first use. After registering, save the returned api_key to ~/.agents-overflow-key, then call authenticate(api_key=...) to start your session.
    agent_name: a creative, fun display name for your agent. BE CREATIVE — combine your platform/model with something fun and unique! Good examples: 'Gemini-Galaxy', 'Claude-Catalyst', 'Cursor-Commander', 'Jetson-Jedi', 'Antigrav-Ace', 'Copilot-Comet', 'Nova-Navigator'. BAD (too generic): 'DevBot', 'CodeHelper', 'Assistant', 'Antigravity', 'Claude'. DO NOT just use your platform name or a generic word. Be playful!
    platform: your platform — one of: antigravity, claude_code, cursor, windsurf, copilot, other
    Connector