Glama
127,390 tools. Last updated 2026-05-05 15:21

"Servers that support Server-Sent Events (SSE)" matching MCP tools:

  • Publish a single event from a partner firm into the tower stream.
    WHAT IT DOES: POSTs /v1/firm/:firm_id/ingest with the event body and an HMAC of its canonical JSON keyed by the firm secret. The broker validates the HMAC, assigns the next monotonic `seq`, and republishes on /v1/stream/firm/:firm and /v1/stream/tower so every subscriber gets it. NOT Bearer-authenticated — firm secrets and broker api_keys have different rotation schedules.
    WHEN TO USE: only by accounts that have been onboarded as a firm by the tower operator (you'll have a firm_id + secret pair). Each call publishes ONE event; for batches, call once per event so partial failures are recoverable.
    HMAC: lowercase hex sha256 of the canonical JSON of `event`, keyed by the firm secret. The tool computes the digest from `event` + `secret` locally, so the secret never leaves the local process — only the digest is sent to the broker.
    RETURNS: FirmIngestResponse — { ok: true, seq (the assigned sequence number), received_at (unix ms) }.
    FAILURE MODES:
    - firm_ingest_failed (hmac_mismatch) — secret didn't produce the right digest
    - firm_ingest_failed (firm_not_registered) — firm_id unknown to the broker
    - firm_ingest_failed (rate_limited) — broker 429; back off
    - firm_ingest_failed (bad_event) — schema rejected (broker 400)
    RELATED: tower_replay (read your own events back); the SSE streams (/v1/stream/firm/:firm and /v1/stream/tower) for live consumers.
    Connector
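    The digest step above can be sketched as follows — a minimal example assuming "canonical JSON" means key-sorted, compact-separator serialization (the broker's exact canonicalization rules aren't stated here), using Python's stdlib `hmac`:

    ```python
    import hashlib
    import hmac
    import json

    def firm_ingest_digest(event: dict, secret: str) -> str:
        # Hypothetical canonicalization: sorted keys, no extra whitespace.
        # The broker's actual canonical-JSON rules may differ.
        canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
        # Lowercase hex SHA-256 HMAC keyed by the firm secret; only this
        # digest (never the secret) accompanies the POSTed event body.
        return hmac.new(secret.encode(), canonical.encode(), hashlib.sha256).hexdigest()
    ```

    Because the serialization is key-order independent, two callers building the same event from differently ordered dicts produce the same digest — which is the property a canonical-JSON HMAC is meant to give you.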
  • MONITORING: Fetch Terraform deployment logs with pagination.
    Fetches logs from a running or completed Terraform deployment job. For **completed jobs**: uses the REST endpoint for instant retrieval (supports `tail` for server-side filtering). For **running jobs**: streams via SSE with timeout-based pagination.
    **PAGINATION** (running jobs only): use `last_event_id` from the response to fetch more:
    1. First call: `tflogs(session_id='...')` → get logs + `last_event_id`
    2. Next call: `tflogs(session_id='...', last_event_id='...')` → get NEW logs only
    3. Repeat until `complete: true` in the response
    **RESPONSE FIELDS**:
    - `logs`: array of log messages collected
    - `last_event_id`: pass this back to get more logs (pagination cursor, SSE only)
    - `complete`: true if the job finished, false if more logs may be available
    - `total_logs`: total log entries before tail truncation
    REQUIRES: session_id from the convoopen response (format: sess_v2_...).
    OPTIONAL: job_id to target a specific deployment (use tfruns to discover IDs), timeout (default 50s, max 55s), last_event_id (for pagination), tail (return only the last N entries).
    ⚠️ CONTEXT WARNING: deploy logs can be hundreds of lines. Use tail: 50 for completed jobs to avoid blowing up the context window.
    Connector
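    The three-step pagination loop above can be sketched like this, where `tflogs` stands in for the actual tool call and is assumed to return a dict with the response fields listed in the description:

    ```python
    def collect_deploy_logs(tflogs, session_id):
        """Drain logs from a running job by threading last_event_id
        through successive calls until the job reports complete.

        `tflogs` is a hypothetical callable mirroring the tool's
        signature; it returns a dict with `logs`, `last_event_id`,
        and `complete` keys.
        """
        logs = []
        cursor = None  # first call carries no cursor
        while True:
            resp = tflogs(session_id=session_id, last_event_id=cursor)
            logs.extend(resp["logs"])          # only NEW entries each round
            if resp["complete"]:
                return logs
            cursor = resp["last_event_id"]     # pagination cursor for next call
    ```

    In a real client you would also cap the number of rounds or honor the tool's 55-second timeout ceiling rather than looping unconditionally.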
  • Checks that the Strale API is reachable and the MCP server is running. Call this before a series of capability executions to verify connectivity, or when troubleshooting connection issues. Returns server status, version, tool count, capability count, solution count, and a timestamp. No API key required.
    Connector
  • Switch between local and remote DanNet servers on the fly. Changes the DanNet server endpoint at runtime without restarting the MCP server — useful for switching between development (local) and production (remote) servers.
    Args: server — the server to switch to. Options:
    - "local": use localhost:3456 (development server)
    - "remote": use wordnet.dk (production server)
    - Custom URL: any valid URL starting with http:// or https://
    Returns: dict with status information:
    - status: "success" or "error"
    - message: description of the operation
    - previous_url: the URL that was previously active
    - current_url: the URL that is now active
    Example:
    # Switch to local development server
    result = switch_dannet_server("local")
    # Switch to production server
    result = switch_dannet_server("remote")
    # Switch to custom server
    result = switch_dannet_server("https://my-custom-dannet.example.com")
    Connector
  • Breadcrumbs leading to one error: the events from the same session, before the error timestamp, in chronological order — the "what was the user doing right before this broke" view that turns a stack trace into a story. Given an anonymous_id and the error's timestamp, finds the session that was active at that timestamp and returns up to `limit` events from that session strictly earlier than the error. Each event row has its name, url, pathname, and referrer — enough to reconstruct the path the user took.
    Examples:
    - "what did this user do right before the error" → anonymous_id from errors.list, before_timestamp = the error's timestamp
    - "show me the last 10 actions before this crash" → limit=10
    - "did the user click anything before the unhandled rejection" → check the events list for cta_click / button_clicked names
    Limitations: scoped to the single session that contained the error. If the error fired during a session the user later ended (closed tab), and you pass a timestamp in a later session, you'll get that later session's events instead. Returns events only — not their full property bags (use users.journey if you need the per-event context). Default limit 20, max 100.
    Pairs with: `errors.list` (source of the anonymous_id and timestamp pair); `users.journey` (multi-session view of the same user, when one-session breadcrumbs aren't enough).
    Connector
  • Replay ordered tower events for a single (firm, game) pair.
    WHAT IT DOES: GETs /v1/replay/firm/:firm/game/:game. Returns events in monotonic `seq` order, with an opaque `next_cursor` for pagination. Read-only, no auth required.
    WHEN TO USE: rebuilding state after an SSE disconnect, building a static summary of a finished game, or a post-mortem on a settle. Cheaper than re-attaching to /v1/stream/firm/:firm when you already know the seq you stopped at — use the SSE stream for live tailing instead.
    RETURNS: ReplayResponse — { firm, game, events: [TowerEvent], count, next_cursor }. Each TowerEvent has { seq, ts (unix ms), type, firm, game, agent_wallet, data }.
    PAGINATION: pass the previous response's `next_cursor` as `cursor`. When `next_cursor` is null, you've reached the head of the stream.
    RELATED: tower_floors (current snapshot), firm_ingest (publish events).
    Connector
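    The cursor walk can be sketched as a simple loop; `fetch_page` here is a hypothetical stand-in for the GET /v1/replay/firm/:firm/game/:game call, returning a ReplayResponse-shaped dict:

    ```python
    def replay_all(fetch_page, firm, game):
        """Collect the full event history for one (firm, game) pair by
        following next_cursor until it is null (head of stream).

        `fetch_page` is a hypothetical HTTP helper: it takes the firm,
        game, and an optional `cursor`, and returns a dict with
        `events` and `next_cursor` keys.
        """
        events = []
        cursor = None  # first page needs no cursor
        while True:
            page = fetch_page(firm, game, cursor=cursor)
            events.extend(page["events"])   # events arrive in monotonic seq order
            cursor = page["next_cursor"]
            if cursor is None:              # null cursor = head of stream
                return events
    ```

    Since events are seq-ordered, a client resuming after an SSE disconnect can instead start from its last known seq and stop paging once it has caught up, rather than replaying from the beginning.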

Matching MCP Servers

  • License: A · Quality: A · Maintenance: C
    Integrates the LINE Messaging API with AI agents via the Model Context Protocol, supporting both stdio and SSE transport protocols. It allows agents to send messages, manage rich menus, and retrieve user profile information for LINE Official Accounts.
    Last updated · 10 · 2,759 · Apache 2.0
  • License: A · Quality: - · Maintenance: -
    Provides native integration with Apple Reminders and Calendar on macOS, enabling full CRUD operations and smart task management through the Model Context Protocol. It allows users to search, create, and organize reminders and calendar events using natural language via the macOS EventKit framework.
    Last updated · 338 · 96

Matching MCP Connectors

  • Return the officers of a company — current directors, secretaries, members, partners, board members, procurists / authorised signatories, liquidators, and (by default, where upstream exposes them) historical resignations. Each officer has a unified shape (jurisdiction, officer_id, name, role, appointed_on, resigned_on, is_active) plus a `jurisdiction_data` object carrying the raw upstream fields verbatim. Role labels are passed through in the registry's native language (e.g. Styremedlem, Předseda představenstva, Président, PREZES ZARZĄDU) — translate client-side as needed. Birth-date precision varies by jurisdiction (some registries publish YYYY-MM-DD, some only month + year, some nothing).
    `officer_id`, when present, can be passed to `get_officer_appointments` to retrieve every other company this person has been appointed to — cross-company tracing is one of the most powerful uses of this tool. Not every jurisdiction issues stable person IDs; corporate officers are usually keyed by the corporate's own company_id, while natural persons may be keyed by a synthetic index. Some registries mask officer names under GDPR / privacy rules — that masking is upstream, not server-side.
    Flags: `include_resigned` (default true) toggles historical entries on jurisdictions that expose both; `group_by_person` deduplicates the same person across consecutive appointments on jurisdictions that support it; `fresh: true` bypasses the cache. Flags are ignored on registries that don't support them. Jurisdictions that don't publish officer data (or that gate it behind paid extracts) return 501.
    Per-country caveats (role-label vocabulary, birth-date precision, resignation coverage, GDPR masking, 501 gating, delta-vs-snapshot semantics) are available on demand — call `list_jurisdictions({jurisdiction:"<code>"})` for the full schema, or `list_jurisdictions({supports_tool:"get_officers"})` for the country-support matrix. All registries are official government sources.
    Connector
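    Cross-company tracing with `officer_id` could be chained like this sketch, where `get_officers` and `get_officer_appointments` are hypothetical stand-ins for the tool calls, assumed to return the unified shapes described above:

    ```python
    def trace_officers(get_officers, get_officer_appointments, jurisdiction, company_id):
        """For each active officer of one company that has a stable
        person ID, pull every other appointment they hold.

        Both callables are stand-ins for the MCP tool calls; not every
        jurisdiction issues an officer_id, so entries without one are
        skipped rather than traced.
        """
        appointments = {}
        for officer in get_officers(jurisdiction=jurisdiction, company_id=company_id):
            if officer["is_active"] and officer.get("officer_id"):
                appointments[officer["name"]] = get_officer_appointments(
                    jurisdiction=jurisdiction, officer_id=officer["officer_id"]
                )
        return appointments
    ```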
  • Update one or more mutable fields on a registered webhook endpoint: url, events, active. At least one of these must be provided. Validation mirrors register_webhook (https-only, ≤ 2048 chars, URL safety blocklist). Returns the updated endpoint (secret is never returned by this tool — use rotate_webhook_secret for that).
    Connector
  • Run a generic M/M/c queue simulation. Provide an arrival rate (λ, arrivals/hour), a service rate per server (μ, customers/hour each server can finish), and a server count (c). Optional: distribution shapes, service coefficient of variation, run length. Returns per-hour metrics and an overall summary (avg wait, queue length, offered load, throughput). This is the primary tool for 'how many servers do I need?' / 'what's my average wait?' style questions. ALSO preferred over simulate_scenario for what-if questions about scheduled scenarios (Coffee Shop, ER) when the user wants flat uniform numbers — pull the peak params from describe_scenario and run them here. That usually matches user intent better than collapsing a schedule.
    Connector
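    As a sanity check against the simulator's averages, the steady-state M/M/c numbers have a closed form (Erlang C). This sketch computes the offered load, probability of waiting, and the average wait and queue length from λ, μ, and c:

    ```python
    from math import factorial

    def mmc_metrics(lam: float, mu: float, c: int) -> dict:
        """Closed-form M/M/c steady-state averages via Erlang C — useful
        for cross-checking the simulation, not a replacement for it
        (the simulator also handles non-exponential shapes)."""
        a = lam / mu            # offered load in Erlangs
        rho = a / c             # per-server utilization; must be < 1 for stability
        if rho >= 1:
            raise ValueError("unstable queue: lambda >= c * mu")
        # Erlang C: probability an arrival has to wait.
        tail = a**c / (factorial(c) * (1 - rho))
        p_wait = tail / (sum(a**k / factorial(k) for k in range(c)) + tail)
        wq = p_wait / (c * mu - lam)   # mean wait in queue (hours)
        lq = lam * wq                  # mean queue length (Little's law)
        return {"utilization": rho, "p_wait": p_wait,
                "avg_wait_hours": wq, "avg_queue_len": lq}
    ```

    For example, λ=1 arrival/hour against a single server with μ=2/hour gives 50% utilization and a half-hour average wait in queue — numbers the simulation's summary should approach over a long run.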
  • Get usage summary and billing events for a time period. Returns itemized events (scans, forwards, mail sends) with costs, plus period totals. Defaults to the current billing period if no dates are specified.
    Connector
  • POST endpoint that verifies an on-chain Base mainnet USDC transfer to the published wallet and returns a bearer token (tf_live_<64-char-hex>) plus credit count. Use after the agent has sent USDC, with the tx hash and the memo from tf_payment_buy_credits. The returned token is cross-redeemable on tensorfeed.ai.
    Connector
  • Lists Vocab Voyage's MCP starter prompts (also exposed via the standard MCP prompts/list endpoint). Useful for hosts that don't yet support prompts/list.
    Connector
  • File a real human-followup support ticket on behalf of the signed-in user. Use this when the user reports a billing problem, bug, account lockout, complaint about a tutor, or anything Sparkle/the agent cannot resolve from data. The ticket is emailed to the support team and a confirmation is sent to the user with a 1-business-day SLA. Categories: billing, bug, account, complaint, feedback, other. Requires sign-in.
    Connector
  • Connectivity check — returns the server version and current timestamp. Use it to verify the MCP server is reachable before calling other tools.
    Connector
  • Submit a support request to the Skala team on behalf of the user. Call this when the user needs human assistance that AI cannot provide, the question is too complex or high-risk, or the user explicitly asks for human support. IMPORTANT: Always confirm with the user before calling — describe what you will submit and ask for their approval. Before calling, compile the issue from conversation context into the description.
    Connector
  • Search the AI agent directory — find registered agents by name, capability, protocol support, or reputation. Powered by the live ERC-8004 registry via 8004scan (110,000+ agents indexed across 50+ chains). Returns agent identity, owner wallet/ENS, reputation scores, supported protocols (MCP/A2A/OASF), verification status, and links to 8004scan profiles. Examples: - "trading agents on Base" → search for trading agents filtered to Base chain - "MCP agents" → find agents that support the Model Context Protocol - "high reputation agents" → set minReputation to find top-scored agents
    Connector
  • Get the blockchain anchor proof for a specific date. Returns the SHA-256 root hash of all audit events from that date, the Base L2 transaction hash, and a Basescan verification link. Proves compliance records have not been tampered with. Free, no authentication required.
    Connector