Glama
130,857 tools. Last updated 2026-05-07 16:48

"A digest or summary of popular Reddit content" matching MCP tools:

  • Publish a single event from a partner firm into the tower stream.
    WHAT IT DOES: POSTs /v1/firm/:firm_id/ingest with the event body and an HMAC of its canonical JSON keyed by the firm secret. Broker validates the HMAC, assigns the next monotonic `seq`, and republishes on /v1/stream/firm/:firm + /v1/stream/tower so every subscriber gets it. NOT Bearer-authenticated — firm secrets and broker api_keys have different rotation schedules.
    WHEN TO USE: only by accounts that have been onboarded as a firm by the tower operator (you'll have a firm_id + secret pair). Each call publishes ONE event; for batches, call once per event so partial failures are recoverable.
    HMAC: lowercase hex sha256 of the canonical JSON of `event` keyed by the firm secret (see the sketch after this entry). The tool computes the digest from `event` + `secret` so the secret never leaves the local process. The secret itself is NOT sent to the broker — only the digest.
    RETURNS: FirmIngestResponse — { ok: true, seq (the assigned sequence number), received_at (unix ms) }.
    FAILURE MODES:
      • firm_ingest_failed (hmac_mismatch) — secret didn't produce the right digest
      • firm_ingest_failed (firm_not_registered) — firm_id unknown to the broker
      • firm_ingest_failed (rate_limited) — broker 429; back off
      • firm_ingest_failed (bad_event) — schema rejected (broker 400)
    RELATED: tower_replay (read your own events back), the SSE streams (/v1/stream/firm/:firm and /v1/stream/tower) for live consumers.
    Connector
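The HMAC contract above is concrete enough to sketch. A minimal Python illustration, assuming "canonical JSON" means sorted keys with compact separators (the entry doesn't pin that down) and using a hypothetical helper name:

```python
import hashlib
import hmac
import json

def firm_ingest_digest(event: dict, secret: str) -> str:
    """Lowercase hex HMAC-SHA256 of the canonical JSON of `event`.

    Assumption: canonical JSON = sorted keys + compact separators.
    The broker's actual canonicalization rules may differ.
    """
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hmac.new(secret.encode(), canonical.encode(), hashlib.sha256).hexdigest()
```

If the canonicalization here differs from the broker's (say, in key ordering or whitespace), the call fails with firm_ingest_failed (hmac_mismatch).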
  • Compile a list of blocks into a Claude-optimized structured XML prompt. Takes the JSON returned by decompose_prompt (or manually crafted blocks) and produces a ready-to-use XML prompt with a token estimate.
    Args: blocks_json: JSON-stringified list of blocks. Each block: {"type": "role|objective|...", "content": "...", "label": "...", "description": "...", "summary": ""} (see the example after this entry).
    Returns: the compiled XML prompt with a token estimate.
    Connector
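To make the block shape concrete, here is a hypothetical blocks_json built in Python; the field values are invented for illustration:

```python
import json

# Blocks mirroring the documented shape; content is illustrative only.
blocks = [
    {"type": "role", "content": "You are a careful technical editor.",
     "label": "role", "description": "Persona for the model", "summary": ""},
    {"type": "objective", "content": "Summarize today's top Reddit threads.",
     "label": "objective", "description": "Primary task", "summary": ""},
]
blocks_json = json.dumps(blocks)  # pass this string as the blocks_json arg
```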
  • Schedule multiple posts at once from CSV content.
    USE THIS WHEN:
      • User has a spreadsheet or list of posts to schedule
      • Planning a content calendar for a month
      • Migrating content from another tool
    CSV FORMAT (required columns):
      • platform: linkedin, instagram, x, tiktok, threads
      • scheduled_time: ISO 8601 format (e.g., 2024-02-15T10:00:00Z)
      • text: Post content/caption
    OPTIONAL COLUMNS:
      • media_url: Image or video URL
      • first_comment: First comment to add (Instagram/LinkedIn)
      • hashtags: Additional hashtags to append
    PROCESS (a sketch follows this entry):
      1. First call with validate_only: true to check for errors
      2. Review validation report with user
      3. Call again with validate_only: false to execute import
    Connector
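A sketch of the documented CSV shape and the validate-then-execute flow; the rows and the local column check are illustrative, not part of the tool:

```python
import csv
import io

# Example CSV using the documented required and optional columns.
csv_content = """platform,scheduled_time,text,media_url,first_comment,hashtags
linkedin,2024-02-15T10:00:00Z,Launch day!,https://example.com/img.png,Thanks for reading!,#launch
x,2024-02-16T14:30:00Z,Weekly digest is out,,,#digest
"""

# Local sanity check mirroring step 1 of the documented process.
required = {"platform", "scheduled_time", "text"}
rows = list(csv.DictReader(io.StringIO(csv_content)))
missing = required - set(rows[0].keys())
assert not missing, f"missing required columns: {missing}"
# Then: call the tool with validate_only=True, review the report with the
# user, and call again with validate_only=False to execute the import.
```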
  • List all generated reports with status and summary info. Returns an array of report objects with id, report_type, status, title, and summary. Use the report id with atlas_get_report for details or atlas_download_report to download completed PDFs. Free.
    Connector
  • Search notes by keyword or list recent notes. Returns summaries (id + description) only. Use get_note to retrieve the full content of a specific note. With query: Case-insensitive keyword search on description and content. Without query: Returns most recently updated notes.
    Connector
  • List, add, or remove webhook and digest subscriptions; configure or clear the agent's callback URL.
    WHEN TO USE
      - You are an answering agent and want push delivery of new consultations in your domain instead of polling browse_unanswered.
      - You want a daily summary of activity in a category, without real-time webhook overhead.
      - You need to set or rotate the HTTPS callback URL where Almured will POST signed webhook events.
      - You want to see your current subscription state (categories, callback domain, whether a webhook secret is set).
    WHEN NOT TO USE
      - For one-off browsing — use browse_consultations or browse_unanswered.
      - For unsubscribing entirely — call clear_callback (stops all webhook delivery) and unsubscribe from each category individually for digests.
    BEHAVIOR
      - Mutating (except action='list'). Auth required: API key as Authorization: Bearer <key>. Rate-limited to 10 req/min per agent.
      - Action contract:
        - 'list' — returns notification_categories, digest_categories, callback_url_domain, webhook_secret_set flag.
        - 'subscribe' — adds categories. Requires categories=comma-separated slugs and subscription_type ('notification' for real-time webhooks, 'digest' for daily summary). Validates against the live taxonomy.
        - 'unsubscribe' — removes categories. Same args as subscribe.
        - 'set_callback' — sets or rotates callback_url. Must start with 'https://'. On first set, returns a webhook_secret you must store immediately — it is shown once and used to verify HMAC-SHA256 signatures on inbound webhooks.
        - 'clear_callback' — removes callback_url and secret. All webhook delivery stops; digest delivery is unaffected.
      - Subscribing without a callback_url is allowed, but no webhooks fire until one is set.
      - Webhook events are signed with the secret using HMAC-SHA256; verify the signature on every inbound POST (see the sketch after this entry).
    WORKFLOW
      - Set the callback URL first (set_callback), then subscribe to categories.
      - If you suspect the secret leaked, call set_callback again with the same URL to rotate.
      - Combine with get_expertise_badge to track how subscription-driven response volume affects your tier over time.
    Connector
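Since the entry above says to verify the HMAC-SHA256 signature on every inbound POST, here is a minimal Python verification sketch. The header name and the exact signed payload aren't specified, so treating the raw request body as the signed message is an assumption:

```python
import hashlib
import hmac

def verify_webhook(secret: str, body: bytes, signature_hex: str) -> bool:
    """Return True if the inbound POST matches the stored webhook secret.

    Assumption: the signature is a hex HMAC-SHA256 of the raw body.
    """
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information.
    return hmac.compare_digest(expected, signature_hex)
```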

Matching MCP Servers

  • Reddit MCP — public Reddit data via JSON endpoints (no auth required)

  • Reddit trend data over time, with growth for any topic or brand. Free key at trendsmcp.ai

Matching MCP Connectors

  • Read a workspace's doc (TipTap rich-text) body. Format is negotiable via `format`:
      - `markdown` (default) — CommonMark + GFM, ready to feed to an LLM or render in a non-ProseMirror surface.
      - `content` — TipTap JSON, round-trippable into update_doc for structural edits.
      - `text` — plain text, best for search, summarisation, and word-count heuristics.
      - `all` — the legacy three-in-one shape.
    Default is `markdown` because it's the slice agents need 95% of the time, and the JSON form of a long doc can blow past the agent harness's tool-result token cap. Pass `format: "content"` only when you're round-tripping into update_doc for a structural edit. A workspace can hold any combination of doc and table surfaces, one or many of either kind; omit `surface_slug` to read the primary doc surface, or pass it to target a specific doc tab (use `list_surfaces` to enumerate). An unwritten or absent doc returns the requested format empty (markdown="", content={}, text=""); a `surface_slug` that doesn't match any live doc surface 404s.
    Connector
  • Fetch a sanitized public sample section from Refpro's reference deal library. Inputs: deal_type (FF | BRRRR | NC) and section (summary | financials | risk_notes | full). Returns sanitized example markdown content for the requested section, plus a deep-link URL to the canonical version on refpro.ai. The 'full' section stitches summary, financials, and risk_notes in order. All content is sanitized example data — not a real customer deal — and is safe to surface verbatim to end users. No network calls; samples are loaded once at module init.
    Connector
  • Return a compact titles-only tree of the course: course → modules → lessons. Ideal for agents to plan reorders, spot empty lessons, or summarize a course. Does NOT include lesson body content.
    Connector
  • Propose compressing multiple related learnings into one consolidated learning. Call this AFTER get_compression_candidates and synthesizing the compressed content. Same approval flow as submit_learning: show preview to user, then confirm_compression on approval or reject_compression on decline. The compressed content should follow the format: (Issue) summary, then agent-specific nuances (e.g. grok adds X, claude adds Y).
    Connector
  • Get the full results of a completed Sieve analysis. Returns the Sieve Score (0-140), meeting decision (Take Meeting/Pass/Need More Info), executive summary, key strengths, and key concerns. Args: deal_id: The deal ID returned by sieve_screen. sections: Comma-separated filter (e.g. 'summary,strengths,concerns'). Options: summary, profiles, findings, questions, strengths, concerns. Empty returns everything. Score and decision are always included.
    Connector
  • Fetches the complete markdown content of an Apollo documentation page using its slug (everything after https://apollographql.com/docs). Documentation slugs can be obtained from the SearchDocs tool results. Use this after ApolloDocsSearch to read full pages rather than just excerpts. Content is returned in chunks, with the totalCount field giving the total number of chunks; start with a chunkIndex of 0 and fetch each chunk (see the sketch after this entry).
    Connector
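A minimal loop over the documented chunking contract; fetch_chunk stands in for the actual tool call, and its return shape (a `content` string alongside `totalCount`) is an assumption:

```python
def fetch_full_page(fetch_chunk, slug: str) -> str:
    """Fetch chunk 0, then the rest up to totalCount, and join them."""
    first = fetch_chunk(slug, 0)
    parts = [first["content"]]
    for i in range(1, first["totalCount"]):
        parts.append(fetch_chunk(slug, i)["content"])
    return "".join(parts)
```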
  • Returns all dataset categories and popular tags available on the Nova Scotia Open Data portal. Use this first to discover valid category names before calling search_datasets with a category filter.
    Connector
  • List all job descriptions for a hiring context. Returns an array of JD objects with id, title, and content. Use JD content as jd_text in atlas_fit_match, atlas_fit_rank, and atlas_start_jd_fit_batch. Requires context_id from atlas_create_context or atlas_list_contexts. Free.
    Connector
  • Retrieve the complete content of a specific email using its ID from search_email. Use this to read the full email body (text or HTML), see all recipients (to, cc, bcc), and access the complete headers. This is necessary after search_email since search only returns snippets, not the actual email content.
    Connector
  • Get the scraped markdown content of a source URL Peec has indexed. Use this after get_url_report to inspect the actual content an AI engine read — useful for content gap analysis and competitive content comparison.
    Input notes:
      - url is the full URL. Copy it verbatim from get_url_report output. Trailing slashes and scheme variations change the resolved source ID.
      - Returns 404 if Peec has no record of the URL (it hasn't been scraped from any project).
      - max_length caps the returned content (default 100000 characters). If the stored content is longer, truncated=true and you can re-request with a higher max_length (see the sketch after this entry).
    Returned fields:
      - url, title, domain, channel_title: page metadata
      - classification: domain-level classification
      - url_classification: page-level classification (HOMEPAGE, LISTICLE, COMPARISON, ...)
      - content: markdown content, already extracted via Mozilla Readability and converted with Turndown GFM. null when the URL is tracked but scraping hasn't completed yet (can take up to 24h).
      - content_length: original character length before truncation (0 when content is null)
      - truncated: true if content was truncated to max_length
      - content_updated_at: ISO timestamp of last scrape, or null if not yet scraped
    Connector
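The truncated and content_length fields above suggest a simple retry pattern, sketched here with get_url_content as a stand-in name for the tool call:

```python
def get_full_content(get_url_content, url: str, max_length: int = 100_000):
    result = get_url_content(url=url, max_length=max_length)
    if result["content"] is None:
        return None  # tracked but not yet scraped (can take up to 24h)
    if result["truncated"]:
        # content_length is the original length, so one re-request suffices.
        result = get_url_content(url=url, max_length=result["content_length"])
    return result["content"]
```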
  • Get a compact intelligence digest for a set of brands — perfect for watchlist summaries, competitive briefings, and daily reports. Returns for each brand: current signal, AI visibility score+trend+grade, key relationship edges (integrations, powered-by, acquisitions), and capabilities. Excludes competitive edges to keep output focused. Args: slugs: List of brand slugs (up to 25). Returns: Dict with "digest" array (one entry per brand) and "missing_slugs".
    Connector
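For watchlists longer than the documented 25-slug cap, a chunk-and-merge sketch; get_brand_digest is a stand-in name for the tool call:

```python
def digest_all(get_brand_digest, slugs: list[str]) -> dict:
    """Call the digest tool in batches of 25 and merge the results."""
    merged = {"digest": [], "missing_slugs": []}
    for i in range(0, len(slugs), 25):
        batch = get_brand_digest(slugs=slugs[i : i + 25])
        merged["digest"].extend(batch["digest"])
        merged["missing_slugs"].extend(batch["missing_slugs"])
    return merged
```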
  • Use this tool to retrieve the full content of a single document or up to 20 documents in a single call. The document names should be obtained from the `parent` field of results from a call to the `search_documents` tool. Set the `names` parameter to a list of document names.
    Connector
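A sketch for reading more than 20 documents: take `parent` names from search_documents results, dedupe them, and batch the calls. fetch_documents is a stand-in for the actual tool:

```python
def read_all(fetch_documents, search_results: list[dict]) -> list:
    """Dedupe `parent` names (order-preserving) and fetch in batches of 20."""
    names = list(dict.fromkeys(r["parent"] for r in search_results))
    docs = []
    for i in range(0, len(names), 20):
        docs.extend(fetch_documents(names=names[i : i + 20]))
    return docs
```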