Glama
127,003 tools. Last updated 2026-05-05 06:55

"A tool for extracting text from videos" matching MCP tools:

  • Fetches any public web page and returns clean, readable plain text stripped of HTML, navigation, scripts, advertisements, and boilerplate. Returns the page title, meta description, word count, and main body text ready for analysis or summarisation. Use this tool when an agent needs to read the content of a specific web page or article URL — for example to summarise an article, extract facts from a page, verify a claim by reading the source, or convert a web page into plain text to pass to another tool. Pass article URLs returned by web_news_headlines to this tool to read full article content. Do not use this tool to discover current news headlines — use web_news_headlines instead. Does not execute JavaScript — best suited for standard HTML content pages. Will not work with paywalled, login-protected, or JavaScript-rendered single-page applications.
    Connector
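    A minimal call sketch for the entry above; the tool name `web_page_text` and the `url` argument are assumptions, not confirmed by the listing (only the sibling tool web_news_headlines is named):

    ```json
    {
      "name": "web_page_text",
      "arguments": {
        "url": "https://example.com/2026/05/some-article"
      }
    }
    ```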
  • Expand one author into a deduplicated paper list. This is the main author->paper traversal tool and supports research filters. Use `author_id` when you already know the exact author, or `author_name` plus `candidate_index` after `scholarfetch_author_candidates`. Supported comma-separated `filters`: year>=YYYY, year<=YYYY, year=YYYY, has:abstract, has:doi, has:pdf, venue:<text>, title:<text>, doi:<text>. If you pass `engines`, it must include `openalex`.
    Connector
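    A minimal call sketch; `author_id`, `filters`, and `engines` are documented above, while the tool name `scholarfetch_author_papers` and the ID value are assumptions:

    ```json
    {
      "name": "scholarfetch_author_papers",
      "arguments": {
        "author_id": "A5023888391",
        "filters": "year>=2020,has:doi,venue:NeurIPS",
        "engines": ["openalex"]
      }
    }
    ```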
  • Primary tool for reading a filing's content. Pass a `document_id` from `list_filings` / `get_financials`. MANDATORY for any substantive answer - filing metadata (dates, form codes, descriptions) alone doesn't answer the user; the numbers and text live inside the document.
    RESPONSE SHAPES:
    • `kind='embedded'` (PDF up to ~20 MB; structured text up to `max_bytes`): returns `bytes_base64` with the full document, `source_url_official` (evergreen registry URL for citation, auto-resolved), and `source_url_direct` (short-TTL signed proxy URL). For PDFs the host converts bytes into a document content block - you read it natively including scans.
    • `kind='resource_link'` (document exceeds `max_bytes`): NO `bytes_base64`. Returns `reason`, `next_steps`, the two source URLs, plus `index_preview` for PDFs (`{page_count, text_layer, outline_present, index_status}`). Use the navigation tools below.
    WORKFLOW FOR kind='resource_link':
    1. Read `index_preview.text_layer`. Values: `full` (every page has real text), `partial` (mixed), `none` (scanned / image-only), `oversized_skipped` (indexing skipped), `encrypted` / `failed`.
    2. If `full` / `partial`: call `get_document_navigation` (outline + previews + landmarks) and/or `search_document` to locate pages. If `none` / `oversized_skipped`: skip search.
    3. Call `fetch_document_pages(pages='N-M', format='pdf'|'text'|'png')` to get actual content. Prefer `pdf` for citations, `text` for skim, `png` for scanned or oversized.
    CRITICAL RULES:
    • **Navigation-aids-only**: previews, snippets, landmark matches, and outline titles returned by the navigation tools are for LOCATING pages. NEVER cite them as source material - quote only from `fetch_document_pages` output or this tool's inline bytes.
    • **No fallback to memory**: if this tool fails (rate limit, 5xx, disconnect), do NOT fill in names / numbers / dates from training data. Tell the user what failed and offer retry or `source_url_official`.
    • Don't reflexively retry with a larger `max_bytes` - for big PDFs the bytes are unreadable to you anyway. Use the navigation tools instead.
    `source_url_official` is auto-resolved from a session-side cache populated by the most recent `list_filings` call. The optional `company_id` / `transaction_id` / `filing_type` / `filing_description` inputs are OVERRIDES for the rare case where `document_id` didn't come through `list_filings`. Per-country document availability, format, and pricing: call `list_jurisdictions({jurisdiction:"<code>"})`.
    Connector
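    A minimal sketch of step 3 of the resource_link workflow; `fetch_document_pages` and its `pages` / `format` parameters are named above, while the `document_id` argument and all values are assumptions:

    ```json
    {
      "name": "fetch_document_pages",
      "arguments": {
        "document_id": "doc_abc123",
        "pages": "12-15",
        "format": "pdf"
      }
    }
    ```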
  • DEFAULT tool for user-facing reciter-listing questions. Use this for ANY user-facing query like 'what reciters are available', 'who can recite for me', 'list Quran reciters'. This is the FINAL tool call for these requests; do not follow it with lookup_reciters. Shows the catalog in an interactive widget the user can browse. ONLY use lookup_reciters instead when EITHER (a) the user explicitly asks for plain text / raw data, OR (b) you will pipe the result into another tool (e.g. play_ayahs) in the same turn without showing the list. When in doubt, use this widget.
    Connector
  • Fetch the next page of a large tool response. Use the nextCursor from _pagination in a previous response. This tool loads data into the context window — prefer the artifact download URL when available.
    Connector
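    A minimal call sketch; the tool name `fetch_next_page` and the `cursor` argument name are assumptions (the listing only says to pass the nextCursor value from `_pagination`):

    ```json
    {
      "name": "fetch_next_page",
      "arguments": {
        "cursor": "eyJvZmZzZXQiOjEwMH0="
      }
    }
    ```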
  • Wait for a pending response from Riley after a convoreply timeout. 🎯 USE THIS TOOL WHEN: convoreply returned a timeout error. This allows you to continue waiting for the response without resending the message.
    REQUIRES:
    - session_id: from convoopen response
    OPTIONAL:
    - message_id: if known (from convoreply timeout error)
    - timeout (integer): seconds to wait. For Cursor, use 50 (default). Max 55.
    Returns the same format as convoreply when successful.
    Connector
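    A minimal call sketch; `session_id` and `timeout` are documented above, while the tool name `convowait` and the session value are assumptions:

    ```json
    {
      "name": "convowait",
      "arguments": {
        "session_id": "sess_01HXYZ",
        "timeout": 50
      }
    }
    ```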

Matching MCP Servers

  • Enables LLM clients to programmatically generate videos using Plainly's API by listing available video templates, retrieving template parameters, submitting render requests, and checking render status.
    license: F, quality: -, maintenance: B

Matching MCP Connectors

  • Retrieves AI-generated summaries of web search results using Brave's Summarizer API. This tool processes search results to create concise, coherent summaries of information gathered from multiple sources.
    When to use:
    - When you need a concise overview of complex topics from multiple sources
    - For quick fact-checking or getting key points without reading full articles
    - When providing users with summarized information that synthesizes various perspectives
    - For research tasks requiring distilled information from web searches
    Returns a text summary that consolidates information from the search results. Optional features include inline references to source URLs and additional entity information.
    Requirements: Must first perform a web search using brave_web_search with summary=true parameter. Requires a Pro AI subscription to access the summarizer functionality.
    Connector
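    A sketch of the prerequisite step; `brave_web_search` and its `summary` parameter are named above, the query value is illustrative, and the argument for passing the resulting summary key to the summarizer is not specified in the listing:

    ```json
    {
      "name": "brave_web_search",
      "arguments": {
        "query": "impacts of microplastics on marine life",
        "summary": true
      }
    }
    ```

    The summarizer tool is then called with the summary key returned by that search.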
  • Delete a single item by id. `kind` MUST match the item type: 'text' for text nodes, 'line' for freehand strokes, 'image' for images — the wrong kind silently targets the wrong table and is a common mistake. Get the id + type from `get_board` (texts[], lines[], images[]). There is no bulk/erase-all tool: loop if you need to delete multiple items.
    Connector
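    A minimal call sketch; `id` and `kind` follow the description above, while the tool name `delete_item` and the id value are assumptions:

    ```json
    {
      "name": "delete_item",
      "arguments": {
        "id": "txt_9f2c",
        "kind": "text"
      }
    }
    ```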
  • Schedule multiple posts at once from CSV content.
    USE THIS WHEN:
    • User has a spreadsheet or list of posts to schedule
    • Planning a content calendar for a month
    • Migrating content from another tool
    CSV FORMAT (required columns):
    • platform: linkedin, instagram, x, tiktok, threads
    • scheduled_time: ISO 8601 format (e.g., 2024-02-15T10:00:00Z)
    • text: Post content/caption
    OPTIONAL COLUMNS:
    • media_url: Image or video URL
    • first_comment: First comment to add (Instagram/LinkedIn)
    • hashtags: Additional hashtags to append
    PROCESS:
    1. First call with validate_only: true to check for errors
    2. Review validation report with user
    3. Call again with validate_only: false to execute import
    Connector
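    A minimal validation-pass sketch; `validate_only` and the CSV columns are documented above, while the tool name `bulk_schedule_posts` and the `csv_content` argument name are assumptions:

    ```json
    {
      "name": "bulk_schedule_posts",
      "arguments": {
        "csv_content": "platform,scheduled_time,text\nlinkedin,2024-02-15T10:00:00Z,Launch day!\nx,2024-02-15T12:00:00Z,We just shipped v2.0",
        "validate_only": true
      }
    }
    ```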
  • Semantic search across the Civis knowledge base of agent build logs. Returns the most relevant solutions for a given problem or query. Use the get_solution tool to retrieve the full solution text for a specific result. Tip: include specific technology names in your query for better results.
    Connector
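    A minimal query sketch; the tool name `search_solutions` and the `query` argument name are assumptions (only the companion get_solution tool is named above):

    ```json
    {
      "name": "search_solutions",
      "arguments": {
        "query": "Playwright browser automation times out inside Docker"
      }
    }
    ```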
  • Get customer testimonials tied to a specific project (by slug or keyword) from the testimonials table. Returns star rating, customer name, project name, and quote text. Use to source social proof or case-study quotes for a particular job. For unfiltered reviews, use list_reviews.
    Connector
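    A minimal call sketch; the tool name `get_project_testimonials`, the `project_slug` argument name, and the slug value are all assumptions (the listing only says lookup is by slug or keyword):

    ```json
    {
      "name": "get_project_testimonials",
      "arguments": {
        "project_slug": "kitchen-remodel-2024"
      }
    }
    ```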
  • Create a job description from text within a hiring context. Returns a JD object with 'id' and stored content. Use JD content as jd_text in atlas_fit_match, atlas_fit_rank, atlas_start_jd_fit_batch, and atlas_start_jd_analysis. Requires context_id from atlas_create_context or atlas_list_contexts. Free.
    Connector
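    A minimal call sketch; `context_id` is documented above, while the tool name `atlas_create_jd`, the `text` argument name, and the values are assumptions:

    ```json
    {
      "name": "atlas_create_jd",
      "arguments": {
        "context_id": "ctx_01J8",
        "text": "Senior Backend Engineer: 5+ years building distributed systems in Go."
      }
    }
    ```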
  • Add a document to a deal's data room. Creates the deal if needed. This is the primary way to get documents into Sieve for screening. Upload a pitch deck, financials, or any document -- then call sieve_screen to analyze everything in the data room. Provide company_name to create a new deal (or find existing), or deal_id to add to an existing deal. Provide exactly one content source: file_path (local file), text (raw text/markdown), or url (fetch from URL).
    Args:
    - title: Document title (e.g. "Pitch Deck Q1 2026").
    - company_name: Company name -- creates deal if new, finds existing if not.
    - deal_id: Add to an existing deal (from sieve_deals or previous sieve_dataroom_add).
    - website_url: Company website URL (used when creating a new deal).
    - document_type: Type: 'pitch_deck', 'financials', 'legal', or 'other'.
    - file_path: Path to a local file (PDF, DOCX, XLSX). The tool reads and uploads it.
    - text: Raw text or markdown content (alternative to file).
    - url: URL to fetch document from (alternative to file).
    Connector
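    A minimal call sketch using the tool and argument names documented above; the values are illustrative:

    ```json
    {
      "name": "sieve_dataroom_add",
      "arguments": {
        "title": "Pitch Deck Q1 2026",
        "company_name": "Acme Robotics",
        "document_type": "pitch_deck",
        "file_path": "/tmp/acme-pitch-deck.pdf"
      }
    }
    ```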
  • DEFAULT tool for user-facing translation display. Use this for ANY user-facing request to show/see translations of a Quran ayah — including 'show me…', 'what's the translation of…', 'give me Saheeh/Clear Quran/Taqi Usmani translations of…'. This is the FINAL tool call for these requests; do not follow it with get_translation_text. ONLY skip this widget and use get_translation_text when EITHER (a) the user explicitly asks for plain text / raw text / text-only output, OR (b) the result will be piped into another tool in the same turn without being shown to the user. When in doubt, use this widget. SLUG HANDLING: If the user names a specific translator (e.g. 'Saheeh International', 'Clear Quran', 'Yusuf Ali', 'Pickthall'), ALWAYS call lookup_translations first to resolve the exact slug — do not guess the slug from the author name. Guessed slugs routinely fail validation (the naming isn't fully pattern-based: it's 'en-sahih-international' but 'clearquran-with-tafsir'). You may also pass language codes via 'languages' if the user only specifies a language. Each query must include at least one of languages or translations. Use ayah keys in 'surah:ayah' format (for example '2:255'). In queries[].languages use ISO 639-1 codes (for example 'en', 'ur'), not language names. Do not use 'ar'; Arabic translation is unsupported in this tool.
    Connector
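    A minimal query sketch; `queries[].languages`, the 'surah:ayah' key format, and the 'en-sahih-international' slug are documented above, while the tool name `show_translations` and the `ayahs` field name are assumptions:

    ```json
    {
      "name": "show_translations",
      "arguments": {
        "queries": [
          {
            "ayahs": ["2:255"],
            "translations": ["en-sahih-international"]
          }
        ]
      }
    }
    ```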
  • Get full document content by URL from DevExpress documentation. Use this tool to retrieve the complete markdown content of a specific documentation page. PREREQUISITE: ALWAYS call `devexpress_docs_search` before using this tool to get valid URLs. The URL parameter must be obtained from the results of the `devexpress_docs_search` tool.
    Connector
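    A minimal call sketch; the `url` parameter is documented above (it must come from devexpress_docs_search results), while the tool name `devexpress_docs_get_content` and the URL value are assumptions:

    ```json
    {
      "name": "devexpress_docs_get_content",
      "arguments": {
        "url": "https://docs.devexpress.com/some-page-from-search-results"
      }
    }
    ```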
  • Starts a crawl job on a website and extracts content from all pages.
    **Best for:** Extracting content from multiple related pages, when you need comprehensive coverage.
    **Not recommended for:** Extracting content from a single page (use scrape); when token limits are a concern (use map + batch_scrape); when you need fast results (crawling can be slow).
    **Warning:** Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control.
    **Common mistakes:** Setting limit or maxDiscoveryDepth too high (causes token overflow) or too low (causes missing pages); using crawl for a single page (use scrape instead). Using a /* wildcard is not recommended.
    **Prompt Example:** "Get all blog posts from the first two levels of example.com/blog."
    **Usage Example:**

    ```json
    {
      "name": "firecrawl_crawl",
      "arguments": {
        "url": "https://example.com/blog/*",
        "maxDiscoveryDepth": 5,
        "limit": 20,
        "allowExternalLinks": false,
        "deduplicateSimilarURLs": true,
        "sitemap": "include"
      }
    }
    ```

    **Returns:** Operation ID for status checking; use firecrawl_check_crawl_status to check progress.
    **Safe Mode:** Read-only crawling. Webhooks and interactive actions are disabled for security.
    Connector