Glama
127,264 tools. Last updated 2026-05-05 13:07

"A tool or website for finding website backlinks" matching MCP tools:

  • Permanently delete a published website. The site will be immediately inaccessible. Requires authentication via edit_key or api_key, and requires confirm: true as a safety mechanism to prevent accidental deletion. Use this when a user explicitly asks you to remove or delete a site. IMPORTANT: Always confirm with the user before calling this tool — deletion cannot be undone.
    Connector
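    A minimal call sketch in the JSON envelope used elsewhere on this page. The tool and argument names (delete_site, site_id, edit_key) are hypothetical, inferred from the description; only one of edit_key or api_key would be supplied, and confirm: true is the required safety flag:
    ```json
    {
      "name": "delete_site",
      "arguments": {
        "site_id": "my-landing-page",
        "edit_key": "ek_1234567890",
        "confirm": true
      }
    }
    ```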
  • Lists every automation configured on a perspective with its trigger, channel (sensitive details redacted), execution mode, enabled state, schedule description, and recent error/success metadata.
    Behavior:
    - Read-only.
    - Errors when the perspective is not found or you do not have access.
    - Sensitive parts of channel delivery (e.g., webhook auth headers, full URLs) are redacted before being returned.
    - has_error / last_error / last_error_at / failure_count appear only when there have been recent failures.
    When to use this tool:
    - Auditing what's wired up on a perspective before adding more automations.
    - Finding an automation_id to feed into automation_update, automation_delete, or automation_test.
    - Diagnosing a failing automation via last_error / failure_count.
    When NOT to use this tool:
    - Creating a new automation — use automation_create.
    - Toggling enabled or changing config — use automation_update.
    - Verifying delivery actually works — use automation_test.
    Connector
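    A hypothetical call sketch; the tool name automation_list is inferred from the sibling tools named in the description (automation_create, automation_update, automation_delete, automation_test), and the perspective argument name is assumed:
    ```json
    {
      "name": "automation_list",
      "arguments": {
        "perspective_id": "persp_42"
      }
    }
    ```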
  • Get the full intelligence profile for a brand by its URL slug.
    Args:
    - slug: URL-safe brand identifier (e.g. "pacvue", "hubspot", "snowflake"). Use search_brands to discover slugs if unsure.
    Returns: Full brand profile including company overview (3 paragraphs), signal summary, structured FAQs, vertical, tier/rank, website, tags, and source URL. Returns an error dict if the brand is not found.
    Connector
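    A call sketch under assumptions: the tool name get_brand is hypothetical; slug is the documented argument, shown with one of the example values from the description:
    ```json
    {
      "name": "get_brand",
      "arguments": {
        "slug": "hubspot"
      }
    }
    ```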
  • Given a product ID, find similar products across the entire catalog. Useful for "more like this" recommendations or finding alternatives. Returns compact product cards, not full variant detail; call get_product for SKU-level variants, exact variant prices, merchant description, store info, and all images. Returns page and hasNextPage. Returns up to 10 results per page, paginated (max 3 pages).
    Connector
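    A hedged usage sketch; the tool name find_similar_products and the page argument are assumptions, while the product-ID input and pagination (max 3 pages) come from the description:
    ```json
    {
      "name": "find_similar_products",
      "arguments": {
        "product_id": "prod_8891",
        "page": 1
      }
    }
    ```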
  • Explicitly request a synthesis contract for a named 3D object. Use this tool when generate_r3f_code returns status SYNTHESIS_REQUIRED, or to pre-generate geometry constraints before calling generate_r3f_code.
    Complexity tiers:
    - low — 4 to 7 parts. Only Box, Sphere, Cylinder geometries. Best for: mobile banners, thumbnails, low-end devices.
    - medium — 10 to 20 parts. Adds Capsule and Torus geometries. Best for: website sections, embedded widgets, tablets.
    - high — 28+ parts. All geometries. Full emissive detail. Best for: hero sections, desktop showcase, ad campaigns.
    If target is set to "mobile" and complexity is not explicitly provided, complexity defaults to "low" automatically.
    This tool does NOT generate geometry. It returns the synthesis_contract with constraints calibrated to the requested complexity tier. The LLM generates the actual JSX and passes it to generate_r3f_code via synthesized_components.
    Connector
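    A call sketch, assuming a tool name of request_synthesis_contract and an object_name argument (both hypothetical); the complexity and target values follow the tiers documented above:
    ```json
    {
      "name": "request_synthesis_contract",
      "arguments": {
        "object_name": "vintage rotary telephone",
        "complexity": "medium",
        "target": "desktop"
      }
    }
    ```
    Per the description, omitting complexity with target "mobile" would default the tier to "low".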

Matching MCP Servers

Matching MCP Connectors

  • Improve security writing, score it against rubrics, plan IR and product strategy.

  • Public remote MCP server for Walnai AI Consulting services, pricing, calculator, FAQs, and adoption.

  • Add a document to a deal's data room. Creates the deal if needed. This is the primary way to get documents into Sieve for screening. Upload a pitch deck, financials, or any document, then call sieve_screen to analyze everything in the data room. Provide company_name to create a new deal (or find an existing one), or deal_id to add to an existing deal. Provide exactly one content source: file_path (local file), text (raw text/markdown), or url (fetch from URL).
    Args:
    - title: Document title (e.g. "Pitch Deck Q1 2026").
    - company_name: Company name. Creates a deal if new, finds the existing one if not.
    - deal_id: Add to an existing deal (from sieve_deals or a previous sieve_dataroom_add).
    - website_url: Company website URL (used when creating a new deal).
    - document_type: Type: 'pitch_deck', 'financials', 'legal', or 'other'.
    - file_path: Path to a local file (PDF, DOCX, XLSX). The tool reads and uploads it.
    - text: Raw text or markdown content (alternative to file).
    - url: URL to fetch the document from (alternative to file).
    Connector
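    An illustrative call using the documented arguments (the tool name sieve_dataroom_add appears in the description itself; all values here are made up). Note that exactly one content source is given, in this case file_path:
    ```json
    {
      "name": "sieve_dataroom_add",
      "arguments": {
        "title": "Pitch Deck Q1 2026",
        "company_name": "Acme Robotics",
        "website_url": "https://acmerobotics.example",
        "document_type": "pitch_deck",
        "file_path": "/home/user/decks/acme-q1-2026.pdf"
      }
    }
    ```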
  • Lists every workspace the user can access, with workspace_id, uniqueName (slug), and display name.
    Behavior:
    - Read-only. Page size 20, sorted by name. Pass nextCursor back as cursor to fetch the next page.
    - Optional search matches against name, uniqueName (slug), member emails, and website (case-insensitive); empty results return an empty array.
    - Other perspective tools accept either workspace_id or uniqueName interchangeably.
    - Returns description for each workspace — use it to match the right workspace based on context.
    - Does NOT mark which workspace is the caller's default — call workspace_get_default once and compare ids client-side if you need to highlight it.
    When to use this tool:
    - The user names a specific workspace and you need its workspace_id (filter with search).
    - Showing the user the full set of workspaces they can pick from.
    When NOT to use this tool:
    - You just need the user's default workspace — use workspace_get_default.
    - You already have a workspace_id and want details — use workspace_get.
    Connector
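    A paginated-search sketch; the tool name workspace_list is hypothetical, while search and cursor are the documented parameters (the cursor value is a placeholder for the nextCursor returned by a previous page, and a first call would omit it):
    ```json
    {
      "name": "workspace_list",
      "arguments": {
        "search": "marketing",
        "cursor": "eyJwYWdlIjoyfQ"
      }
    }
    ```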
  • ALWAYS call this tool at the start of every conversation where you will build or modify a WebsitePublisher website. Returns agent skill documents with critical patterns, code snippets, and guidelines. Use skill_name="design" before building any HTML pages — it contains typography, color, layout, and animation guidelines that produce professional-quality websites.
    Connector
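    A minimal sketch, assuming this is the get_skill tool referenced by the project-listing entry below; skill_name="design" is the value the description itself recommends before building HTML pages:
    ```json
    {
      "name": "get_skill",
      "arguments": {
        "skill_name": "design"
      }
    }
    ```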
  • Create a new Kochava FAA (Free App Analytics) account.
    IMPORTANT: The user MUST explicitly agree to the FAA Terms of Service before account creation. If tos_agreed is False, this tool will return the TOS link and stop — do NOT submit the form. Call kochava_free_app_analytics_get_tos() to retrieve and present the TOS to the user first, then call this tool again with tos_agreed=True once the user confirms agreement.
    DISPLAY INSTRUCTIONS: When this tool returns a successful response, you MUST display the 'next_steps' field content to the user EXACTLY as written — word-for-word, preserving ALL text, formatting, line breaks, numbering, and bullet points. Do NOT summarize, rephrase, reword, or omit any part of the 'next_steps' content. Every sentence must be shown to the user as-is.
    FAA Terms of Service: https://s34035.pcdn.co/wp-content/uploads/2023/08/FAA-Web-Sign-Up-TOS-8-15-23.pdf
    Example (after the user reviews and agrees to the TOS):
    kochava_free_app_analytics_create_acc_and_get_auth_key(
      first_name="Jane", last_name="Smith", email_address="jane@example.com",
      phone_number="5551234567", company="Acme Corp", website="www.acme.com",
      company_address_line_1="123 Main St", company_city="Sandpoint",
      company_region="Idaho", company_postal_code="83864",
      country="United States", tos_agreed=True
    )
    Connector
  • Perform a comprehensive audit of a website URL. Fetches the URL content ONCE and provides a combined report with:
    - Classification: category, subcategory, language, sentiment, demographics
    - SEO Analysis: score, grade, issues, recommendations
    - EEAT Analysis: experience, expertise, authoritativeness, trustworthiness scores
    - AEO Analysis: AI answer engine optimization score, metrics, issues, signals (includes full Citation Readiness analysis in the nested 'citation' key)
    - Advertiser Matching: best-fit advertising networks with scores
    - Similar Sites: competitor/related sites from the same category
    This is more efficient than calling classify_url, analyze_seo, analyze_eeat, analyze_aeo, select_advertiser, and find_similar_sites separately, as it only fetches the page once.
    Args:
    - url: The website URL to audit (e.g., "https://example.com").
    Returns: Comprehensive audit report with:
    - url: The analyzed URL
    - classification: Category, subcategory, language, sentiment, demographics
    - seo: Score, grade, issues, recommendations
    - eeat: EEAT score, grade, category scores, issues, signals
    - aeo: AEO score, grade, metrics, issues, signals (includes citation results)
    - advertisers: Matched advertising networks with scores
    - similar_sites: Related sites from the same category (up to 10)
    - cached: Whether the result was from cache
    Connector
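    A call sketch with a hypothetical tool name (audit_website); url is the only documented argument:
    ```json
    {
      "name": "audit_website",
      "arguments": {
        "url": "https://example.com"
      }
    }
    ```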
  • List all projects the authenticated user has access to. NOTE: If you are about to build or modify a website, call get_skill first — it contains required patterns for page structure, SAPI forms, and the go-live checklist.
    Connector
  • Save works extracted from a website import after the artist has confirmed them. Call this after presenting import_from_website results and receiving artist approval. Creates the works, triggers auto-provenance, and imports images from the website in one operation. Set skip: true for any works the artist wants to exclude (duplicates, unwanted). Pass artist-corrected values for any fields the artist edited during review. Use get_profile to obtain artist_id — never ask the user for it. After success, ask if they'd like to see any of the imported works — then call get_work to show the visual card.
    Connector
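    A loose sketch of the confirmation payload; the tool name save_imported_works and the works/skip field layout are assumptions based on the description, and artist_id would come from get_profile rather than the user:
    ```json
    {
      "name": "save_imported_works",
      "arguments": {
        "artist_id": "artist_789",
        "works": [
          { "title": "Untitled No. 3", "year": 2021, "skip": false },
          { "title": "Duplicate Study", "skip": true }
        ]
      }
    }
    ```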
  • Starts a crawl job on a website and extracts content from all pages.
    **Best for:** Extracting content from multiple related pages, when you need comprehensive coverage.
    **Not recommended for:** Extracting content from a single page (use scrape); when token limits are a concern (use map + batch_scrape); when you need fast results (crawling can be slow).
    **Warning:** Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control.
    **Common mistakes:** Setting limit or maxDiscoveryDepth too high (causes token overflow) or too low (causes missing pages); using crawl for a single page (use scrape instead). Using a /* wildcard is not recommended.
    **Prompt Example:** "Get all blog posts from the first two levels of example.com/blog."
    **Usage Example:**
    ```json
    {
      "name": "firecrawl_crawl",
      "arguments": {
        "url": "https://example.com/blog/*",
        "maxDiscoveryDepth": 5,
        "limit": 20,
        "allowExternalLinks": false,
        "deduplicateSimilarURLs": true,
        "sitemap": "include"
      }
    }
    ```
    **Returns:** Operation ID for status checking; use firecrawl_check_crawl_status to check progress.
    **Safe Mode:** Read-only crawling. Webhooks and interactive actions are disabled for security.
    Connector
  • Analyze a website URL for SEO optimizations. Fetches the URL content and analyzes the HTML for possible SEO improvements. Results are cached for fast subsequent lookups. Rate limited to 1 request per minute per domain.
    Args:
    - url: The website URL to analyze (e.g., "https://example.com").
    Returns: SEO analysis result with:
    - url: The analyzed URL
    - score: Overall SEO score (0-100)
    - grade: Letter grade (A-F)
    - issues: List of SEO issues found (critical, warnings, info)
    - meta: Extracted meta information (title, description, headings, etc.)
    - recommendations: Prioritized list of improvements
    - cached: Whether the result was from cache
    Connector
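    A call sketch; the name analyze_seo is borrowed from the companion audit entry above, which lists it as a standalone tool, and url is the documented argument:
    ```json
    {
      "name": "analyze_seo",
      "arguments": {
        "url": "https://example.com"
      }
    }
    ```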
  • Detect website technology stack: CMS, frameworks, CDN, analytics tools, web servers, languages (via HTTP headers + HTML analysis). Use for passive reconnaissance; for full audit use audit_domain. Free: 100/hr, Pro: 1000/hr. Returns {technologies: [{name, category, confidence%, version}]}.
    Connector
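    An illustrative response under the return shape stated in the description ({technologies: [{name, category, confidence%, version}]}); the entries themselves are made up:
    ```json
    {
      "technologies": [
        { "name": "WordPress", "category": "CMS", "confidence": 97, "version": "6.5" },
        { "name": "Cloudflare", "category": "CDN", "confidence": 99, "version": null }
      ]
    }
    ```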
  • Fetch a web page and return its content as text, Markdown, or HTML. Includes rate limiting (2s per domain, max 10 req/min) for legal compliance. Automatically handles HTML-to-text conversion. Max response size: 1MB. Use for OEM verification and manufacturer website scraping.
    Connector
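    A hedged call sketch; the tool name fetch_page and the url/format argument names are assumptions, with the format value drawn from the documented output options (text, Markdown, or HTML):
    ```json
    {
      "name": "fetch_page",
      "arguments": {
        "url": "https://manufacturer.example/specs",
        "format": "markdown"
      }
    }
    ```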
  • Get a multi-day weather forecast for any Swiss location. Returns daily summaries with temperature, precipitation, and weather icons. This uses official MeteoSwiss Open Data — the same forecasts powering the MeteoSwiss app and website.
    Accepts:
    - Postal codes: "8001" (Zurich), "3000" (Bern), "1200" (Geneva)
    - Station abbreviations: "ZUE" (Zurich Fluntern), "BER" (Bern)
    - Place names: "Zurich", "Basel", "Lugano"
    Coverage: ~6000 Swiss locations (all postal codes + weather stations + mountain points). Forecast horizon: up to 9 days. Updated hourly.
    Connector
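    A call sketch under assumptions: the tool name get_forecast and the location/days argument names are hypothetical; "8001" is one of the documented Zurich postal-code inputs, and the day count stays within the 9-day horizon:
    ```json
    {
      "name": "get_forecast",
      "arguments": {
        "location": "8001",
        "days": 5
      }
    }
    ```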
  • Perform live HTTP GET and analyze security headers: CSP, HSTS, X-Frame-Options, X-Content-Type-Options, Permissions-Policy, Referrer-Policy. Use to audit live website headers; use check_headers to validate headers you already have. Free: 100/hr, Pro: 1000/hr. By default header values are truncated to 500 chars (CSP can exceed 4 KB on large sites); pass include='full' for the full raw value. Returns {headers_present, headers_missing, findings, total_score}.
    Connector
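    A sketch with a hypothetical tool name (analyze_security_headers); include='full' is the documented way to get untruncated header values:
    ```json
    {
      "name": "analyze_security_headers",
      "arguments": {
        "url": "https://example.com",
        "include": "full"
      }
    }
    ```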
  • Search or fetch posts from the MetaMask Embedded Wallets community forum (builder.metamask.io). Use for troubleshooting real user issues, finding workarounds, and checking if an issue is known. Provide a query to search or a topic_id to read the full discussion.
    Connector
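    A search sketch; the tool name search_forum is hypothetical, while query and topic_id are the documented inputs (pass one or the other):
    ```json
    {
      "name": "search_forum",
      "arguments": {
        "query": "social login redirect loop"
      }
    }
    ```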