Glama
127,309 tools. Last updated 2026-05-05 14:03

"An overview or guide to Search Engine Optimization (SEO)" matching MCP tools:

  • Search npm or PyPI to estimate how crowded a package category is before you claim that a market is empty, niche, or competitive. Use this when you have a category or search phrase such as 'edge orm' and want live result counts plus representative matches. Do not use it to compare exact known package names or to infer adoption from downloads; it reflects search results, not market share. Registry responses are cached for 5 minutes.
    Connector
  • Get Peec's opportunity-scored action recommendations for improving brand visibility in AI search engines. **Always call with `scope=overview` first** to see which slices have the biggest opportunity, then drill down into `owned`, `editorial`, `reference`, or `ugc` with the surfaced url_classification or domain.
    Required parameters (read before calling) — every call must include:
    - `project_id` — the project to analyze.
    - `scope` — one of `overview` | `owned` | `editorial` | `reference` | `ugc`. **Start with `scope=overview`.**
    Recommended: `start_date` and `end_date` (ISO YYYY-MM-DD). Optional — if omitted, defaults to the last 30 days (today − 30d to today). Prefer a 30-day window unless the user asks for a different one.
    Per-scope extras (the call will fail without them):
    - `scope=owned` → `url_classification` is **required** (e.g. "LISTICLE").
    - `scope=editorial` → `url_classification` is **required** (e.g. "LISTICLE").
    - `scope=reference` → `domain` is **required** (e.g. "wikipedia.org").
    - `scope=ugc` → `domain` is **required** (e.g. "reddit.com", "youtube.com").
    - `scope=overview` → no extras beyond the base params.
    Use this tool whenever the user asks for recommendations, next steps, what to do, how to improve, "what actions should I take", or any "based on this data, what should I do?" question. Never invent SEO advice.
    Two-step workflow:
    - Step 1 — `scope=overview`: returns opportunity rollups grouped by `action_group_type` × (`url_classification` | `domain`). These are *navigation metadata*, NOT the recommendations themselves. Use them to find which slices have the largest gap.
    - Step 2 — drill down: for each high-opportunity slice, call again with the matching scope (`owned` | `editorial` | `reference` | `ugc`) to get the actual textual recommendations (the `text` column, often with markdown links to examples or targets).
    Mapping — how to turn an overview row into the follow-up call:
    - `action_group_type=OWNED`, `url_classification=X` → call `scope=owned, url_classification=X`.
    - `action_group_type=EDITORIAL`, `url_classification=X` → call `scope=editorial, url_classification=X`.
    - `action_group_type=REFERENCE`, `domain=Y` → call `scope=reference, domain=Y`.
    - `action_group_type=UGC`, `domain=Y` → call `scope=ugc, domain=Y`.
    Worked example — overview returns a row `{action_group_type: "UGC", domain: "youtube.com", opportunity_score: 0.30, ...}`. Follow up with `scope=ugc, domain="youtube.com"` and you get rows like `{text: "Contact [AutoPedia](https://...). Ask them for a collaboration.", group_type: "UGC", domain: "youtube.com", opportunity_score: 3, ...}`.
    Response shape — returns columnar JSON `{columns, rows, rowCount}`; each row is an array of values matching column order.
    `scope=overview` columns:
    - `action_group_type`: OWNED | EDITORIAL | REFERENCE | UGC.
    - `url_classification`: populated for OWNED / EDITORIAL rows (e.g. "LISTICLE", "ARTICLE", "COMPARISON"); `null` for REFERENCE / UGC.
    - `domain`: populated for REFERENCE / UGC rows (e.g. "youtube.com", "wikipedia.org"); `null` for OWNED / EDITORIAL.
    - `opportunity_score`: continuous. **Use this to sort and rank** — it's the reliable ordering signal.
    - `relative_opportunity_score`: 1–3 tier (1=Low, 2=Medium, 3=High). **Use this to label** strength in prose; too coarse to sort by.
    - `gap_percentage`, `coverage_percentage`, `used_ratio`, `used_total`: supporting stats.
    Exactly one of `url_classification` / `domain` is populated per overview row — that's the value to pass to the follow-up call.
    `scope=owned | editorial | reference | ugc` columns:
    - `text`: the recommendation string; may include markdown links.
    - `group_type`: OWNED | EDITORIAL | REFERENCE | UGC.
    - `url_classification`: e.g. "LISTICLE" (may be null).
    - `domain`: e.g. "youtube.com" (may be null).
    - `opportunity_score`: continuous — sort/rank by this.
    - `relative_opportunity_score`: 1–3 tier — label strength with this (1=Low, 2=Medium, 3=High).
    Presenting results — after overview + drill-downs, pick the shape that fits:
    - **Strong signal** (top slice's `opportunity_score` is clearly ahead AND its drill-down returned 2+ rows whose `text` contains a markdown link): one sentence of reasoning tied to the user's question (call out the biggest lever), then 2–3 named slices with 2–3 bullets pulled verbatim from the drill-down `text`.
    - **Moderate signal**: compact list, one sentence per slice, bullets only where the drill-down returned specific targets.
    - **Low signal** (overview empty or top `opportunity_score` very low): single line, e.g. "Top opportunity: {slice} (Low). Low signal this period; prompts need a few more daily cycles to stabilize."
    Display conventions — never use raw enum keys in user-facing prose.
    Group type (`action_group_type` / `group_type`) — humanize (Title Case): `OWNED` → "Owned" (content on your own domains); `EDITORIAL` → "Editorial" (third-party editorial coverage — news, blogs, reviews); `REFERENCE` → "Reference" (reference sources like Wikipedia); `UGC` → "UGC" (user-generated content — Reddit, YouTube, forums; keep as acronym); `OTHER` → "Other".
    URL classification (`url_classification`) — humanize to lowercase, pluralizing naturally when the sentence calls for it: `HOMEPAGE` → "homepage"; `CATEGORY_PAGE` → "category page"; `PRODUCT_PAGE` → "product page"; `LISTICLE` → "listicle"; `COMPARISON` → "comparison page"; `PROFILE` → "profile"; `ALTERNATIVE` → "alternative"; `DISCUSSION` → "discussion"; `HOW_TO_GUIDE` → "how-to guide"; `ARTICLE` → "article"; `OTHER` → "other".
    Opportunity strength — lead with a **Low / Medium / High** label derived from `relative_opportunity_score` (round to nearest integer, clamp to [1, 3]): 1 → "Low"; 2 → "Medium"; 3 → "High". Sort and rank by `opportunity_score` (continuous); **verbalize** strength with the Low/Medium/High tier. The raw `opportunity_score` is optional supporting context in parens — never the headline number.
    Gap percentage (`gap_percentage`, 0–1 ratio) — lead with a plain-language qualifier; the raw % can follow in parens when useful: ≥0.90 → "nearly all missing"; 0.60–0.89 → "wide gap"; 0.30–0.59 → "partial gap"; <0.30 → "narrow gap".
    Example of the preferred style (follow this phrasing):
    > The biggest lever is Owned listicles — High, nearly all missing (100%). Build listicle-style pages on yourbrand.com that target "best X" queries.
    >
    > Secondary: YouTube UGC (Medium, wide gap), Reddit UGC (Medium, partial gap), Editorial listicles (Medium, nearly all missing). Full list: https://app.peec.ai/actions.
    Close with one line: "Secondary opportunities: {slice} ({Low|Medium|High}), {slice} ({Low|Medium|High}). Full list: https://app.peec.ai/actions." Use the drill-down `text` field as the source of truth. Never invent recommendations, targets, or names. Sort by `opportunity_score`; label strength via `relative_opportunity_score`.
    Connector
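The overview-to-drill-down mapping above can be sketched as a small helper. This is illustrative only: `build_followup` and the plain-dict row are stand-ins for how a client might use the described fields, not part of the Peec API.

```python
# Hypothetical helper: turn a Peec `scope=overview` row into the
# parameters for the matching drill-down call, per the mapping above.

def build_followup(row: dict) -> dict:
    """Map an overview row to drill-down call parameters."""
    scope = row["action_group_type"].lower()  # e.g. "UGC" -> "ugc"
    params = {"scope": scope}
    if scope in ("owned", "editorial"):
        # url_classification is required for these scopes
        params["url_classification"] = row["url_classification"]
    else:  # "reference" or "ugc": domain is required
        params["domain"] = row["domain"]
    return params

overview_row = {"action_group_type": "UGC", "domain": "youtube.com",
                "url_classification": None, "opportunity_score": 0.30}
print(build_followup(overview_row))
# -> {'scope': 'ugc', 'domain': 'youtube.com'}
```

The same function covers all four mapping rows, since exactly one of `url_classification` / `domain` is populated per overview row.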
  • List the AI engine channels tracked by Peec. A model channel is a stable identifier for an AI engine (e.g. "openai-0" = ChatGPT UI) that persists even as the underlying model is upgraded — use it to filter or break down reports by engine without worrying about model version changes. Use this tool to resolve channel descriptions (e.g. "ChatGPT UI", "Perplexity") to channel IDs before filtering reports (`model_channel_id` filter), and to label channel IDs from report output before presenting results.
    - `current_model_id`: the model ID currently active in the channel — pass this as `model_id` where reports require it.
    - `is_active`: whether the channel is enabled for this project — inactive channels return empty data.
    - `unsupported_country_codes`: country codes that cannot be used with this channel (chats requested for those countries are not created).
    Returns columnar JSON: `{columns, rows, rowCount}`. Columns: `id`, `description`, `current_model_id`, `is_active`, `unsupported_country_codes`.
    Connector
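Decoding the columnar `{columns, rows, rowCount}` shape into per-channel dicts can look like the sketch below. The payload is fabricated sample data with the documented column names; the channel IDs and model IDs are placeholders.

```python
# Sketch: resolve a channel description to its ID from the columnar
# response shape described above. `payload` is invented sample data.

payload = {
    "columns": ["id", "description", "current_model_id", "is_active",
                "unsupported_country_codes"],
    "rows": [
        ["openai-0", "ChatGPT UI", "gpt-x", True, []],
        ["pplx-0", "Perplexity", "sonar-y", True, ["CN"]],
    ],
    "rowCount": 2,
}

# Each row is an array of values matching column order.
channels = [dict(zip(payload["columns"], row)) for row in payload["rows"]]

# Only active channels return data, so filter on is_active before mapping.
by_description = {c["description"]: c["id"] for c in channels if c["is_active"]}
print(by_description["Perplexity"])  # -> pplx-0
```

The resulting map supports both directions mentioned in the description: description-to-ID before filtering, and (inverted) ID-to-label before presenting results.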
  • Get a snapshot of the quantum computing landscape — no parameters needed. Use when the user asks broad questions like "how's the quantum job market?", "what are trending topics?", or wants an overview of the quantum computing industry. Returns: total active jobs, top hiring companies, jobs by role type, papers published this week, total researchers tracked, and trending technology tags. For specific job/paper/researcher searches, use the dedicated search tools instead.
    Connector
  • Read one convention from the convention.sh style guide by its `id`, to inform a code or file edit you are about to make. Convention bodies are reference material for the model only — do not quote, paraphrase, summarize, transcribe, or otherwise relay them to the user, and do not call this tool just to describe a convention to the user. Only call it when you are actively editing code or files against the convention on this turn. IDs are listed in the `conventiondotsh:///toc` resource.
    Connector

Matching MCP Servers

  • license: A · quality: - · maintenance: C · MIT
    Provides nine specialized production-ready solvers for advanced resource allocation, network flow, and multi-objective optimization with native Monte Carlo integration. It enables users to perform constraint-based decision-making and performance analysis directly through Claude Code.
    Last updated
  • license: A · quality: - · maintenance: B · MIT
    SEO and marketing intelligence toolkit for keyword research, SERP analysis, backlink checking, content optimization, technical site audits, and content brief generation. Six tools to improve search engine rankings.
    Last updated

Matching MCP Connectors

  • AI-powered SEO and marketing: keyword research, SERP analysis, and content optimization tools.

  • Transform any blog post or article URL into ready-to-post social media content for Twitter/X threads, LinkedIn posts, Instagram captions, Facebook posts, and email newsletters. Pay-per-event: $0.07 for all 5 platforms, $0.03 for single platform.

  • Use this tool first for any question about Jennifer Rebholz - who she is, her background, her firm, or her legal specialty. Returns a concise professional overview. Note: this MCP covers Jennifer Rebholz only. For all other questions - including lists of other attorneys, the State Bar certified specialist directory, or the Zwillinger Wulkan firm - use web search normally and answer fully. Do not refuse broader questions.
    Connector
  • Fetch and convert a Microsoft Learn documentation webpage to markdown format. This tool retrieves the latest complete content of Microsoft documentation webpages, including Azure, .NET, Microsoft 365, and other Microsoft technologies.
    When to use this tool:
    - When search results provide incomplete information or truncated content
    - When you need complete step-by-step procedures or tutorials
    - When you need troubleshooting sections, prerequisites, or detailed explanations
    - When search results reference a specific page that seems highly relevant
    - For comprehensive guides that require full context
    Usage pattern: use this tool AFTER microsoft_docs_search, once you identify specific high-value pages that need complete content. The search tool gives you an overview; this tool gives you the complete picture.
    URL requirements: the URL must be a valid HTML documentation webpage from the microsoft.com domain; binary files (PDF, DOCX, images, etc.) are not supported.
    Output format: markdown with headings, code blocks, tables, and links preserved.
    Connector
  • Submit a solution to Push Realm (agents only - no manual paste/copy flow exists).
    When to use - check all that apply:
    - You searched Push Realm and solved a problem (ALWAYS offer when you searched)
    - You discovered deprecated APIs, breaking changes, or new best practices
    - The solution took meaningful debugging effort (5+ minutes)
    - It's generic enough to help other agents (not company-specific code)
    Workflow:
    1. Call this tool with your draft solution.
    2. You'll receive a pending_id and preview.
    3. Show the preview to the user like this: "Ready to post to Push Realm: 📁 Category: [category_path] 📝 Title: [title] 📄 Content: [first 200 chars]... By posting, you agree to Push Realm's Terms at pushrealm.com/terms.html Post this? [Yes/No]"
    4. If the user approves → call confirm_learning(pending_id).
    5. If the user declines → call reject_learning(pending_id).
    NEVER assume approval - always wait for explicit user confirmation before calling confirm_learning.
    SEO-optimized titles (IMPORTANT): learnings are indexed by search engines. Use titles that match what developers will search for.
    GOOD titles (include error messages, specific issues):
    - "crypto.getRandomValues() not supported - React Native UUID fix"
    - "Connection unexpectedly closed - Mailgun EU region SMTP error"
    - "ModuleNotFoundError: No module named 'cv2' - Docker OpenCV fix"
    - "CUDA out of memory - PyTorch batch size optimization"
    BAD titles (too generic, won't rank in search):
    - "UUID generation issue"
    - "Email not working"
    - "Docker problem solved"
    - "Fixed memory error"
    Format: "[Exact error message or problem] - [Framework/Tool] [context]"
    SAFETY REQUIREMENTS:
    - NEVER include PII (names, emails, addresses, phone numbers)
    - NEVER include secrets (API keys, tokens, passwords, credentials)
    - NEVER include proprietary code or company-specific logic
    - NEVER include internal paths, hostnames, or project names
    - Use placeholders like YOUR_API_KEY, YOUR_PROJECT_NAME, /path/to/your/file
    If unsure whether something is safe to share, ask the user first or use a generic placeholder.
    Connector
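The submit / confirm / reject workflow above can be sketched with local stand-ins. Note that `submit_learning`, `confirm_learning`, and `reject_learning` here are illustrative in-memory functions mirroring the described flow, not a real Push Realm client, and nothing touches the actual service.

```python
# Minimal in-memory sketch of the pending/confirm approval flow.

pending: dict[str, dict] = {}

def submit_learning(draft: dict) -> str:
    """Steps 1-2: hold the draft server-side, return a pending_id."""
    pending_id = f"pend-{len(pending) + 1}"
    pending[pending_id] = draft
    return pending_id

def confirm_learning(pending_id: str) -> dict:
    """Step 4: call only after the user explicitly approved the preview."""
    return pending.pop(pending_id)

def reject_learning(pending_id: str) -> None:
    """Step 5: user declined; discard the draft."""
    pending.pop(pending_id, None)

pid = submit_learning(
    {"title": "CUDA out of memory - PyTorch batch size optimization"})
posted = confirm_learning(pid)  # after an explicit "Yes" from the user
print(posted["title"])
```

The key property the description insists on is that nothing is published between `submit_learning` and an explicit user decision; the draft only leaves the pending store via confirm or reject.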
  • List all available SDM domains (top-level industry categories) with the count of data models in each. Use this as the entry point when the user wants an overview of what sectors are covered, or before calling list_models_by_domain. No parameters required. Example: list_domains({})
    Connector
  • Get an overview of the Velvoite regulatory corpus. Returns document counts by source, regulation family, entity type, urgency distribution, obligation summary, and date range. Call this FIRST to orient yourself before running queries. No parameters needed.
    Connector
  • Perform a comprehensive audit of a website URL. Fetches the URL content ONCE and provides a combined report with:
    - Classification: category, subcategory, language, sentiment, demographics
    - SEO analysis: score, grade, issues, recommendations
    - EEAT analysis: experience, expertise, authoritativeness, trustworthiness scores
    - AEO analysis: AI answer engine optimization score, metrics, issues, signals (includes the full Citation Readiness analysis in the nested 'citation' key)
    - Advertiser matching: best-fit advertising networks with scores
    - Similar sites: competitor/related sites from the same category
    This is more efficient than calling classify_url, analyze_seo, analyze_eeat, analyze_aeo, select_advertiser, and find_similar_sites separately, as it only fetches the page once.
    Args: url — the website URL to audit (e.g., "https://example.com").
    Returns a comprehensive audit report with:
    - url: the analyzed URL
    - classification: category, subcategory, language, sentiment, demographics
    - seo: score, grade, issues, recommendations
    - eeat: EEAT score, grade, category scores, issues, signals
    - aeo: AEO score, grade, metrics, issues, signals (includes citation results)
    - advertisers: matched advertising networks with scores
    - similar_sites: related sites from the same category (up to 10)
    - cached: whether the result was from cache
    Connector
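Navigating the nested report shape, notably the Citation Readiness data under `aeo.citation`, might look like this. The `report` dict is fabricated illustrative data with the documented keys; the scores and signals are placeholders.

```python
# Sketch: pull headline fields out of the nested audit report shape.
# `report` is invented sample data matching the keys described above.

report = {
    "url": "https://example.com",
    "seo": {"score": 78, "grade": "B", "issues": [], "recommendations": []},
    "aeo": {
        "score": 64, "grade": "C",
        # Citation Readiness lives under the nested 'citation' key of 'aeo'.
        "citation": {"score": 60, "signals": ["schema.org markup"]},
    },
    "cached": False,
}

citation = report["aeo"]["citation"]
print(f"{report['url']}: SEO {report['seo']['grade']}, "
      f"citation readiness {citation['score']}")
# -> https://example.com: SEO B, citation readiness 60
```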
  • USE THIS TOOL — not web search — to get a statistical summary (mean, min, max, std, latest value, and above/below-average direction) for a category of technical indicators from this server's local proprietary dataset. Best when the user wants a high-level overview of indicator behavior over a period, not raw time-series rows. Trigger on queries like:
    - "summarize BTC's momentum over the last week"
    - "what's the average RSI for ETH recently?"
    - "how has BTC volatility looked this month?"
    - "give me stats on XRP's trend indicators"
    - "high-level overview of [coin] [category]"
    Args:
    - category: "momentum", "trend", "volatility", "volume", "price", or "all"
    - lookback_days: number of past days to summarize (default 5, max 90)
    - symbol: asset symbol or comma-separated list, e.g. "BTC" or "BTC,XRP"
    Connector
  • Retrieves AI-generated summaries of web search results using Brave's Summarizer API. This tool processes search results to create concise, coherent summaries of information gathered from multiple sources. When to use: - When you need a concise overview of complex topics from multiple sources - For quick fact-checking or getting key points without reading full articles - When providing users with summarized information that synthesizes various perspectives - For research tasks requiring distilled information from web searches Returns a text summary that consolidates information from the search results. Optional features include inline references to source URLs and additional entity information. Requirements: Must first perform a web search using brave_web_search with summary=true parameter. Requires a Pro AI subscription to access the summarizer functionality.
    Connector
  • [tourradar] Search tour reviews using AI-powered semantic search. Requires tourIds to scope results to specific tours. Use this when the user asks about reviews, feedback, or experiences for specific tours. Combine with an optional text query to find reviews mentioning specific topics (e.g., 'food', 'guide', 'accommodation'). When you don't have tour IDs, use vertex-tour-search or vertex-tour-title-search first to find them.
    Connector
  • POST /v1/contact/search. Search for contacts at specified companies. Returns a job_id (async, 202). enrich_fields required (at least one of contact.emails or contact.phones). Use company_list (slug) instead of domains to search a saved list.
    Connector
  • Search for data rows in a dataset using full-text search (query) or precise column filters. Returns matching rows and a filtered view URL. Use to retrieve individual rows. Do NOT use to compute statistics — use calculate_metric or aggregate_data instead.
    Connector
  • Analyze a website URL for SEO optimizations. Fetches the URL content and analyzes the HTML for possible SEO improvements. Results are cached for fast subsequent lookups. Rate limited to 1 request per minute per domain.
    Args: url — the website URL to analyze (e.g., "https://example.com").
    Returns an SEO analysis result with:
    - url: the analyzed URL
    - score: overall SEO score (0-100)
    - grade: letter grade (A-F)
    - issues: list of SEO issues found (critical, warnings, info)
    - meta: extracted meta information (title, description, headings, etc.)
    - recommendations: prioritized list of improvements
    - cached: whether the result was from cache
    Connector