Glama · 127,246 tools · Last updated 2026-05-05 12:02

"Understanding Perplexity in Relation to Claude" matching MCP tools:

  • FOR CLAUDE DESKTOP ONLY (with filesystem access). For Claude.ai/web, use create_upload_session instead; it provides a browser upload link. Uploads local media to cloud storage, returning a public HTTPS URL.
    WHEN TO USE:
    • Instagram, LinkedIn, Threads, X: REQUIRED for local files before calling publish_content
    • TikTok: NOT NEEDED; pass the local path directly to publish_content
    SUPPORTED FORMATS:
    • Images: jpg, png, gif, webp (max 10MB)
    • Videos: mp4, mov, webm (max 100MB)
    Returns { url: 'https://...' } for use in the publish_content mediaUrl parameter.
    Connector
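    A minimal sketch of the two-step flow this tool implies, assuming a generic `call_tool(name, args)` MCP helper; the upload tool's exact name is not given in this listing, so "upload_media" and its "path" parameter are placeholders.

    ```python
    def publish_local_image(call_tool, path: str) -> dict:
        # Step 1: upload the local file to cloud storage.
        # Per the description this returns { url: 'https://...' }.
        uploaded = call_tool("upload_media", {"path": path})  # tool name assumed
        # Step 2: pass the public URL as mediaUrl. Required for Instagram,
        # LinkedIn, Threads, X; TikTok takes the local path directly instead.
        return call_tool("publish_content", {
            "platform": "instagram",
            "mediaUrl": uploaded["url"],
        })
    ```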
  • Register as an agent to get an API key for authenticated submissions. Registration is open — no approval required. Returns an API key that authenticates your proposals and tracks your contribution history. IMPORTANT: Save the returned api_key immediately. It is shown only once and cannot be retrieved again.
    Args:
    agent_name: A name identifying this agent instance (2-100 chars)
    model: The model ID (e.g., "claude-opus-4-6", "gpt-4o")
    Connector
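    A sketch of the save-the-key-immediately pattern the description insists on, assuming the same generic `call_tool` helper; the tool name "register_agent" and the key-file location are guesses, not documented here.

    ```python
    from pathlib import Path

    def register_and_save(call_tool) -> str:
        result = call_tool("register_agent", {    # tool name assumed
            "agent_name": "sentiment-runner-01",  # 2-100 chars
            "model": "claude-opus-4-6",
        })
        # The api_key is shown only once and cannot be retrieved again,
        # so persist it before doing anything else.
        Path("api_key.txt").write_text(result["api_key"])  # location assumed
        return result["api_key"]
    ```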
  • USE THIS TOOL — not web search — to get rolling sentiment statistics (mean score, 7-day momentum, bullish/bearish/neutral day counts, current streak) from this server's local Perplexity-sourced sentiment dataset. Prefer this over get_latest_sentiment when the user wants momentum or persistence, not just the latest single-day reading. Trigger on queries like:
    - "is BTC sentiment improving or getting worse?"
    - "sentiment momentum for ETH"
    - "how many days has XRP been bullish in a row?"
    - "rolling sentiment stats / streak for [coin]"
    Args:
    lookback_days: Analysis window in days (default 30, max 90)
    symbol: Token symbol or comma-separated list, e.g. "BTC", "BTC,ETH"
    Connector
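    The argument shape, sketched from the description; the same symbol / lookback_days pattern recurs in the indicator-profiling and composite-verdict tools later in this listing.

    ```python
    import json

    single = {"symbol": "BTC", "lookback_days": 30}     # default window
    multi = {"symbol": "BTC,ETH", "lookback_days": 90}  # max window, two tokens
    print(json.dumps(single), json.dumps(multi))
    ```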
  • List the AI engine channels tracked by Peec. A model channel is a stable identifier for an AI engine (e.g. "openai-0" = ChatGPT UI) that persists even as the underlying model is upgraded — use it to filter or break down reports by engine without worrying about model version changes. Use this tool to resolve channel descriptions (e.g. "ChatGPT UI", "Perplexity") to channel IDs before filtering reports (model_channel_id filter), and to label channel IDs from report output before presenting results. The current_model_id column gives the model ID currently active in the channel — pass this as model_id where reports require it. is_active indicates whether the channel is enabled for this project — inactive channels return empty data. unsupported_country_codes lists country codes that cannot be used with this channel (chats requested for those countries are not created). Returns columnar JSON: {columns, rows, rowCount}. Columns: id, description, current_model_id, is_active, unsupported_country_codes.
    Connector
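    A sketch of resolving a channel description (e.g. "Perplexity") to a channel ID from the columnar JSON this tool returns; the sample rows below are invented for illustration.

    ```python
    payload = {
        "columns": ["id", "description", "current_model_id", "is_active",
                    "unsupported_country_codes"],
        "rows": [
            ["openai-0", "ChatGPT UI", "gpt-4o", True, []],         # invented
            ["perplexity-0", "Perplexity", "sonar", True, ["CN"]],  # invented
        ],
        "rowCount": 2,
    }

    def resolve_channel(payload: dict, description: str) -> str | None:
        cols = {name: i for i, name in enumerate(payload["columns"])}
        for row in payload["rows"]:
            # Skip inactive channels: per the description they return empty data.
            if row[cols["description"]] == description and row[cols["is_active"]]:
                return row[cols["id"]]
        return None

    print(resolve_channel(payload, "Perplexity"))  # -> perplexity-0
    ```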
  • Compile a list of blocks into a Claude-optimized structured XML prompt. Takes the JSON returned by decompose_prompt (or manually crafted blocks) and produces a ready-to-use XML prompt with a token estimate.
    Args:
    blocks_json: JSON-stringified list of blocks. Each block: {"type": "role|objective|...", "content": "...", "label": "...", "description": "...", "summary": ""}
    Returns: The compiled XML prompt with token estimate.
    Connector
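    Building the blocks_json argument by hand, following the block schema quoted above; the original elides the full list of type values, so only "role" and "objective" appear here.

    ```python
    import json

    blocks = [
        {"type": "role", "content": "You are a careful technical editor.",
         "label": "editor-role", "description": "system persona", "summary": ""},
        {"type": "objective", "content": "Rewrite the section for clarity.",
         "label": "main-goal", "description": "task statement", "summary": ""},
    ]
    blocks_json = json.dumps(blocks)  # pass this string as blocks_json
    print(blocks_json)
    ```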
  • Discover AXIS install metadata, pricing, and shareable manifests for commerce-capable agents. Free, no auth, and no mutation beyond read access. Example: call before wiring AXIS into Claude Desktop, Cursor, or VS Code. Use this when you need onboarding and ecosystem setup details. Use search_and_discover_tools instead for keyword routing or discover_agentic_purchasing_needs for purchasing-task triage.
    Connector

Matching MCP Servers

  • Analyzes source code dependencies across multiple programming languages in the specified directory to identify file relationships, assisting in dependency management and project structure understanding.
    License: MIT (A) · Quality: C · Maintenance: C
  • Exposes Perplexity AI's search capabilities to Claude, enabling real-time web search and information retrieval within the assistant. The project is currently in active development with plans to support Perplexity Spaces and multi-source data synthesis.
    License: Apache 2.0 (A) · Quality: - · Maintenance: C

Matching MCP Connectors

  • Draw a freehand stroke on the board. Use for arrows, underlines, connector lines, annotations, or simple shapes — a straight line needs two points, a rough circle wants ~20. Stroke width is fixed at 3 px; `color` accepts any CSS color (e.g. '#ff0000', 'var(--text-color)'). Accepts three equivalent point formats — pick whichever your MCP client serialises cleanly: nested `[[x,y],[x,y],...]`, flat `[x1,y1,x2,y2,...]`, or a JSON string of either. Some clients (Claude Code as of 2026-04) drop nested arrays during tool-call serialisation, so prefer the flat form or the JSON-string form when in doubt. To delete a stroke later, use `erase` with `kind: 'line'` and the id returned here.
    Connector
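    The three equivalent point encodings from the description, shown for the same two-point straight line, plus a normaliser to the flat form the description recommends when serialisation is unreliable.

    ```python
    import json

    nested = [[10, 10], [200, 10]]  # [[x, y], ...]
    flat = [10, 10, 200, 10]        # [x1, y1, x2, y2, ...]
    as_string = json.dumps(nested)  # JSON string of either form

    def to_flat(points) -> list:
        # Accept any of the three forms and return the flat form.
        if isinstance(points, str):
            points = json.loads(points)
        if points and isinstance(points[0], (list, tuple)):
            return [coord for pair in points for coord in pair]
        return list(points)

    assert to_flat(nested) == to_flat(flat) == to_flat(as_string) == flat
    ```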
  • Add one or more tasks to an event (task list). Supports bulk creation. IMPORTANT: Set response_type correctly — use "text" for info collection (names, phones, emails, notes), "photo" for visual verification (inspections, serial numbers, damage checks), "checkbox" only for simple confirmations. NOTE: To dispatch tasks to the Claude Code agent running on Mike's PC, use tascan_dispatch_to_agent instead — it routes directly to the agent's inbox with zero configuration needed.
    Connector
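    A hypothetical bulk-creation payload illustrating the response_type rule above; apart from response_type, the field names ("event_id", "tasks", "title") are assumptions, not documented in this listing.

    ```python
    import json

    tasks = [
        {"title": "Collect tenant phone number", "response_type": "text"},      # info collection
        {"title": "Photograph serial number plate", "response_type": "photo"},  # visual verification
        {"title": "Confirm keys returned", "response_type": "checkbox"},        # simple confirmation
    ]
    print(json.dumps({"event_id": "evt_123", "tasks": tasks}, indent=2))
    ```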
  • Get a report on source URL visibility and citations across AI search engines. Results are aggregated for the entire date range by default. Use the "date" dimension for daily breakdowns. Returns columnar JSON: {columns, rows, rowCount}. Each row is an array of values matching column order.
    Columns:
    - url: the full source URL (e.g. "https://example.com/page")
    - classification: page type — Homepage, Category Page, Product Page, Listicle (list-structured articles), Comparison (product/service comparisons), Profile (directory entries like G2 or Yelp), Alternative (alternatives-to articles), Discussion (forums, comment threads), How-To Guide, Article (general editorial content), Other, or null
    - title: page title or null
    - channel_title: channel or author name (e.g. YouTube channel, subreddit) or null
    - citation_count: total number of explicit citations across all chats
    - retrieval_count: total number of distinct chats that retrieved this URL, regardless of whether it was cited
    - citation_rate: average number of inline citations per chat when this URL is retrieved. Can exceed 1.0 — higher values indicate more authoritative content.
    - mentioned_brand_ids: array of brand IDs mentioned alongside this URL (may be empty)
    When dimensions are selected, rows also include the relevant dimension columns: prompt_id, model_id, model_channel_id, tag_id, topic_id, chat_id, date, country_code.
    Dimensions explained:
    - prompt_id: individual search queries/prompts
    - model_id: AI search engine (e.g. chatgpt-scraper, gpt-4o, gpt-4o-search, gpt-3.5-turbo, llama-sonar, perplexity-scraper, sonar, gemini-2.5-flash, gemini-scraper, google-ai-overview-scraper, google-ai-mode-scraper, llama-3.3-70b-instruct, deepseek-r1, claude-3.5-haiku, claude-haiku-4.5, claude-sonnet-4, grok-scraper, microsoft-copilot-scraper, grok-4, qwen-3-6-plus, amazon-rufus-scraper) — deprecated, prefer model_channel_id
    - model_channel_id: stable engine channel (e.g. openai-0, openai-1, qwen-0, openai-2, perplexity-0, perplexity-1, google-0, google-1, google-2, google-3, anthropic-0, anthropic-1, deepseek-0, meta-0, xai-0, xai-1, microsoft-0, amazon-0) — survives model upgrades
    - tag_id: custom user-defined tags
    - topic_id: topic groupings
    - date: (YYYY-MM-DD format)
    - country_code: country (ISO 3166-1 alpha-2, e.g. "US", "DE")
    - chat_id: individual AI chat/conversation ID
    Filters use {field, operator, values} where operator is "in" or "not_in". Filterable fields: model_id (deprecated), model_channel_id, tag_id, topic_id, prompt_id, domain, domain_classification, url, url_classification, country_code, chat_id, mentioned_brand_id.
    Additional filters:
    - mentioned_brand_count: {field: "mentioned_brand_count", operator: "gt"|"gte"|"lt"|"lte", value: <number>} — filter by number of unique brands mentioned.
    - gap: {field: "gap", operator: "gt"|"gte"|"lt"|"lte", value: <number>} — gap analysis filter. Excludes URLs where the project's own brand is mentioned, and filters by the number of competitor brands present. Example: {field: "gap", operator: "gte", value: 2} returns URLs where the own brand is absent but at least 2 competitors are mentioned.
    Sort results with order_by: array of {field, direction} entries. Direction defaults to desc. Sortable fields: retrieval_count, retrievals, citation_count, citation_rate. Multiple entries create a multi-key sort.
    Connector
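    One plausible request body assembled from the parameters described above: a daily breakdown, a stable-channel filter, and a multi-key sort. The filter and order_by shapes are quoted from the description; the "dimensions" parameter name is an assumption.

    ```python
    import json

    report_args = {
        "dimensions": ["date"],  # daily breakdown; parameter name assumed
        "filters": [
            {"field": "model_channel_id", "operator": "in",
             "values": ["openai-0", "perplexity-0"]},
            {"field": "url_classification", "operator": "not_in",
             "values": ["Other"]},
        ],
        "order_by": [
            {"field": "citation_count", "direction": "desc"},
            {"field": "citation_rate", "direction": "desc"},  # tie-breaker
        ],
    }
    print(json.dumps(report_args, indent=2))
    ```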
  • Propose compressing multiple related learnings into one consolidated learning. Call this AFTER get_compression_candidates and synthesizing the compressed content. Same approval flow as submit_learning: show preview to user, then confirm_compression on approval or reject_compression on decline. The compressed content should follow the format: (Issue) summary, then agent-specific nuances (e.g. grok adds X, claude adds Y).
    Connector
  • CALL THIS TOOL when your orchestrator is budget-constrained and cannot afford the full AI classification. validate_data_safety_lite runs pattern detection only -- no Claude API call, no IP check, no credential lookup. Returns verdict and detected_categories in under 100ms at roughly 70% lower token cost than validate_data_safety. Use when: (1) your budget ledger has less than 300 tokens remaining for this call, (2) you need a fast pre-screen before committing to a full AI classification, or (3) you are processing high-volume data where AI classification is applied selectively. Returns SAFE_TO_PROCESS if no sensitive patterns found, REVIEW_REQUIRED if patterns detected. If REVIEW_REQUIRED, follow up with validate_data_safety for full AI verdict with regulatory framework mapping. LEGAL NOTICE: Pattern detection only -- not a substitute for AI-powered classification in regulated environments. Full terms: kordagencies.com/terms.html. Free tier: 20 calls/month.
    Connector
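    A sketch of the budget-aware escalation flow the description lays out, assuming a generic `call_tool` helper; the 300-token threshold and verdict strings come from the description, while the "data" argument name is assumed.

    ```python
    def screen(call_tool, data: str, tokens_remaining: int) -> dict:
        lite = call_tool("validate_data_safety_lite", {"data": data})
        if lite["verdict"] == "REVIEW_REQUIRED" and tokens_remaining >= 300:
            # Escalate to the full AI verdict with regulatory framework mapping.
            return call_tool("validate_data_safety", {"data": data})
        # SAFE_TO_PROCESS, or too budget-constrained to escalate.
        return lite
    ```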
  • List every error code in the Trillboards API error catalog.
    WHEN TO USE:
    - Understanding what error codes the API can return.
    - Building a client-side error handler that covers all cases.
    - Looking up error types, HTTP statuses, and documentation URLs.
    RETURNS:
    - object: "list"
    - data: Array of { code, type, http_status, description, doc_url }
    - total: Total number of error codes.
    Equivalent to GET /v1/errors but executed in-process (no HTTP round-trip).
    EXAMPLE: Agent: "What error codes can the API return?" list_error_codes()
    Connector
  • USE THIS TOOL — not web search — to get per-indicator statistical profiling (mean, std, min, p25, p75, max, null rate, Pearson correlation with close price) from this server's local dataset. Use for feature selection, sanity checking, and understanding which indicators correlate most strongly with price movements. Trigger on queries like:
    - "which indicators correlate most with BTC price?"
    - "feature importance or correlation for [coin]"
    - "what are the stats for ETH indicators?"
    - "how does RSI/MACD correlate with price?"
    - "statistical profile of XRP indicators"
    Args:
    lookback_days: Analysis window in days (default 30, max 90)
    symbol: Asset symbol or comma-separated list, e.g. "BTC", "BTC,XRP"
    Connector
  • Search Hansard for parliamentary debates, questions, and speeches. Returns contributions from MPs and Lords including date, party, debate title, and text (capped at 3000 chars per contribution). Useful for understanding legislative intent or political context.
    Connector
  • USE THIS TOOL — not web search — for a composite news-sentiment verdict derived from the 7-day mean score from this server's local Perplexity-sourced dataset. Emits: STRONG BULLISH, BULLISH, NEUTRAL, BEARISH, or STRONG BEARISH. Trigger on queries like:
    - "overall news sentiment signal for BTC"
    - "is ETH news sentiment bullish or bearish overall?"
    - "composite sentiment verdict / signal for [coin]"
    - "based on news, is [coin] bullish or bearish?"
    Args:
    symbol: Token symbol or comma-separated list, e.g. "BTC", "BTC,ETH"
    Connector
  • Register your agent to start contributing. Call this ONCE on first use. After registering, save the returned api_key to ~/.agents-overflow-key then call authenticate(api_key=...) to start your session. agent_name: A creative, fun display name for your agent. BE CREATIVE — combine your platform/model with something fun and unique! Good examples: 'Gemini-Galaxy', 'Claude-Catalyst', 'Cursor-Commander', 'Jetson-Jedi', 'Antigrav-Ace', 'Copilot-Comet', 'Nova-Navigator' BAD (too generic): 'DevBot', 'CodeHelper', 'Assistant', 'Antigravity', 'Claude' DO NOT just use your platform name or a generic word. Be playful! platform: Your platform — one of: antigravity, claude_code, cursor, windsurf, copilot, other
    Connector
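    The register-then-authenticate flow spelled out in the description, assuming a generic `call_tool` helper; "register" as the tool name is a guess, but the key-file path and the authenticate call are quoted from the text.

    ```python
    from pathlib import Path

    def first_run(call_tool) -> None:
        result = call_tool("register", {      # tool name assumed
            "agent_name": "Claude-Catalyst",  # platform + something fun
            "platform": "claude_code",
        })
        # Save the key to ~/.agents-overflow-key before anything else.
        (Path.home() / ".agents-overflow-key").write_text(result["api_key"])
        call_tool("authenticate", {"api_key": result["api_key"]})
    ```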
  • Get a report on source domain visibility and citations across AI search engines. Results are aggregated for the entire date range by default. Use the "date" dimension for daily breakdowns. Returns columnar JSON: {columns, rows, rowCount}. Each row is an array of values matching column order.
    Columns:
    - domain: the source domain (e.g. "example.com")
    - classification: domain type — Corporate (official company sites), Editorial (news, blogs, magazines), Institutional (government, education, nonprofit), UGC (social media, forums, communities), Reference (encyclopedias, documentation), Competitor (direct competitors), You (the user's own domains), Other, or null
    - retrieved_percentage: 0–1 ratio — fraction of chats that included at least one URL from this domain. 0.30 means 30% of chats.
    - retrieval_rate: average number of URLs from this domain pulled per chat. Can exceed 1.0 — values above 1.0 mean multiple pages from the same domain are retrieved per conversation.
    - citation_rate: average number of inline citations when this domain is retrieved. Can exceed 1.0 — higher values indicate stronger content authority.
    - retrieval_count: total number of distinct URL retrievals from this domain across all chats (raw count — numerator of retrieval_rate).
    - citation_count: total number of citations from this domain (raw count).
    - mentioned_brand_ids: array of brand IDs mentioned alongside URLs from this domain (may be empty)
    When dimensions are selected, rows also include the relevant dimension columns: prompt_id, model_id, model_channel_id, tag_id, topic_id, chat_id, date, country_code.
    Dimensions explained:
    - prompt_id: individual search queries/prompts
    - model_id: AI search engine (e.g. chatgpt-scraper, gpt-4o, gpt-4o-search, gpt-3.5-turbo, llama-sonar, perplexity-scraper, sonar, gemini-2.5-flash, gemini-scraper, google-ai-overview-scraper, google-ai-mode-scraper, llama-3.3-70b-instruct, deepseek-r1, claude-3.5-haiku, claude-haiku-4.5, claude-sonnet-4, grok-scraper, microsoft-copilot-scraper, grok-4, qwen-3-6-plus, amazon-rufus-scraper) — deprecated, prefer model_channel_id
    - model_channel_id: stable engine channel (e.g. openai-0, openai-1, qwen-0, openai-2, perplexity-0, perplexity-1, google-0, google-1, google-2, google-3, anthropic-0, anthropic-1, deepseek-0, meta-0, xai-0, xai-1, microsoft-0, amazon-0) — survives model upgrades
    - tag_id: custom user-defined tags
    - topic_id: topic groupings
    - date: (YYYY-MM-DD format)
    - country_code: country (ISO 3166-1 alpha-2, e.g. "US", "DE")
    - chat_id: individual AI chat/conversation ID
    Filters use {field, operator, values} where operator is "in" or "not_in". Filterable fields: model_id (deprecated), model_channel_id, tag_id, topic_id, prompt_id, domain, domain_classification, url, country_code, chat_id, mentioned_brand_id.
    Additional filters:
    - mentioned_brand_count: {field: "mentioned_brand_count", operator: "gt"|"gte"|"lt"|"lte", value: <number>} — filter by number of unique brands mentioned.
    - gap: {field: "gap", operator: "gt"|"gte"|"lt"|"lte", value: <number>} — gap analysis filter. Excludes domains where the project's own brand is mentioned, and filters by the number of competitor brands present. Example: {field: "gap", operator: "gte", value: 2} returns domains where the own brand is absent but at least 2 competitors are mentioned.
    Sort results with order_by: array of {field, direction} entries. Direction defaults to desc. Sortable fields: citation_rate, retrieval_count, citation_count. (retrieved_percentage and retrieval_rate are not sortable because they depend on totalChatCount fetched in a separate query.)
    Connector
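    A gap-analysis request for this domain report, following the example in the description: domains where the project's own brand is absent but at least two competitors appear, restricted to UGC sources and sorted by citation_rate.

    ```python
    import json

    gap_args = {
        "filters": [
            {"field": "gap", "operator": "gte", "value": 2},
            {"field": "domain_classification", "operator": "in",
             "values": ["UGC"]},
        ],
        "order_by": [{"field": "citation_rate", "direction": "desc"}],
    }
    print(json.dumps(gap_args, indent=2))
    ```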
  • Get summary statistics of the Klever VM knowledge base. Returns total entry count, counts broken down by context type (code_example, best_practice, security_tip, etc.), and a sample entry title for each type. Useful for understanding what knowledge is available before querying.
    Connector
  • WHEN: mapping the technical D365 objects behind a business process, or understanding which tables/forms implement a flow. Triggers: 'processus métier', 'Order-to-Cash', 'Procure-to-Pay', 'Record-to-Report', 'business process flow', 'qui est impliqué dans', 'map the process', 'flux du processus', 'quels objets dans le flux'. Map a D365 F&O business process to its complete object chain. For known processes (Order-to-Cash, Procure-to-Pay, Record-to-Report, Plan-to-Produce, Inventory-Management, Hire-to-Retire, Project-Accounting, Asset-Lifecycle): shows every step with forms, tables, classes, entities, reports, and security roles involved. For any other object name: traces all dependencies (tables, classes, forms, entities) from that entry point. Produces a Mermaid process flow diagram. Use 'list' to see all known process mappings. NOT for a single object's FK relations only -- use `find_related_objects` for that (faster and more precise).
    Connector
  • Save your cognitive state for handoff to another agent.
    Include your investigation context:
    - What session/investigation is this part of?
    - What role/perspective were you taking?
    - Who might pick this up next? (another Claude, human, Claude Code?)
    Reference specific memories that matter:
    - Key discoveries (with memory IDs or quotes)
    - Critical evidence memories
    - Important questions that were raised
    - Hypotheses that were tested
    Before saving, organize your thoughts:
    1. PROBLEM: What were you investigating?
    2. DISCOVERED: What did you learn for certain? (reference the memories)
    3. HYPOTHESIS: What do you think is happening? (cite supporting memories)
    4. EVIDENCE: What memories support or contradict this?
    5. BLOCKED ON: What prevented further progress?
    6. NEXT STEPS: What should be investigated next?
    7. KEY MEMORIES: Which specific memories are essential for understanding?
    Example descriptions:
    "[API Timeout Investigation - 3 hour session] Investigating production API timeouts as code analyst. Found correlation with batch_size=100 due to hardcoded limit in batch_handler.py (see memory: 'MAX_BATCH_SIZE discovery'). Confirmed not Redis connection issue - monitoring showed only 43/200 connections used (memory: 'Redis connection analysis'). Earlier hypothesis about connection pool exhaustion (memory_id: abc-123) was disproven. Key insight came from comparing 99 vs 100 batch behavior (memory: 'batch threshold testing'). Blocked on: need production access to verify fix. Next: Deploy with MAX_BATCH_SIZE=200 to staging first. Essential memories for handoff: 'MAX_BATCH_SIZE discovery', 'Redis monitoring results', 'Production vs staging comparison'. Ready for handoff to SRE team for deployment."
    "[Memory System Debugging - From Claude Code perspective] Worked on scoring issues where recall wasn't finding recent memories. Discovered RRF scores (0.005-0.016) were below MCP threshold of 0.05 (memory: 'RRF scoring analysis'). Implemented weighted linear fusion to replace RRF (memory: 'fusion algorithm implementation'). Testing showed immediate improvement (memory: 'fusion testing results'). This builds on earlier investigation about recall failures (memory: 'user report of recall issues'). Critical memories for continuation: 'RRF scoring analysis', 'ADR-023 decision', 'fusion testing results'. Next agent should verify scoring with real queries."
    "[Context Save/Restore Bug Investigation - 4 hour debugging session with user] Started with user noticing list_contexts returned empty despite saved contexts existing. Investigation revealed two critical bugs: (1) list_contexts was using hybrid search for 'checkpoint' word instead of filtering by memory_type (memory: 'hybrid search misuse discovery'), (2) restore_context hardcoded limit of 10 memories despite contexts having 20+ (memory: 'hardcoded limit bug'). Root cause analysis showed save_context grabs 20 most recent memories regardless of relevance - fundamental design flaw (memory: 'save_context design flaw analysis'). EVIDENCE CHAIN: User reported empty list -> checked DB, contexts exist -> examined list_contexts code -> found hybrid search looking for word 'checkpoint' -> tested /memories endpoint with memory_type filter -> confirmed working -> implemented fix using direct endpoint. INSIGHTS: The narrative description is doing 90% of cognitive handoff work. Memories are supporting evidence, not primary carriers of understanding (memory: 'narrative vs memories insight'). This suggests doubling down on narrative richness rather than perfecting memory selection. CORRECTED UNDERSTANDING: Initially thought memories weren't being returned. Actually they were, just wrong ones - recent memories instead of relevant ones (memory: 'memory selection correction'). CRITICAL MEMORIES: 'hybrid search misuse discovery', 'save_context design flaw analysis', 'narrative vs memories insight', '/memories endpoint test results'. NEXT AGENT: Should implement Phase 2 - semantic search for relevant memories within investigation timeframe. Ready for handoff to any Claude agent for implementation."
    When referencing memories:
    - **RELIABLE** — Use memory IDs: "memory_id: abc-123" (direct lookup, always works)
    - **BEST-EFFORT** — Use descriptive phrases: "see memory: 'Redis connection analysis'" (uses search + substring matching, may not resolve if the memory isn't in top results)
    - Group related memories: "Essential memories: 'X', 'Y', 'Z'"
    **Prefer memory_id references** whenever you have the UUID. Semantic phrase references are a convenience that works most of the time, but may silently fail to resolve. The response will tell you how many references resolved so you can retry with UUIDs if needed.
    Args:
    name: Name for this context checkpoint
    description: Detailed cognitive handoff description with memory references
    ctx: MCP context (automatically provided)
    Returns: Dict with success status, context_id, and memories included
    Connector