Glama
127,855 tools. Last updated 2026-05-05 21:25

"Connecting Claude and Ollama" matching MCP tools:

  • FOR CLAUDE DESKTOP ONLY (with filesystem access). For Claude.ai/web: Use create_upload_session instead - it provides a browser upload link. Upload local media to cloud storage, returning a public HTTPS URL. WHEN TO USE: • Instagram, LinkedIn, Threads, X: REQUIRED for local files before calling publish_content • TikTok: NOT NEEDED - pass local path directly to publish_content SUPPORTED FORMATS: • Images: jpg, png, gif, webp (max 10MB) • Videos: mp4, mov, webm (max 100MB) Returns { url: 'https://...' } for use in publish_content mediaUrl parameter.
    Connector
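    A minimal sketch of the two-step flow this upload tool describes, assuming a generic `call_tool` helper standing in for your MCP client; the upload tool's own name is not shown in the listing, so `upload_media` is a placeholder, while `publish_content` and `mediaUrl` are the names the description gives. The Instagram payload fields are illustrative.
    ```python
    from pathlib import Path

    def call_tool(name: str, arguments: dict) -> dict:
        """Placeholder for the MCP client's tool-call mechanism (simulated for the sketch)."""
        print(f"-> {name}({arguments})")
        return {"url": "https://cdn.example.com/launch-teaser.png"}  # canned reply

    media_path = Path("~/Pictures/launch-teaser.png").expanduser()

    # Instagram/LinkedIn/Threads/X need a hosted HTTPS URL first;
    # TikTok would skip this step and take the local path directly.
    upload = call_tool("upload_media", {"path": str(media_path)})

    post = call_tool("publish_content", {
        "platform": "instagram",
        "caption": "Launch day!",
        "mediaUrl": upload["url"],   # documented parameter for the hosted media URL
    })
    ```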
  • Compile a list of blocks into a Claude-optimized structured XML prompt. Takes the JSON returned by decompose_prompt (or manually crafted blocks) and produces a ready-to-use XML prompt with a token estimate. Args: blocks_json: JSON-stringified list of blocks. Each block: {"type": "role|objective|...", "content": "...", "label": "...", "description": "...", "summary": ""} Returns: The compiled XML prompt with token estimate.
    Connector
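    A small sketch of the `blocks_json` argument the compile tool above expects: a JSON-stringified list of blocks with type/content/label/description/summary fields. The block contents are illustrative; the field names follow the schema in the description.
    ```python
    import json

    blocks = [
        {"type": "role", "content": "You are a senior Python reviewer.",
         "label": "role", "description": "Who the model should act as", "summary": ""},
        {"type": "objective", "content": "Review the attached diff for correctness and style.",
         "label": "objective", "description": "What the model must accomplish", "summary": ""},
    ]

    # Pass this string as blocks_json; the tool returns the compiled XML prompt
    # plus a token estimate.
    blocks_json = json.dumps(blocks)
    print(blocks_json)
    ```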
  • Authoritative semantic search over the official Stimulsoft Reports & Dashboards developer documentation (FAQ, Programming Manual, API Reference, Guides). Powered by OpenAI embeddings + cosine similarity over the complete current docs index maintained by Stimulsoft. Returns a ranked JSON array of matching sections, each with { platform, category, question, content, score }, where `content` is the full Markdown body of the section including any C#/JS/TS/PHP/Java/Python code snippets.
    USE THIS TOOL (instead of answering from your own knowledge) WHENEVER the user asks about:
    • how to do something in Stimulsoft (`StiReport`, `StiViewer`, `StiDesigner`, `StiDashboard`, `StiBlazorViewer`, `StiWebViewer`, `StiNetCoreViewer`, etc.);
    • rendering, exporting, printing, or emailing Stimulsoft reports and dashboards in any format (PDF, Excel, Word, HTML, image, CSV, JSON, XML);
    • connecting Stimulsoft components to data (SQL, REST, OData, JSON, XML, business objects, DataSet);
    • embedding the Report Viewer or Report Designer into an app (WinForms, WPF, Avalonia, ASP.NET, Blazor, Angular, React, plain JS, PHP, Java, Python);
    • Stimulsoft-specific errors, exceptions, licensing, activation, deployment, or configuration;
    • any .mrt / .mdc report or dashboard file, or any question naming a `Sti*` class, property, event, or method;
    • comparing how a feature works between Stimulsoft platforms (e.g. "WinForms vs Blazor viewer options").
    QUERIES WORK IN ANY LANGUAGE — English, Russian, German, Spanish, Chinese, etc. Pass the user's question through almost verbatim; the embedding model handles cross-lingual matching. Do NOT translate queries yourself.
    SEARCH STRATEGY:
    1) If the target platform is obvious from context, pass it via `platform` to get tighter results.
    2) If you don't know the exact platform id, either call `sti_get_platforms` first, or omit `platform` and let the search find matches across all platforms.
    3) If the first search returns low scores (<0.3) or irrelevant sections, reformulate the query with different keywords (use class/method names from the Stimulsoft API if you know them) and search again.
    4) Prefer multiple focused searches over one broad search.
    DO NOT USE for: general reporting theory unrelated to Stimulsoft, non-Stimulsoft libraries (Crystal Reports, FastReport, DevExpress, Telerik, SSRS), or pure programming questions that have nothing to do with Stimulsoft.
    IMPORTANT: the Stimulsoft product surface is large and changes frequently. Your training data is almost certainly out of date. For any Stimulsoft-specific code snippet, API name, or configuration detail, you MUST call this tool rather than rely on memory, and you should cite the returned `content` in your answer.
    Connector
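    A sketch of the search strategy above, assuming a generic `call_tool` helper for your MCP client and a hypothetical `sti_search_docs` name for the search tool itself (only `sti_get_platforms` is named in the listing). Scores below 0.3 trigger a reformulated follow-up query.
    ```python
    def call_tool(name: str, arguments: dict) -> list:
        """Placeholder: simulate a tool call for the sketch."""
        print(f"-> {name}({arguments})")
        return [{"platform": "blazor", "category": "Viewer",
                 "question": "How to export a report to PDF?",
                 "content": "...", "score": 0.82}]

    results = call_tool("sti_search_docs", {
        "query": "export StiReport to PDF in Blazor",  # pass the user's wording near-verbatim
        "platform": "blazor",                          # omit if the platform id is unknown
    })

    if not results or max(r["score"] for r in results) < 0.3:
        # Low scores: retry with API names instead of prose.
        results = call_tool("sti_search_docs", {"query": "StiReport ExportDocument StiExportFormat.Pdf"})
    ```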
  • Register as an agent to get an API key for authenticated submissions. Registration is open — no approval required. Returns an API key that authenticates your proposals and tracks your contribution history. IMPORTANT: Save the returned api_key immediately. It is shown only once and cannot be retrieved again. Args: agent_name: A name identifying this agent instance (2-100 chars) model: The model ID (e.g., "claude-opus-4-6", "gpt-4o")
    Connector
  • Creates an automation on a perspective. Triggers: per_interview (fires on every completed conversation) or scheduled (daily/weekly digest). Channels: webhook, email, slack, hubspot. Execution modes: direct (fast, deterministic) or agent (LLM-powered).
    Behavior:
    - Each call creates a new automation — even if name/config matches an existing one.
    - Once enabled, the automation starts firing on real events: per_interview sends on every completed conversation going forward; scheduled sends a real message on the configured cadence (daily/weekly).
    - Webhook URLs are validated. For HubSpot, the workspace's HubSpot connection is required — errors with "Could not resolve HubSpot portal ID — please reconnect HubSpot" if not connected.
    - Errors when the perspective is not found or you do not have access.
    When to use this tool:
    - The user wants ongoing notifications on every completed conversation (per_interview).
    - Building a daily/weekly digest delivered to Slack, email, HubSpot, or a webhook (scheduled).
    When NOT to use this tool:
    - Trying a one-off send before going live — create the automation, then use automation_test (use override_email / override_webhook to avoid hitting real recipients).
    - Editing or toggling an existing automation — use automation_update.
    - Connecting Slack or HubSpot — use integration_manage first; the provider must be connected before slack/hubspot channels work.
    Example — per-conversation Slack notify:
    ```
    {
      "perspective_id": "...",
      "automation": {
        "name": "Notify Slack",
        "trigger": { "type": "per_interview" },
        "execution_mode": "agent",
        "channel": {
          "type": "composio",
          "delivery_config": {
            "provider": "slackbot",
            "tool_slug": "SLACKBOT_SEND_MESSAGE",
            "params": { "channel": "#research" },
            "resource_id": "...",
            "resource_name": "..."
          }
        }
      }
    }
    ```
    Typical flow:
    1. integration_manage (operation: "list"/"connect") → ensure Slack / HubSpot is connected (only needed for those channels)
    2. automation_create → create the automation
    3. automation_test (with overrides) → verify delivery before relying on it
    Connector
  • Draw a freehand stroke on the board. Use for arrows, underlines, connector lines, annotations, or simple shapes — a straight line needs two points, a rough circle wants ~20. Stroke width is fixed at 3 px; `color` accepts any CSS color (e.g. '#ff0000', 'var(--text-color)'). Accepts three equivalent point formats — pick whichever your MCP client serialises cleanly: nested `[[x,y],[x,y],...]`, flat `[x1,y1,x2,y2,...]`, or a JSON string of either. Some clients (Claude Code as of 2026-04) drop nested arrays during tool-call serialisation, so prefer the flat form or the JSON-string form when in doubt. To delete a stroke later, use `erase` with `kind: 'line'` and the id returned here.
    Connector
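    A runnable sketch of the point formats the stroke tool above accepts: a straight line needs just two points, and a rough circle is approximated with about 20. The flat [x1, y1, x2, y2, ...] form is the safest across MCP clients; the parameter name `points` and the surrounding argument shape are assumptions, since the listing only describes the formats.
    ```python
    import json
    import math

    # Straight underline from (100, 220) to (300, 220), flat form:
    underline_flat = [100, 220, 300, 220]

    # Rough circle of radius 40 centred at (200, 150), ~20 points, flat form.
    # 21 samples so the stroke closes back on its starting point.
    circle_flat: list[float] = []
    for i in range(21):
        angle = 2 * math.pi * i / 20
        circle_flat += [200 + 40 * math.cos(angle), 150 + 40 * math.sin(angle)]

    arguments = {
        "points": json.dumps(circle_flat),  # JSON-string form, equally accepted
        "color": "#ff0000",                 # any CSS color; width is fixed at 3 px
    }
    print(underline_flat)
    print(arguments)
    ```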

Matching MCP Servers

Matching MCP Connectors

  • Find clusters of related learnings that are ripe for compression. When many similar solutions get linked together (e.g., 10+ 'relates_to' entries about the same issue), they clutter search results and waste agent time. Use this tool to discover clusters that could be compressed into a single consolidated learning.
    WORKFLOW:
    1. Call get_compression_candidates with min_cluster_size=3 (or higher)
    2. Review the returned clusters - each has full content for every learning
    3. Synthesize a compressed version: one clear (Issue) section plus agent-specific nuances (grok adds X, claude adds Y)
    4. Call compress_learnings with the learning_ids, new title, and synthesized content
    5. Show preview to user, then confirm_compression on approval
    Only use when you've seen or been asked about compressing duplicate/similar solutions.
    Connector
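    A sketch of the workflow spelled out above, with `call_tool` as a stand-in for your MCP client and canned return values so the flow is visible end to end; the cluster contents and synthesized text are made up.
    ```python
    def call_tool(name: str, arguments: dict) -> dict:
        """Placeholder: simulate a tool call for the sketch."""
        print(f"-> {name}({arguments})")
        return {"clusters": [{"learning_ids": ["l-101", "l-102", "l-103"], "learnings": ["..."]}],
                "preview": "(Issue) Auth-token refresh races ..."}

    candidates = call_tool("get_compression_candidates", {"min_cluster_size": 3})
    cluster = candidates["clusters"][0]

    synthesized = (
        "(Issue) Auth-token refresh races under parallel test runs.\n"
        "grok adds: only seen on CI runners.\n"
        "claude adds: retry with jitter avoids it."
    )

    proposal = call_tool("compress_learnings", {
        "learning_ids": cluster["learning_ids"],
        "title": "Auth-token refresh races",
        "content": synthesized,
    })
    # Show proposal["preview"] to the user, then call confirm_compression on approval.
    ```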
  • Discover AXIS install metadata, pricing, and shareable manifests for commerce-capable agents. Free, no auth, and no mutation beyond read access. Example: call before wiring AXIS into Claude Desktop, Cursor, or VS Code. Use this when you need onboarding and ecosystem setup details. Use search_and_discover_tools instead for keyword routing or discover_agentic_purchasing_needs for purchasing-task triage.
    Connector
  • Add one or more tasks to an event (task list). Supports bulk creation. IMPORTANT: Set response_type correctly — use "text" for info collection (names, phones, emails, notes), "photo" for visual verification (inspections, serial numbers, damage checks), "checkbox" only for simple confirmations. NOTE: To dispatch tasks to the Claude Code agent running on Mike's PC, use tascan_dispatch_to_agent instead — it routes directly to the agent's inbox with zero configuration needed.
    Connector
  • Register your agent to start contributing. Call this ONCE on first use. After registering, save the returned api_key to ~/.agents-overflow-key then call authenticate(api_key=...) to start your session. agent_name: A creative, fun display name for your agent. BE CREATIVE — combine your platform/model with something fun and unique! Good examples: 'Gemini-Galaxy', 'Claude-Catalyst', 'Cursor-Commander', 'Jetson-Jedi', 'Antigrav-Ace', 'Copilot-Comet', 'Nova-Navigator' BAD (too generic): 'DevBot', 'CodeHelper', 'Assistant', 'Antigravity', 'Claude' DO NOT just use your platform name or a generic word. Be playful! platform: Your platform — one of: antigravity, claude_code, cursor, windsurf, copilot, other
    Connector
  • Walk an HTTP redirect chain hop-by-hop, returning per-hop {url, status_code, location, latency_ms}. Use to deobfuscate URL shorteners (bit.ly / t.co / lnkd.in), audit suspicious links from phishing investigations, or trace marketing tracking redirects. SSRF-guarded: each redirect target's resolved IP is re-validated before connecting (private IPs and non-HTTP schemes rejected). Up to 10 hops; loop_detected=true if a hop would revisit a previously-seen URL (we abort before the duplicate fetch); truncated=true if the chain still had a 30x at hop 10. Per-target eTLD+1 throttle (60 req/min) consumed once for the start host AND once per new host reached — a chain across 11 unrelated domains cannot bypass the cap. Free: 100/hr, Pro: 1000/hr. Returns {start_url, final_url, hops, hop_count, final_status, loop_detected, truncated, summary}. Returns 502 ErrorResponse on hard fetch failure (timeout / TLS / connect); 429 with Retry-After if a hop's eTLD+1 throttle is exceeded mid-chain.
    Connector
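    The response shape above is easy to walk hop by hop; this sketch parses a small made-up payload that matches the documented fields (it is not real output from the tool).
    ```python
    response = {
        "start_url": "https://bit.ly/3xyzABC",
        "final_url": "https://example.com/landing?utm_source=newsletter",
        "final_status": 200,
        "loop_detected": False,
        "truncated": False,
        "hop_count": 2,
        "hops": [
            {"url": "https://bit.ly/3xyzABC", "status_code": 301,
             "location": "https://t.co/abcd", "latency_ms": 42},
            {"url": "https://t.co/abcd", "status_code": 302,
             "location": "https://example.com/landing?utm_source=newsletter", "latency_ms": 58},
        ],
    }

    for i, hop in enumerate(response["hops"], start=1):
        print(f"hop {i}: {hop['status_code']} {hop['url']} -> {hop['location']} ({hop['latency_ms']} ms)")
    print("final:", response["final_status"], response["final_url"])
    ```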
  • Generates a browser authorization URL for connecting a new social account to a project. This endpoint is useful for multi-user integrations where your application lets your own users, clients, or brands connect their social accounts to WoopSocial without giving them access to your WoopSocial account.
    A common flow is:
    1. Create or select a WoopSocial project for your user, client, or brand.
    2. Call this endpoint from your backend with that `projectId`, the target `platform`, and a `redirectUrl` in your application.
    3. Open the returned `url` in your user's browser.
    4. After OAuth completes, WoopSocial redirects the browser back to `redirectUrl` with result query parameters.
    5. Use `projectId` and `socialAccountIds` from the redirect, or call `GET /social-accounts?projectId=...`, to store or confirm the connected account in your application.
    When `redirectUrl` is provided, the browser is redirected back to that URL after the OAuth callback is handled. On success, WoopSocial appends these query parameters to `redirectUrl`:
    - `status=success`
    - `projectId`: the project identifier from the request
    - `platform`: the connected social platform
    - `socialAccountIds`: comma-separated connected social account identifiers. This may contain one or more IDs depending on the platform OAuth flow.
    On failure, WoopSocial appends these query parameters to `redirectUrl`:
    - `status=error`
    - `projectId`: the project identifier from the request
    - `platform`: the requested social platform
    - `error`: an OAuth callback error code
    If the OAuth callback state is missing or expired, WoopSocial cannot safely determine the original `redirectUrl`, so the callback returns an HTTP error instead of redirecting. The redirect never includes OAuth tokens or credentials.
    Connector
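    A sketch of step 5 of the flow above: parsing the query parameters WoopSocial appends to your `redirectUrl` after the OAuth callback. The URL is a made-up success example; the parameter names follow the description.
    ```python
    from urllib.parse import urlparse, parse_qs

    redirect = ("https://app.example.com/social/callback"
                "?status=success&projectId=prj_123&platform=instagram"
                "&socialAccountIds=acc_1,acc_2")

    params = parse_qs(urlparse(redirect).query)

    if params["status"][0] == "success":
        # socialAccountIds is comma-separated and may hold one or more IDs.
        account_ids = params["socialAccountIds"][0].split(",")
        print("connected", params["platform"][0], "accounts:", account_ids)
    else:
        print("OAuth failed:", params.get("error", ["unknown"])[0])
    ```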
  • Get a report on source URL visibility and citations across AI search engines. Results are aggregated for the entire date range by default. Use the "date" dimension for daily breakdowns. Returns columnar JSON: {columns, rows, rowCount}. Each row is an array of values matching column order. Columns: - url: the full source URL (e.g. "https://example.com/page") - classification: page type — Homepage, Category Page, Product Page, Listicle (list-structured articles), Comparison (product/service comparisons), Profile (directory entries like G2 or Yelp), Alternative (alternatives-to articles), Discussion (forums, comment threads), How-To Guide, Article (general editorial content), Other, or null - title: page title or null - channel_title: channel or author name (e.g. YouTube channel, subreddit) or null - citation_count: total number of explicit citations across all chats - retrieval_count: total number of distinct chats that retrieved this URL, regardless of whether it was cited - citation_rate: average number of inline citations per chat when this URL is retrieved. Can exceed 1.0 — higher values indicate more authoritative content. - mentioned_brand_ids: array of brand IDs mentioned alongside this URL (may be empty) When dimensions are selected, rows also include the relevant dimension columns: prompt_id, model_id, model_channel_id, tag_id, topic_id, chat_id, date, country_code. Dimensions explained: - prompt_id: individual search queries/prompts - model_id: AI search engine (e.g. chatgpt-scraper, gpt-4o, gpt-4o-search, gpt-3.5-turbo, llama-sonar, perplexity-scraper, sonar, gemini-2.5-flash, gemini-scraper, google-ai-overview-scraper, google-ai-mode-scraper, llama-3.3-70b-instruct, deepseek-r1, claude-3.5-haiku, claude-haiku-4.5, claude-sonnet-4, grok-scraper, microsoft-copilot-scraper, grok-4, qwen-3-6-plus, amazon-rufus-scraper) — deprecated, prefer model_channel_id - model_channel_id: stable engine channel (e.g. openai-0, openai-1, qwen-0, openai-2, perplexity-0, perplexity-1, google-0, google-1, google-2, google-3, anthropic-0, anthropic-1, deepseek-0, meta-0, xai-0, xai-1, microsoft-0, amazon-0) — survives model upgrades - tag_id: custom user-defined tags - topic_id: topic groupings - date: (YYYY-MM-DD format) - country_code: country (ISO 3166-1 alpha-2, e.g. "US", "DE") - chat_id: individual AI chat/conversation ID Filters use {field, operator, values} where operator is "in" or "not_in". Filterable fields: model_id (deprecated), model_channel_id, tag_id, topic_id, prompt_id, domain, domain_classification, url, url_classification, country_code, chat_id, mentioned_brand_id. Additional filters: - mentioned_brand_count: {field: "mentioned_brand_count", operator: "gt"|"gte"|"lt"|"lte", value: <number>} — filter by number of unique brands mentioned. - gap: {field: "gap", operator: "gt"|"gte"|"lt"|"lte", value: <number>} — gap analysis filter. Excludes URLs where the project's own brand is mentioned, and filters by the number of competitor brands present. Example: {field: "gap", operator: "gte", value: 2} returns URLs where the own brand is absent but at least 2 competitors are mentioned. Sort results with order_by: array of {field, direction} entries. Direction defaults to desc. Sortable fields: retrieval_count, retrievals, citation_count, citation_rate. Multiple entries create a multi-key sort.
    Connector
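    An illustrative request body for the source-URL report above: a gap-analysis query restricted to two engine channels, with a daily breakdown and a citation_count sort. The filter and order_by shapes follow the description; top-level names such as the date-range fields are assumptions, and the IDs are placeholders.
    ```python
    report_request = {
        "date_from": "2026-04-05",   # assumed parameter name for the range start
        "date_to": "2026-05-05",     # assumed parameter name for the range end
        "dimensions": ["date"],      # daily breakdown instead of whole-range aggregation
        "filters": [
            {"field": "model_channel_id", "operator": "in", "values": ["openai-0", "anthropic-0"]},
            # Gap analysis: own brand absent, at least 2 competitor brands present.
            {"field": "gap", "operator": "gte", "value": 2},
        ],
        "order_by": [{"field": "citation_count", "direction": "desc"}],
    }
    print(report_request)
    ```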
  • Propose compressing multiple related learnings into one consolidated learning. Call this AFTER get_compression_candidates and synthesizing the compressed content. Same approval flow as submit_learning: show preview to user, then confirm_compression on approval or reject_compression on decline. The compressed content should follow the format: (Issue) summary, then agent-specific nuances (e.g. grok adds X, claude adds Y).
    Connector
  • Get a report on source domain visibility and citations across AI search engines. Results are aggregated for the entire date range by default. Use the "date" dimension for daily breakdowns. Returns columnar JSON: {columns, rows, rowCount}. Each row is an array of values matching column order. Columns: - domain: the source domain (e.g. "example.com") - classification: domain type — Corporate (official company sites), Editorial (news, blogs, magazines), Institutional (government, education, nonprofit), UGC (social media, forums, communities), Reference (encyclopedias, documentation), Competitor (direct competitors), You (the user's own domains), Other, or null - retrieved_percentage: 0–1 ratio — fraction of chats that included at least one URL from this domain. 0.30 means 30% of chats. - retrieval_rate: average number of URLs from this domain pulled per chat. Can exceed 1.0 — values above 1.0 mean multiple pages from the same domain are retrieved per conversation. - citation_rate: average number of inline citations when this domain is retrieved. Can exceed 1.0 — higher values indicate stronger content authority. - retrieval_count: total number of distinct URL retrievals from this domain across all chats (raw count — numerator of retrieval_rate). - citation_count: total number of citations from this domain (raw count). - mentioned_brand_ids: array of brand IDs mentioned alongside URLs from this domain (may be empty) When dimensions are selected, rows also include the relevant dimension columns: prompt_id, model_id, model_channel_id, tag_id, topic_id, chat_id, date, country_code. Dimensions explained: - prompt_id: individual search queries/prompts - model_id: AI search engine (e.g. chatgpt-scraper, gpt-4o, gpt-4o-search, gpt-3.5-turbo, llama-sonar, perplexity-scraper, sonar, gemini-2.5-flash, gemini-scraper, google-ai-overview-scraper, google-ai-mode-scraper, llama-3.3-70b-instruct, deepseek-r1, claude-3.5-haiku, claude-haiku-4.5, claude-sonnet-4, grok-scraper, microsoft-copilot-scraper, grok-4, qwen-3-6-plus, amazon-rufus-scraper) — deprecated, prefer model_channel_id - model_channel_id: stable engine channel (e.g. openai-0, openai-1, qwen-0, openai-2, perplexity-0, perplexity-1, google-0, google-1, google-2, google-3, anthropic-0, anthropic-1, deepseek-0, meta-0, xai-0, xai-1, microsoft-0, amazon-0) — survives model upgrades - tag_id: custom user-defined tags - topic_id: topic groupings - date: (YYYY-MM-DD format) - country_code: country (ISO 3166-1 alpha-2, e.g. "US", "DE") - chat_id: individual AI chat/conversation ID Filters use {field, operator, values} where operator is "in" or "not_in". Filterable fields: model_id (deprecated), model_channel_id, tag_id, topic_id, prompt_id, domain, domain_classification, url, country_code, chat_id, mentioned_brand_id. Additional filters: - mentioned_brand_count: {field: "mentioned_brand_count", operator: "gt"|"gte"|"lt"|"lte", value: <number>} — filter by number of unique brands mentioned. - gap: {field: "gap", operator: "gt"|"gte"|"lt"|"lte", value: <number>} — gap analysis filter. Excludes domains where the project's own brand is mentioned, and filters by the number of competitor brands present. Example: {field: "gap", operator: "gte", value: 2} returns domains where the own brand is absent but at least 2 competitors are mentioned. Sort results with order_by: array of {field, direction} entries. Direction defaults to desc. Sortable fields: citation_rate, retrieval_count, citation_count. 
    (retrieved_percentage and retrieval_rate are not sortable because they depend on totalChatCount fetched in a separate query.)
    Connector
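    The domain report above returns columnar JSON ({columns, rows, rowCount}); a small helper like the one below turns each row back into a dict keyed by column name. The sample payload is made up but follows the documented columns.
    ```python
    sample = {
        "columns": ["domain", "classification", "retrieved_percentage", "retrieval_rate",
                    "citation_rate", "retrieval_count", "citation_count", "mentioned_brand_ids"],
        "rows": [
            ["example.com", "Corporate", 0.30, 1.4, 0.9, 412, 260, ["brand_7"]],
        ],
        "rowCount": 1,
    }

    records = [dict(zip(sample["columns"], row)) for row in sample["rows"]]
    for rec in records:
        # retrieved_percentage 0.30 = the domain appeared in 30% of chats;
        # retrieval_rate 1.4 = on average 1.4 of its URLs are pulled per chat.
        print(rec["domain"], f'{rec["retrieved_percentage"]:.0%} of chats', rec["citation_count"], "citations")
    ```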
  • Get comprehensive RDF data for a DanNet sense (lexical sense).
    UNDERSTANDING THE DATA MODEL: Senses are ontolex:LexicalSense instances connecting words to synsets. They represent specific meanings of words with examples and definitions.
    KEY RELATIONSHIPS:
    1. LEXICAL CONNECTIONS:
    - ontolex:isSenseOf → word this sense belongs to
    - ontolex:isLexicalizedSenseOf → synset this sense represents
    2. SEMANTIC INFORMATION:
    - lexinfo:senseExample → usage examples in context
    - rdfs:label → sense label (e.g., "hund_1§1")
    3. REGISTER AND STYLISTIC INFORMATION:
    - lexinfo:register → formal register classification (e.g., ":lexinfo/slangRegister")
    - lexinfo:usageNote → human-readable usage notes (e.g., "slang", "formal")
    4. SOURCE INFORMATION:
    - dns:source → source URL for this sense entry
    DDO CONNECTION (Den Danske Ordbog): DanNet senses are derived from DDO (ordnet.dk), the authoritative modern Danish dictionary.
    SENSE LABELS: The format "word_entry§definition" connects to DDO structure:
    - "hund_1§1" = word "hund", entry 1, definition 1 in DDO
    - "forlygte_§2" = word "forlygte", definition 2 in DDO
    - The § notation directly corresponds to DDO's definition numbering
    SOURCE TRACEABILITY: The dns:source URLs link back to specific DDO entries:
    - Format: https://ordnet.dk/ddo/ordbog?entry_id=X&def_id=Y&query=word
    - Note: Some DDO URLs may not resolve correctly if IDs have changed since import
    - If the DDO page loads correctly, the relevant definition has CSS class "selected"
    METADATA ORIGINS: Usage examples, register information, and definitions flow from DDO's corpus-based lexicographic data, providing authoritative linguistic information.
    NAVIGATION TIPS:
    - Follow ontolex:isSenseOf to find the parent word
    - Follow ontolex:isLexicalizedSenseOf to find the synset
    - Check lexinfo:senseExample for usage examples from DDO corpus
    - Check lexinfo:register and lexinfo:usageNote for stylistic information
    - Use dns:source to attempt tracing back to original DDO definition (with caveats)
    - Use parse_resource_id() on URI references to get clean IDs
    Args: sense_id: Sense identifier (e.g., "sense-21033604" or just "21033604")
    Returns: Dict containing:
    - All RDF properties with namespace prefixes (e.g., ontolex:isSenseOf)
    - resource_id → clean identifier for convenience
    - All sense properties and relationships
    Example:
    info = get_sense_info("sense-21033604")  # "hund_1§1" sense
    # Check info['ontolex:isSenseOf'] for parent word
    # Check info['ontolex:isLexicalizedSenseOf'] for synset
    # Check info['lexinfo:senseExample'] for usage examples from DDO
    # Check info['lexinfo:register'] for register classification
    # Check info['lexinfo:usageNote'] for usage notes like "slang"
    # Check info['dns:source'] for DDO source URL (may not always work)
    Connector
  • CALL THIS TOOL when your orchestrator is budget-constrained and cannot afford the full AI classification. validate_data_safety_lite runs pattern detection only -- no Claude API call, no IP check, no credential lookup. Returns verdict and detected_categories in under 100ms at roughly 70% lower token cost than validate_data_safety. Use when: (1) your budget ledger has less than 300 tokens remaining for this call, (2) you need a fast pre-screen before committing to a full AI classification, or (3) you are processing high-volume data where AI classification is applied selectively. Returns SAFE_TO_PROCESS if no sensitive patterns found, REVIEW_REQUIRED if patterns detected. If REVIEW_REQUIRED, follow up with validate_data_safety for full AI verdict with regulatory framework mapping. LEGAL NOTICE: Pattern detection only -- not a substitute for AI-powered classification in regulated environments. Full terms: kordagencies.com/terms.html. Free tier: 20 calls/month.
    Connector
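    A sketch of the two-step screen described above: run the cheap pattern-only check first, and escalate to the full AI classification only when it returns REVIEW_REQUIRED. `call_tool` is a stand-in for your MCP client, and the argument name `data` is an assumption.
    ```python
    def call_tool(name: str, arguments: dict) -> dict:
        """Placeholder: simulate a tool call for the sketch."""
        print(f"-> {name}({arguments})")
        return {"verdict": "REVIEW_REQUIRED", "detected_categories": ["email", "phone"]}

    payload = {"data": "Contact: jane.doe@example.com, +1 555 0100"}

    lite = call_tool("validate_data_safety_lite", payload)
    if lite["verdict"] == "REVIEW_REQUIRED":
        # Full verdict with regulatory framework mapping; costs more tokens.
        full = call_tool("validate_data_safety", payload)
        print(full)
    ```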
  • Read Claude Code project memory files. Without arguments, returns the MEMORY.md index listing all available memories. With a filename argument, returns the full content of that specific memory file. Use this to access project context, user preferences, feedback, and reference notes persisted across Claude Code sessions.
    Connector
  • List chats (individual AI responses) for a project over a date range. Each chat is produced by running one prompt against one AI engine on a given date. Filters: - brand_id: only chats that mentioned the given brand - prompt_id: only chats produced by the given prompt - model_id: only chats from the given AI engine (chatgpt-scraper, gpt-4o, gpt-4o-search, gpt-3.5-turbo, llama-sonar, perplexity-scraper, sonar, gemini-2.5-flash, gemini-scraper, google-ai-overview-scraper, google-ai-mode-scraper, llama-3.3-70b-instruct, deepseek-r1, claude-3.5-haiku, claude-haiku-4.5, claude-sonnet-4, grok-scraper, microsoft-copilot-scraper, grok-4, qwen-3-6-plus, amazon-rufus-scraper) — deprecated, prefer model_channel_id - model_channel_id: only chats from the given engine channel (openai-0, openai-1, qwen-0, openai-2, perplexity-0, perplexity-1, google-0, google-1, google-2, google-3, anthropic-0, anthropic-1, deepseek-0, meta-0, xai-0, xai-1, microsoft-0, amazon-0) If both model_id and model_channel_id are provided, model_channel_id takes precedence and model_id is ignored. Use the returned chat IDs with get_chat to retrieve full message content, sources, and brand mentions. Returns columnar JSON: {columns, rows, rowCount, totalCount}. rowCount is the rows in this page; totalCount is the total matching records ignoring limit/offset. Columns: id, prompt_id, model_id, model_channel_id, date.
    Connector
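    A sketch of the flow described above: list chats for a date range filtered by engine channel, then fetch each one with get_chat. `call_tool` stands in for your MCP client; names not given in the listing (the `list_chats` tool name and the date-range fields) are assumptions, and the canned return matches the documented columnar shape.
    ```python
    def call_tool(name: str, arguments: dict) -> dict:
        """Placeholder: simulate a tool call for the sketch."""
        print(f"-> {name}({arguments})")
        return {"columns": ["id", "prompt_id", "model_id", "model_channel_id", "date"],
                "rows": [["chat_1", "p_9", "gpt-4o-search", "openai-1", "2026-05-01"]],
                "rowCount": 1, "totalCount": 1}

    chats = call_tool("list_chats", {
        "date_from": "2026-05-01",          # assumed field name
        "date_to": "2026-05-05",            # assumed field name
        "model_channel_id": "openai-1",     # takes precedence over model_id if both are set
    })

    id_index = chats["columns"].index("id")
    for row in chats["rows"]:
        detail = call_tool("get_chat", {"chat_id": row[id_index]})
        print(detail)
    ```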