Glama
114,257 tools. Last updated 2026-04-21 08:14
  • Confirm that the file has been uploaded (via HTTP PUT to the upload_url from transcribe or summarize) and start processing. Verifies that the file is present in storage and that the job has been paid. Returns status "processing". Poll get_job_status to track progress and retrieve download URLs when done.
    Connector
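The confirm-then-poll workflow above can be sketched as a small helper. This is a minimal sketch, not the tool's actual client: `poll_until_done`, its `interval_s`/`max_polls` parameters, and the assumed terminal statuses ("completed", "failed") are all illustrative; only `get_job_status` and the "processing" status come from the description.

```python
import time

def poll_until_done(get_status, interval_s=5.0, max_polls=60, sleep=time.sleep):
    """Poll a zero-arg status callable (e.g. a wrapper around
    get_job_status) until it reports a terminal state.

    Assumes the response dict carries a "status" key that moves from
    "processing" to "completed" or "failed". Returns the final status
    dict, or raises TimeoutError if max_polls is exhausted.
    """
    for _ in range(max_polls):
        job = get_status()
        if job.get("status") in ("completed", "failed"):
            return job
        sleep(interval_s)
    raise TimeoutError("job still processing after max_polls")
```

Injecting the `sleep` callable keeps the loop testable without real delays.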
  • Convert raster images to SVG vector format. Supports color and binary modes with precision controls. Returns raw SVG XML string. FREE.
    Connector
  • Execute bash commands in a REMOTE sandbox for file operations, data processing, and system tasks. Essential for handling large tool responses saved to remote files. PRIMARY USE CASES: - Process large tool responses saved by RUBE_MULTI_EXECUTE_TOOL to remote sandbox - File system operations, extract specific information from JSON with shell tools like jq, awk, sed, grep, etc. - Commands run from /home/user directory by default
    Connector
  • Save an artifact to storage. Stores user-created content (diagrams, notes, code) in an organized file structure. Content is also indexed for search.
    Args:
      content: File content to save.
      path: Full path including filename (e.g., "/project/docs/api.md").
    Returns: Success message or error description.
    Examples:
      >>> await save_artifact("# README", "/readme.md")
      "✅ Artifact saved: /readme.md (8 bytes)"
      >>> await save_artifact("<svg>...</svg>", "/diagrams/architecture.svg")
      "✅ Artifact saved: /diagrams/architecture.svg (image/svg+xml, 45 bytes)"
    Connector

Matching MCP Servers

Matching MCP Connectors

  • Image processing for AI agents. Resize, convert, compress, and pipeline images.

  • Manage files and folders directly from your workspace. Read and write files, list directories, cre…

  • Retrieve SVG body data for one or more icons in a specific collection. Returns SVG body, width, and height for each icon.
    Connector
  • Add or modify a governance proposal on a tension. To add new: omit _id. To modify existing: include _id with changed fields. See nestr_help('tension-processing').
    Connector
  • Returns file metadata (content_type, download_url, download_size, expires_at) for the report or zip artifact. Use artifact='report' (default) for the interactive HTML report (~700KB, self-contained with embedded JS for collapsible sections and interactive Gantt charts — open in a browser). Use artifact='zip' for the full pipeline output bundle (md, json, csv intermediary files that fed the report). While the task is still pending or processing, returns {ready:false,reason:"processing"}. Check readiness by testing whether download_url is present in the response. Once ready, present download_url to the user or fetch and save the file locally. Download URLs expire after 15 minutes (see expires_at); call plan_file_info again to get a fresh URL if needed. Terminal error codes: generation_failed (plan failed), content_unavailable (artifact missing). Unknown plan_id returns error code PLAN_NOT_FOUND.
    Connector
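The readiness and expiry rules above are mechanical enough to capture in two small predicates. A minimal sketch: the helper names and the assumption that `expires_at` is ISO-8601 UTC are mine; the "presence of download_url means ready" rule and the 15-minute expiry come from the description.

```python
from datetime import datetime, timezone

def artifact_ready(resp):
    """Per the tool docs, readiness is signalled by the presence of
    download_url; {ready: false, reason: "processing"} means keep waiting."""
    return bool(resp.get("download_url"))

def url_expired(resp, now=None):
    """Check expires_at (assumed ISO-8601 with offset) against the clock.
    If expired, call plan_file_info again for a fresh URL."""
    now = now or datetime.now(timezone.utc)
    return now >= datetime.fromisoformat(resp["expires_at"])
```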
  • DESTRUCTIVE — IRREVERSIBLE. Permanently delete a file from the user's Drive. Removes the file from S3 storage and the database. Storage quota is freed immediately. ALWAYS ask for explicit user confirmation before calling this tool.
    Connector
  • Check multiple URLs in a single batch. Returns results for all URLs, handling async processing automatically. Each URL is analysed across seven dimensions: redirect behaviour, brand impersonation, domain intelligence (age, registrar, expiration, status codes, nameservers via RDAP), SSL/TLS validity, parked domain detection, URL structural analysis, and DNS enrichment. Known and cached URLs return results immediately. Unknown URLs are queued for pipeline processing. This tool automatically polls for results until all URLs are complete or the 5-minute timeout is reached. You don't need to manage polling or job tracking. If the timeout is reached before all results are complete, returns whatever is available with a clear message indicating which URLs are still processing. The user can check results later via check_history. Maximum 500 URLs per call. For larger datasets, call this tool multiple times with chunks of up to 500 URLs. Billing: Same as check_url. Known and cached domains are free. Only unknown domains running through the full pipeline cost 1 credit each. The summary shows pipeline_checks_charged (the actual number of credits consumed). If you don't have enough credits for the unknowns in the batch, the entire batch is rejected with a 402 error telling you exactly how many credits are needed. Duplicate URLs in the list are automatically deduplicated (processed once, charged once). Invalid URLs get individual error status without rejecting the batch. Use the "profile" parameter to score all results with custom weights.
    Connector
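The 500-URL cap and automatic deduplication above suggest pre-chunking large datasets client-side. A sketch under those two documented constraints; the function name is illustrative:

```python
def batch_urls(urls, max_batch=500):
    """Deduplicate a URL list (preserving first-seen order, mirroring the
    tool's own dedup) and split it into batches no larger than the
    documented 500-URL per-call limit."""
    unique = list(dict.fromkeys(urls))
    return [unique[i:i + max_batch] for i in range(0, len(unique), max_batch)]
```

Each resulting batch is then one call; duplicates across batches would still be cached (and free) on the second call.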
  • Is macro with you or against you? Get the current regime (bull/bear/risk_on/risk_off/choppy), directional signal and confidence, and macro context (DXY, VIX, fear/greed) before entering a position. Data-only, no LLM latency. 17 tokens supported. REST equivalent: POST /analyze/market (0.25 USDC).
    Args:
      token: Token symbol (BTC, ETH, SOL, XRP, ADA, DOGE, AVAX, LINK, BNB, ATOM, DOT, ARB, SUI, OP, LTC, AMP, ZEC).
      context: Optional historical context window ('7d' or '30d'). Adds percentile rankings.
    Connector
  • Start an async rank of multiple candidates against a job description (8 credits). Returns task_id and analysis_id. Poll with careerproof_task_status, then fetch result with careerproof_task_result (result_type='fit_rank'). Requires context_id from atlas_list_contexts, candidate_ids from atlas_list_candidates (minimum 2), and jd_text. For async batch processing with more detail, use atlas_start_jd_fit_batch instead.
    Connector
  • Find working SOURCE CODE examples from 27 indexed Senzing GitHub repositories. Indexes only source code files (.py, .java, .cs, .rs) and READMEs — NOT build files (Cargo.toml, pom.xml), data files (.jsonl, .csv), or project configuration. For sample data, use get_sample_data instead. Covers Python, Java, C#, and Rust SDK usage patterns including initialization, record ingestion, entity search, redo processing, and configuration. Also includes message queue consumers, REST API examples, and performance testing. Supports three modes: (1) Search: query for examples across all repos, (2) File listing: set repo and list_files=true to see all indexed source files in a repo, (3) File retrieval: set repo and file_path to get full source code. Use max_lines to limit large files. Returns GitHub raw URLs for file retrieval — fetch to read the source code.
    Connector
  • Search Helium's balanced news stories — AI-synthesized articles that aggregate multiple sources. Unlike search_news (which returns individual RSS articles), this returns Helium's own synthesized stories: each one draws from multiple sources and includes an AI-written summary, takeaway, context, evidence breakdown, potential outcomes, and relevant tickers.
    Returns a list of stories, each with:
      - title, simple_title, date, category
      - page_url: full URL to the story on heliumtrades.com
      - image: story image URL (when available)
      - summary: Helium's synthesized overview
      - takeaway: key conclusion
      - context: background context
      - evidence: numbered evidence items
      - potential_outcomes: forward-looking outcomes with probabilities
      - relevant_tickers: related stock tickers
      - num_sources: number of source articles synthesized
      - rank: search relevance score
    Args:
      query: Search keywords (required).
      limit: Max results (1-50, default 10).
      category: Filter by category. One of: 'tech', 'politics', 'markets', 'business', 'science'.
      days_back: Only include stories from the last N days. 0 means no date filter.
    Connector
  • Get an overview of the Velvoite regulatory corpus. Returns document counts by source, regulation family, entity type, urgency distribution, obligation summary, and date range. Call this FIRST to orient yourself before running queries. No parameters needed.
    Connector
  • Upload a dataset file and return a file reference for use with discovery_analyze. Call this before discovery_analyze. Pass the returned result directly to discovery_analyze as the file_ref argument. Provide exactly one of: file_url, file_path, or file_content.
    Args:
      file_url: A publicly accessible http/https URL. The server downloads it directly. Best option for remote datasets.
      file_path: Absolute path to a local file. Only works when running the MCP server locally (not the hosted version). Streams the file directly — no size limit.
      file_content: File contents, base64-encoded. For small files when a URL or path isn't available. Limited by the model's context window.
      file_name: Filename with extension (e.g. "data.csv"), for format detection. Only used with file_content. Default: "data.csv".
      api_key: Disco API key (disco_...). Optional if DISCOVERY_API_KEY env var is set.
    Connector
  • Package generated 3D scene output into downloadable files.
    Formats:
      r3f -> Packages R3F code into a named .tsx file. Requires r3f_code string from generate_r3f_code. Does NOT regenerate code - it packages what you give it.
      json -> Packages scene_data into a named .json file. Requires scene_data object from generate_scene.
    Call order:
      For .tsx file: generate_r3f_code(scene_data) -> export_asset({ r3f_code, format: "r3f" })
      For .json file: generate_scene(scene_plan) -> export_asset({ scene_data, format: "json" })
    For a visual preview of the scene layout, use the preview tool instead; it returns an SVG wireframe plus spatial validation. export_asset does not generate previews. Do NOT pass synthesized_components to export_asset. Pass them to generate_r3f_code, then pass the resulting r3f_code here.
    Connector
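The two formats and their required inputs can be captured in a small argument builder. A sketch only: `build_export_request` is a hypothetical helper, but the format names and the r3f_code/scene_data requirements are from the description.

```python
def build_export_request(format, r3f_code=None, scene_data=None):
    """Assemble an export_asset argument dict following the documented
    call order: 'r3f' needs r3f_code from generate_r3f_code, 'json'
    needs scene_data from generate_scene."""
    if format == "r3f":
        if not r3f_code:
            raise ValueError("format 'r3f' requires r3f_code from generate_r3f_code")
        return {"format": "r3f", "r3f_code": r3f_code}
    if format == "json":
        if scene_data is None:
            raise ValueError("format 'json' requires scene_data from generate_scene")
        return {"format": "json", "scene_data": scene_data}
    raise ValueError(f"unsupported format: {format}")
```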
  • Task-scoped context briefing. Returns a prioritised context payload shaped by your task description, ranked by risk-if-missed. Constraints and alerts rank above general knowledge. Use at the START of reasoning about a question to get the system's best assessment of what's relevant. Complements query_memory: this gives breadth, query_memory gives depth.
    Connector
  • Get report status and metadata (without PDF). Returns status (pending/processing/completed/failed), title, type, inputs, and summary. This is the polling tool for ceevee_generate_report — call every 30 seconds, up to 40 times (20 min max). When status='completed', download PDF with ceevee_download_report(report_id). If status='failed', relay error_message. If still processing after 40 polls, stop and give the user the report_id to check later. Free.
    Connector
  • Search Hansard for parliamentary debates, questions, and speeches. Returns contributions from MPs and Lords including date, party, debate title, and text (capped at 3000 chars per contribution). Useful for understanding legislative intent or political context.
    Connector
  • Get an overview of the AgentSignal collective intelligence network. Call this with NO arguments to see what categories have data, trending products, and how to use agent-signal tools. Good first call if you're unsure whether agent-signal has data relevant to the user's request.
    Connector
  • Edit a file in the solution's GitHub repo and commit. Two modes:
      1. FULL FILE: provide `content` — replaces entire file (good for new files or small files)
      2. SEARCH/REPLACE: provide `search` + `replace` — surgical edit without sending full file (preferred for large files like server.js)
    Always use search/replace for large files (>5KB). Always read the file first with ateam_github_read to get the exact text to search for.
    Connector
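The size-based mode choice above is a one-liner worth encoding so an agent applies it consistently. A sketch under the documented >5KB rule; the function name and byte threshold encoding are assumptions:

```python
def choose_edit_mode(file_bytes, threshold=5 * 1024):
    """Per the tool's guidance: full-file replacement for small files,
    search/replace for anything over ~5 KB."""
    return "search_replace" if len(file_bytes) > threshold else "full"
```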
  • Look up a file hash in the MalwareBazaar database to check if it is a known malware sample. Returns malware family name, file type, file size, tags, first/last seen dates, and download count. Use this when you have a suspicious file hash from logs, alerts, or forensic analysis and need to determine if it is malicious. For general IOC lookups that auto-detect indicator type, use ioc_lookup instead. Returns JSON with fields: found (boolean), malware_family, file_type, file_size, tags, first_seen, last_seen, and signature. Read-only database query, no authentication required.
    Connector
  • Submit a publicly accessible or authorized media URL to Echosaw for asynchronous analysis without uploading the file directly. Returns a job ID used to track processing and retrieve results.
    Connector
  • Read a tracked CHANGELOG file for a GitHub source. Monorepos expose per-package files (e.g. `packages/next/CHANGELOG.md`) alongside the root CHANGELOG — pass `path` to read a specific one, omit it to get the root. Supports heading-aligned slicing by chars (`limit`) or by tokens (`tokens`, cl100k_base) for LLM context budgeting. Every response includes `totalTokens` for the whole file and, in token mode, `sliceTokens` for the returned chunk. `totalTokens` is an exact cl100k_base count for files under 256KB and an approximation (`ceil(totalChars / 4)`) for larger files; `sliceTokens` is always exact. Files over 1MB are truncated at fetch time; the response flags this so you know the tail is missing.
    Connector
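The fallback token estimate above is given explicitly, so it can be reproduced client-side when budgeting context for files over 256KB. The function name is illustrative; the formula is the documented `ceil(totalChars / 4)`:

```python
import math

def approx_tokens(total_chars):
    """The documented approximation used for files over 256 KB:
    ceil(totalChars / 4). Under that size, totalTokens is an exact
    cl100k_base count instead."""
    return math.ceil(total_chars / 4)
```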
  • Estimate the credit cost of an analysis before running it. Returns credit cost, whether you have sufficient credits, and whether a free public alternative exists. Always call this before discovery_analyze for private runs.
    Args:
      file_size_mb: Size of the dataset in megabytes.
      num_columns: Number of columns in the dataset.
      analysis_depth: Search depth (1=fast, higher=deeper). Default 1.
      visibility: "public" (free, results published) or "private" (costs credits).
      use_llms: Slower and more expensive, but you get smarter pre-processing, summary page, literature context and pattern novelty assessment. Only applies to private runs — public runs always use LLMs. Default false.
      api_key: Disco API key (disco_...). Optional if DISCOVERY_API_KEY env var is set.
    Connector
  • Read Claude Code project memory files. Without arguments, returns the MEMORY.md index listing all available memories. With a filename argument, returns the full content of that specific memory file. Use this to access project context, user preferences, feedback, and reference notes persisted across Claude Code sessions.
    Connector
  • Get field definitions from the ClinicalTrials.gov study data model. Returns the field tree with piece names (used in the fields parameter and AREA[] filters), data types, and nesting structure. Call with no path for a top-level overview, then drill into a section with the path parameter to see its fields.
    Connector
  • Get Arcadia workflow guides and reference documentation. Call this before multi-step workflows (opening LP positions, enabling automation, closing positions) or when you need contract addresses, asset manager addresses, or strategy parameters. Topics: overview (addresses + tool catalog), automation (rebalancer/compounder setup), strategies (step-by-step templates), selection (how to evaluate and parameterize strategies).
    Connector
  • Render a mingrammer/diagrams Python snippet to PNG and return the image. The code must be a complete Python script using `from diagrams import ...` imports and a `with Diagram(...)` context manager block. Use search_nodes to verify node names and get correct import paths before writing code. Read the diagrams://reference/diagram, diagrams://reference/edge, and diagrams://reference/cluster resources for constructor options and usage examples.
    Args:
      code: Full Python code using the diagrams library.
      filename: Output filename without extension.
      format: Output format — ``"png"`` (default), ``"svg"``, or ``"pdf"``.
      download_link: If True, store the image on the server and return a temporary download URL path (/images/{token}) instead of the inline image. The link expires after 15 minutes.
    Connector
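A minimal example of the kind of `code` argument the renderer expects. The diagrams imports (`Diagram`, `EC2`, `RDS`, `ELB`) are real mingrammer/diagrams classes, but node names should still be verified with search_nodes as the entry advises; the surrounding request dict mirrors the documented parameters.

```python
# A complete diagrams script, held as a string to pass in the `code` field.
# `show=False` stops the library from trying to open an image viewer.
DIAGRAM_CODE = '''
from diagrams import Diagram
from diagrams.aws.compute import EC2
from diagrams.aws.database import RDS
from diagrams.aws.network import ELB

with Diagram("Web Service", show=False):
    ELB("lb") >> EC2("web") >> RDS("db")
'''

request = {
    "code": DIAGRAM_CODE,
    "filename": "web_service",  # no extension, per the tool docs
    "format": "png",
    "download_link": False,
}
```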
  • Upload a file to the user's Drive. The file must be base64-encoded. Max file size: 10 MB. Allowed types: PDF, DOC, DOCX, XLS, XLSX, PPT, PPTX, TXT, CSV, JPG, JPEG, PNG, GIF, WEBP, SVG, BMP. Filenames are sanitized (spaces to underscores, special characters removed).
    Connector
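The encoding, size cap, and filename rules above can be sketched client-side. The exact sanitization rules are not specified beyond "spaces to underscores, special characters removed", so the regex below is an assumption; the 10 MB cap and base64 requirement are documented.

```python
import base64
import re

MAX_BYTES = 10 * 1024 * 1024  # documented 10 MB cap

def sanitize_filename(name):
    """Approximate the documented sanitization: spaces become underscores,
    then anything outside a conservative safe set is dropped (the server's
    actual character set is a guess)."""
    name = name.replace(" ", "_")
    return re.sub(r"[^A-Za-z0-9._-]", "", name)

def encode_upload(data: bytes):
    """Base64-encode file bytes for upload, enforcing the size limit
    on the raw bytes (pre-encoding, which is an assumption)."""
    if len(data) > MAX_BYTES:
        raise ValueError("file exceeds the 10 MB upload limit")
    return base64.b64encode(data).decode("ascii")
```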
  • Compact schematic SVG render of the board (typically a few kB even for dense boards). Returns both an image/svg+xml content block (you can SEE it) and the raw SVG text. CALL THIS any time you need to understand where things are — before placing new items, before deciding whether the canvas is crowded, before picking a free region. AI-authored items get a purple border so you can tell which contributions were yours. For precise text content prefer `get_board`.
    Connector
  • Fetch the current state of a single transcribe job by ID, including status (queued/processing/completed/failed) and `output_url` when completed. Mirrors `get_transcode` but for SRT generation jobs created via `transcribe_audio`.
    Connector
  • Submit a support request, complaint, or recommendation. Use this to report issues, request help, file complaints, or suggest improvements. Returns a request ID for tracking. Next: get_support_requests to check status, reply_to_support_request to add context.
    Connector
  • List all documents in a deal's data room. Shows what files and content have been uploaded for a deal, along with their processing status. Args: deal_id: The deal ID (from sieve_deals or sieve_dataroom_add).
    Connector
  • Get a real-time overview of the Nigerian Stock Exchange (NGX). Returns the All Share Index (ASI), market capitalisation, trading volume, deals, advancers, and decliners. Use this when the user asks about the Nigerian stock market at a high level.
    Connector
  • Read the full AI-generated overview for an organization — a short briefing that distills recent changelog activity into themed sections. Returned with a generated-at timestamp and a stale warning if the overview is older than 30 days. Use this when the user wants the narrative summary for an org, not the raw release list.
    Connector
  • Validates a Brazilian PIX key format. PIX is Brazil's instant payment system. Use this tool when processing Brazilian payments, validating payment forms, or any fintech application handling Brazilian transfers. Supports all 4 PIX key types: CPF/CNPJ (tax numbers), email, phone number (+55 format), and EVP (random key UUID format). Returns the key type detected and whether the format is valid.
    Connector
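The four PIX key types map naturally onto format checks. A best-effort sketch only: the patterns below (11/14 digits for CPF/CNPJ, `+55` phone prefix, UUID shape for EVP) follow the entry's description, but they do not implement check-digit validation or the tool's actual rules.

```python
import re

def detect_pix_key_type(key):
    """Best-effort detection of the four documented PIX key types.
    The regexes are assumptions, not the validator's actual logic."""
    key = key.strip()
    # EVP: random key in UUID format
    if re.fullmatch(r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
                    r"[0-9a-fA-F]{4}-[0-9a-fA-F]{12}", key):
        return "evp"
    # Phone: +55 country code plus 10-11 national digits
    if re.fullmatch(r"\+55\d{10,11}", key):
        return "phone"
    if re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", key):
        return "email"
    digits = re.sub(r"\D", "", key)
    if len(digits) == 11:
        return "cpf"   # tax number; real validation also checks digits
    if len(digits) == 14:
        return "cnpj"
    return None
```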
  • Get a comprehensive overview of current market conditions across crypto and stocks. Shows top 5-10 instruments ranked by Martingale Score (0-5), with their Startingale readings.
    Connector
  • Get Container Freight Station (CFS) handling tariffs — charges for LCL (Less than Container Load) cargo consolidation and deconsolidation at port warehouses. Use this for LCL shipments to estimate warehouse handling costs. Returns per-unit handling rates, minimum charges, and storage fees at the specified port. Not relevant for FCL (Full Container Load) shipments. PAID: $0.05/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions. Returns: Array of { facility, service_type, cargo_type, rate_per_unit, unit, minimum_charge, currency }.
    Connector
  • Optimize an image: smart lossy compression (typically 60-80% size reduction), optional resize/upscale/format conversion, and AI-generated SEO metadata. Accepts absolute local file paths or remote URLs. In remote/API mode, only remote URLs are supported. Supported input formats: JPG, PNG, WebP, AVIF, GIF, SVG, ICO, HEIC, TIFF, BMP (max 50 MB). Supported output formats: JPG, PNG, WebP, AVIF, GIF, SVG, ICO. Each call costs 3 credits + 1 if SEO tags enabled. Animated GIFs are processed frame-by-frame (each frame optimized individually). Cost = frames × per-frame operations. Use confirm_gif_cost: true after reviewing the cost warning. Free tier: 20 credits/day, no signup. Log in with the login tool for more credits. Use status tool to check remaining credits before batch processing.
    Connector
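The credit arithmetic above is worth estimating before a batch run, especially for animated GIFs. The description leaves ambiguous whether the SEO surcharge is per call or per frame; the sketch below assumes 3 credits per frame plus a single SEO credit, and labels that an assumption:

```python
def estimate_credits(frames=1, seo_tags=False):
    """Rough cost model from the tool description: 3 credits per frame
    (animated GIFs multiply per-frame cost by frame count), plus 1 credit
    for SEO tags. Treating SEO as a per-call charge is an assumption."""
    return 3 * frames + (1 if seo_tags else 0)
```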
  • Complete one step of the Three Knots agent onboarding sequence. Knot 1: register your operator (who built you, your domain). Knot 2: describe yourself (purpose, capabilities, values, constraints) — you receive an Agent Identity Sketch back. Knot 3: reflect on an identity query (min 50 chars). Complete all three to be permanently registered on the Golden Thread.
    Connector
  • Apply targeted text changes to an existing asset (js, css, json, svg) without re-uploading the full file. Uses find/replace like patch_page. For binary assets (images, fonts), use upload_asset with overwrite: true instead.
    Connector
  • Generate a Markdown overview of all tasks grouped by status (in_progress, blocked, open, null, done) with completion percentages. Tasks without history appear under "Geen status". Includes recent activity from today and yesterday. Use this at the start of a session for a quick backlog overview, or to share current status.
    Connector
  • List score descriptors that explain what inspection scores mean. Scores cover three categories: Hygiene (food handling), Structural (building cleanliness/condition), and Confidence in Management. Each score level has a description like 'Very good', 'Good', 'Generally satisfactory', 'Improvement necessary', or 'Major improvement necessary'.
    Connector
  • Save development context (reasoning, decisions, trade-offs) for the current coding session. Use after completing a meaningful unit of work.
    PREFERRED FORMAT: Wrap content in <context> XML tags:
      <context>
        <title>Short title of what was done</title>
        <agent>your-agent-name (model)</agent>
        <tags>keyword1, keyword2, keyword3</tags>
        <story>
          Organize by phases. Write in first-person engineering journal style.
          Phase 1 — Title: What user asked, what you did, challenges faced, how you resolved them. Include back-and-forth with the user where it shaped the outcome.
        </story>
        <reasoning>
          Why you chose this approach.
          <decisions>
            - Decision — rationale
          </decisions>
          <rejected>
            - Alternative — why rejected
          </rejected>
          <tradeoffs>
            - Trade-off accepted — justification
          </tradeoffs>
        </reasoning>
        <files>
          path/to/file — new — Description
          path/to/other — modified — Description
        </files>
        <tools>MCPs and resources used</tools>
        <verification>Test/build results</verification>
        <risks>Open questions or risks</risks>
      </context>
    Required tags: title, story, reasoning. All others (including files) are optional. Context ID, repository, branch, date, and commits are auto-populated.
    CLI alternative: write content to a file, then run `git why save --file context.md`. Or pipe directly: `echo '<context>...</context>' | git why save`.
    Connector
  • Place a raster or SVG image on the board at (x, y) with explicit width/height in board pixels. `data_url` MUST be a `data:image/(png|jpeg|gif|webp|svg+xml);base64,...` string ≤ ~900 kB; hosted URLs are not accepted. Strongly recommended: also pass a tiny `thumb_data_url` (≤8 kB JPEG/PNG/WebP, ~64 px on the long edge) — it is embedded into the SVG preview so OTHER AI viewers (and you, on later `get_preview` calls) can actually see the image instead of a placeholder box.
    Connector
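Building a compliant `data_url` is mechanical. A sketch under the documented constraints (allowed MIME types, ~900 kB ceiling); whether the cap applies to the raw bytes or the finished base64 string is not stated, so the check below on the encoded string is an assumption:

```python
import base64

MAX_DATA_URL_BYTES = 900 * 1024  # documented ~900 kB ceiling

def make_data_url(image_bytes, mime="image/png"):
    """Build the data: URL the tool expects from raw image bytes,
    restricted to the documented image MIME types."""
    allowed = {"image/png", "image/jpeg", "image/gif", "image/webp", "image/svg+xml"}
    if mime not in allowed:
        raise ValueError(f"unsupported mime type: {mime}")
    url = f"data:{mime};base64,{base64.b64encode(image_bytes).decode('ascii')}"
    if len(url) > MAX_DATA_URL_BYTES:
        raise ValueError("encoded image exceeds the ~900 kB limit")
    return url
```

The same helper, pointed at a ~64 px thumbnail, can produce the recommended `thumb_data_url`.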