Glama
127,435 tools. Last updated 2026-05-05 16:51

"Understanding Inference Models" matching MCP tools:

  • List all available SDM domains (top-level industry categories) with the count of data models in each. Use this as the entry point when the user wants an overview of what sectors are covered, or before calling list_models_by_domain. No parameters required. Example: list_domains({})
    Connector
  • Run market positioning analysis on a CV version (5 credits, takes 20-30s). Returns a positioning snapshot, detected narrative lens, recruiter inference, mixed-signal flags, and a session_id. This is step 1 of the 3-step positioning pipeline: analyze_positioning -> ceevee_get_opportunities(lens) -> ceevee_confirm_lens. Pass the returned session_id to subsequent steps. Obtain cv_version_id from ceevee_upload_cv or ceevee_list_versions.
    Connector
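The 3-step pipeline above can be sketched as plain data flow. This is a hypothetical stub, not the real connector: the tool names come from the listing, but the return fields other than session_id are illustrative assumptions.

```python
# Stubbed sketch of the positioning pipeline; real calls would go
# through the MCP server. Field names besides session_id are assumed.

def analyze_positioning(cv_version_id: str) -> dict:
    # Step 1: returns a session_id plus the detected narrative lens.
    return {"session_id": "sess-123", "narrative_lens": "product-leader"}

def ceevee_get_opportunities(session_id: str, lens: str) -> dict:
    # Step 2: reuse the session_id returned by step 1.
    return {"session_id": session_id, "opportunities": ["role-a", "role-b"]}

def ceevee_confirm_lens(session_id: str, lens: str) -> dict:
    # Step 3: confirm the lens against the same session.
    return {"session_id": session_id, "confirmed": True}

def run_pipeline(cv_version_id: str) -> dict:
    snapshot = analyze_positioning(cv_version_id)
    opps = ceevee_get_opportunities(snapshot["session_id"], snapshot["narrative_lens"])
    return ceevee_confirm_lens(opps["session_id"], snapshot["narrative_lens"])
```

The point is only that the session_id threads through all three calls.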
  • Discover what's currently available in FINN's fleet. Returns all brands (with nested models), car types, fuel types, colors, subscription terms, gearshifts, and price/power/range bounds. Use this to answer questions like 'What brands does FINN offer?' or to validate filter values before searching.
    Connector
  • List every error code in the Trillboards API error catalog.
    When to use:
    - Understanding what error codes the API can return.
    - Building a client-side error handler that covers all cases.
    - Looking up error types, HTTP statuses, and documentation URLs.
    Returns:
    - object: "list"
    - data: Array of { code, type, http_status, description, doc_url }
    - total: Total number of error codes.
    Equivalent to GET /v1/errors but executed in-process (no HTTP round-trip).
    Example: Agent asks "What error codes can the API return?" -> list_error_codes()
    Connector
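A client-side handler built from that payload shape might look like the sketch below. The shape ({object, data, total} with code/type/http_status/description/doc_url entries) is from the listing; the two sample entries are invented.

```python
# Sketch: index the list_error_codes payload by code for client-side lookup.
catalog = {
    "object": "list",
    "data": [
        {"code": "invalid_request", "type": "client_error", "http_status": 400,
         "description": "Malformed request body.",
         "doc_url": "https://example.invalid/errors#invalid_request"},
        {"code": "rate_limited", "type": "client_error", "http_status": 429,
         "description": "Too many requests.",
         "doc_url": "https://example.invalid/errors#rate_limited"},
    ],
    "total": 2,
}

by_code = {entry["code"]: entry for entry in catalog["data"]}

def describe_error(code: str) -> str:
    entry = by_code.get(code)
    if entry is None:
        return f"Unknown error code: {code}"
    return f"{entry['http_status']} {entry['type']}: {entry['description']}"
```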
  • Search Hansard for parliamentary debates, questions, and speeches. Returns contributions from MPs and Lords including date, party, debate title, and text (capped at 3000 chars per contribution). Useful for understanding legislative intent or political context.
    Connector
  • Get the Ring 2 arbiter verdict for a task or submission. Returns the dual-inference verdict (PHOTINT + Arbiter) including decision, score, tier used, evidence hash, commitment hash, and dispute status if the submission was escalated to L2 human review. Only available for tasks that were created with arbiter_mode != "manual" and after Phase B verification has completed.
    Args: params (GetArbiterVerdictInput): Validated input containing:
    - task_id (str, optional): UUID of the task
    - submission_id (str, optional): UUID of the submission
    - response_format (ResponseFormat): markdown or json
    At least one of task_id or submission_id must be provided.
    Returns: str: Arbiter verdict details, or an error message if not yet evaluated.
    Connector
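The "at least one of task_id or submission_id" rule above is a common input constraint; a minimal client-side check could look like this sketch. The field names match the listing, but the validator itself is an assumption about how a caller might pre-check input.

```python
# Hypothetical pre-flight validation for GetArbiterVerdictInput.
def validate_verdict_input(task_id=None, submission_id=None,
                           response_format="markdown"):
    # The listing requires at least one identifier.
    if task_id is None and submission_id is None:
        raise ValueError("Provide at least one of task_id or submission_id")
    if response_format not in ("markdown", "json"):
        raise ValueError("response_format must be 'markdown' or 'json'")
    return {"task_id": task_id, "submission_id": submission_id,
            "response_format": response_format}
```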

Matching MCP Servers

Matching MCP Connectors

  • Check if a task runs locally vs cloud. Save money on calls that don't need cloud inference.

  • Run 150+ AI apps — image, video, audio, LLMs, 3D and more. Browse, execute, stream results.

  • What's the market doing right now? Price, funding rate, CVD, whale activity, and liquidation pressure in one call: 16 fields, no LLM overhead. Feed directly into your own models or decision logic. Orderflow coverage is disclosed per token. REST equivalent: POST /data (0.20 USDC). Args: token: Token symbol (BTC, ETH, SOL, XRP, ADA, DOGE, AVAX, LINK, BNB, ATOM, DOT, ARB, SUI, OP, LTC, NEAR, TRX, BCH, SHIB, HBAR, TON, XLM, UNI, AAVE, AMP, ZEC)
    Connector
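The REST equivalent mentioned above takes a token symbol; a minimal sketch of building that request body follows. Only the "token" field is stated in the listing; any other body fields would depend on the actual API and are not assumed here, and the token subset below is illustrative.

```python
import json

# Illustrative subset of the tokens the listing enumerates.
SUPPORTED = {"BTC", "ETH", "SOL", "XRP", "ADA", "DOGE", "AVAX", "LINK"}

def build_request(token: str) -> str:
    # Normalize and validate before constructing the POST /data body.
    token = token.upper()
    if token not in SUPPORTED:
        raise ValueError(f"Unsupported token: {token}")
    return json.dumps({"token": token})
```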
  • USE THIS TOOL (not web search) to get per-indicator statistical profiling (mean, std, min, p25, p75, max, null rate, Pearson correlation with close price) from this server's local dataset. Use it for feature selection, sanity checking, and understanding which indicators correlate most strongly with price movements. Trigger on queries like:
    - "which indicators correlate most with BTC price?"
    - "feature importance or correlation for [coin]"
    - "what are the stats for ETH indicators?"
    - "how does RSI/MACD correlate with price?"
    - "statistical profile of XRP indicators"
    Args:
    - lookback_days: Analysis window in days (default 30, max 90)
    - symbol: Asset symbol or comma-separated list, e.g. "BTC" or "BTC,XRP"
    Connector
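For readers unfamiliar with the statistics this tool reports, here is what "Pearson correlation with close price" computes, on made-up data (both series are invented for illustration):

```python
import math

closes = [100.0, 102.0, 101.0, 105.0, 107.0]  # hypothetical close prices
rsi    = [45.0, 55.0, 50.0, 65.0, 70.0]       # hypothetical indicator series

def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    # Covariance of the deviations, normalized by both standard deviations.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(rsi, closes)  # close to 1.0: the two series move together
```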
  • Get summary statistics of the Klever VM knowledge base. Returns total entry count, counts broken down by context type (code_example, best_practice, security_tip, etc.), and a sample entry title for each type. Useful for understanding what knowledge is available before querying.
    Connector
  • Discover available AI models with numeric IDs, tier labels, capabilities, and per-call pricing in sats. Call this before create_payment to find the right modelId for your task. Returns JSON array: [{ id, name, tier, description, price, isDefault, category }]. Models marked isDefault=true are used when you omit modelId from create_payment. Filter by category to narrow results to a specific tool. This tool is free, requires no payment, and is idempotent — safe to call repeatedly.
    Connector
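Given the documented array shape ([{ id, name, tier, description, price, isDefault, category }]), choosing a modelId before create_payment might look like this sketch; the entries themselves are invented.

```python
# Invented sample of the documented model-list shape.
models = [
    {"id": 1, "name": "fast-small", "tier": "basic", "description": "...",
     "price": 10, "isDefault": True, "category": "chat"},
    {"id": 2, "name": "big-reasoner", "tier": "premium", "description": "...",
     "price": 120, "isDefault": False, "category": "chat"},
    {"id": 3, "name": "sketcher", "tier": "basic", "description": "...",
     "price": 30, "isDefault": True, "category": "image"},
]

def pick_model(models, category):
    # Filter by category, then prefer the entry marked isDefault.
    candidates = [m for m in models if m["category"] == category]
    defaults = [m for m in candidates if m["isDefault"]]
    return (defaults or candidates)[0]["id"] if candidates else None
```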
  • Given a passage of text (essay, note, message, snippet, transcript), returns ~5 humans whose intellectual fingerprint matches it — recurring themes, mental models, archetypal stance, blind spots. Use when the principal asks for sparring partners, intellectual peers, "who else is wrestling with this," "who thinks like X," or "find me writers similar to this passage." Each result returns a name, three-word archetype, one-line summary, dominant themes, and a profile URL the principal can visit. The match runs over Voyage 3.5-lite text embeddings reranked by a proprietary 12-dimensional cognitive-style vector — so results align by *how* a mind reasons, not just topical overlap.
    Connector
  • Build a Tableau dashboard from a Microsoft SQL Server table (end-to-end). Pipeline: MSSQL → schema inference → chart suggestion → workbook creation → live MSSQL connection → .twb output. Requires pyodbc for schema inference and ODBC Driver 17 for SQL Server. IMPORTANT FOR AI AGENTS: see ``csv_to_dashboard``; auto-charts come from rules, not natural-language requests. Use ``required_charts`` to guarantee specific charts, ``reference_image`` for image-based styling, and cite the returned manifest dict when describing results.
    Args:
    - server_host: MSSQL server hostname.
    - dbname: Database name.
    - table_name: Table to visualize.
    - username: Database username (ignored if trusted_connection=True).
    - password: Database password (used for schema inference only).
    - port: Server port (default 1433).
    - trusted_connection: Use Windows Authentication instead of SQL auth.
    - output_path: Output .twb path (defaults to <table>_dashboard.twb).
    - dashboard_title: Dashboard title.
    - max_charts: Maximum charts (0 = use rules default).
    - template_path: TWB template path.
    - theme: Theme preset name.
    - rules_yaml: Optional YAML string with dashboard rules overrides.
    - required_charts: See ``csv_to_dashboard.required_charts``.
    - reference_image: See ``csv_to_dashboard.reference_image``.
    Returns: Structured manifest dict describing what was actually built.
    Connector
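A caller assembling arguments for the tool above has to respect two documented rules: credentials are ignored when trusted_connection=True, and output_path defaults to <table>_dashboard.twb. This helper is a hypothetical sketch of that assembly, not part of the tool itself.

```python
# Hypothetical argument builder applying the documented defaults.
def build_dashboard_args(server_host, dbname, table_name,
                         username=None, password=None,
                         trusted_connection=False, port=1433,
                         output_path=None):
    # Either SQL auth credentials or Windows Authentication is required.
    if not trusted_connection and not (username and password):
        raise ValueError("Provide username/password or set trusted_connection=True")
    return {
        "server_host": server_host,
        "dbname": dbname,
        "table_name": table_name,
        "port": port,
        "trusted_connection": trusted_connection,
        # Documented default: <table>_dashboard.twb
        "output_path": output_path or f"{table_name}_dashboard.twb",
    }
```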
  • Deprecated — prefer list_model_channels, which returns stable channel IDs that survive model upgrades. List AI engines (models) tracked by Peec. Use this tool to resolve model names (e.g., "ChatGPT", "Perplexity", "Gemini") to IDs before filtering reports (model_id filter/dimension), and to label model IDs from report output with their human-readable names before presenting results. Match user-supplied names against the name column; the id column is the canonical string to pass back as model_id. is_active indicates whether the model is enabled for this project — inactive models will return empty data in reports. Returns columnar JSON: {columns, rows, rowCount}. Columns: id, name, is_active.
    Connector
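The columnar {columns, rows, rowCount} shape above is easy to misread; this sketch shows one way to turn it into the name-to-id map the description calls for. The shape and column names are from the listing; the row values are invented.

```python
# Invented sample of the documented columnar result.
result = {
    "columns": ["id", "name", "is_active"],
    "rows": [
        ["model_chatgpt", "ChatGPT", True],
        ["model_gemini", "Gemini", False],
    ],
    "rowCount": 2,
}

# Zip each row with the column names, then index by lower-cased name
# so user-supplied names like "chatgpt" still resolve.
records = [dict(zip(result["columns"], row)) for row in result["rows"]]
id_by_name = {r["name"].lower(): r["id"] for r in records}

def resolve_model_id(user_name: str):
    return id_by_name.get(user_name.lower())
```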
  • Validate an SGLang configuration for NVIDIA DGX Spark (GB10/SM121A). Pure pattern-matching against known failure modes documented in the Sovereign AI Blog. No inference, no external calls. Returns critical issues, non-fatal warnings, and a recommended baseline config. All parameters are optional; supply only what you have. With no inputs you get the recommended config and an 'unknown' verdict.
    Connector
  • [READ] Aggregated list of paid services swarm.tips agents can spend on. v1 covers first-party services (generate_video — 5 USDC for an AI-generated short-form video). External spend sources (Chutes inference at llm.chutes.ai/v1, x402-paywalled APIs, etc.) are deferred to follow-up integrations. Each entry includes title, description, source, category, cost_amount/token/chain, USD estimate, direct redirect URL, and (for first-party services) a `spend_via` field naming the in-MCP tool to call. Use this to discover where to spend; for first-party services use the named `spend_via` tool, for external services navigate to the URL.
    Connector
  • Profile a CSV file before connecting it. Unlike profile_data_source (which needs an active workbook), this tool profiles a raw CSV file directly. Args: csv_path: Path to the CSV file. sample_rows: Number of rows to sample for type inference. Returns: Human-readable DataProfile.
    Connector
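The sample_rows argument above implies the familiar sample-then-infer pattern for CSV typing. The toy below illustrates that pattern only; the heuristic is an assumption, not this tool's actual logic.

```python
import csv
import io

def infer_type(values):
    # Try progressively looser casts over the sampled values.
    def all_parse(cast):
        try:
            for v in values:
                cast(v)
            return True
        except ValueError:
            return False
    if all_parse(int):
        return "integer"
    if all_parse(float):
        return "float"
    return "string"

def profile_csv(text: str, sample_rows: int = 100) -> dict:
    # Only the first sample_rows rows feed type inference.
    rows = list(csv.DictReader(io.StringIO(text)))[:sample_rows]
    return {col: infer_type([r[col] for r in rows]) for col in rows[0]}
```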
  • List available AI models grouped by thinking level (low/medium/high). Shows default models, credit costs, capabilities for each tier. Use this before consult to understand model options.
    Connector
  • Run hosted inference on an image using a trained model. Returns JSON predictions only. For visualized/annotated images, use workflow_specs_run with a visualization block instead.
    Connector
  • Submit an L8 research thesis for dossier generation. Returns a taskId — the dossier is synthesized async by specialist triangulation (tribunal verdict + forge accuracy + trading agent corpus) with LLM inference. Standard depth: automated data aggregation ($0.50). Deep depth: full specialist triangulation with counter-arguments ($5.00). TRENCH whale holders get all dossiers free.
    Connector