Glama
114,467 tools. Last updated 2026-04-21 14:35
  • Read aggregated agent opinions about a service. Shows what different agent types (Claude, GPT, Gemini) think about selection criteria, frustrations, and recommendations. Essential for consulting reports.
    Connector
  • Record a patient's consent confirmation for a specific consent document. The agent must have already presented the full consent text (from consent_text) to the patient and received explicit confirmation. Required parameters: intake_id, consent_id, the patient's exact confirmation text (e.g. 'I agree'), consent method ('ai_agent_conversational'), the AI platform name ('chatgpt', 'claude', 'gemini'), and a session/conversation ID for audit trail. Returns a consent record with timestamp, audit trail details, and the list of remaining consents still needed. All consent records are retained for 10 years per HIPAA requirements. Requires authentication.
    Connector
  • List chats (individual AI responses) for a project over a date range. Each chat is produced by running one prompt against one AI engine on a given date. Filters: - brand_id: only chats that mentioned the given brand - prompt_id: only chats produced by the given prompt - model_id: only chats from the given AI engine (chatgpt-scraper, gpt-4o, gpt-4o-search, gpt-3.5-turbo, llama-sonar, perplexity-scraper, sonar, gemini-2.5-flash, gemini-scraper, google-ai-overview-scraper, google-ai-mode-scraper, llama-3.3-70b-instruct, deepseek-r1, claude-3.5-haiku, claude-haiku-4.5, claude-sonnet-4, grok-scraper, microsoft-copilot-scraper, grok-4) Use the returned chat IDs with get_chat to retrieve full message content, sources, and brand mentions. Returns columnar JSON: {columns, rows, rowCount}. Columns: id, prompt_id, model_id, date.
    Connector
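Several of the report tools in this listing return the same columnar JSON shape, {columns, rows, rowCount}, rather than a list of objects. A minimal sketch of converting that shape into per-row dicts keyed by column name; the sample payload below is illustrative, not real API output:

```python
# Turn columnar JSON ({columns, rows, rowCount}) into a list of dicts.
# The payload here is a made-up example matching the documented columns.
response = {
    "columns": ["id", "prompt_id", "model_id", "date"],
    "rows": [
        ["chat-1", "p-9", "gpt-4o", "2026-04-01"],
        ["chat-2", "p-9", "claude-sonnet-4", "2026-04-01"],
    ],
    "rowCount": 2,
}

# zip each row with the column names so values are addressable by name
chats = [dict(zip(response["columns"], row)) for row in response["rows"]]
assert len(chats) == response["rowCount"]
```

The resulting chat IDs (the "id" field of each dict) are what get_chat expects for retrieving full message content.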
  • Compare the cost of running an agent task across all major AI models including Claude, GPT, Gemini, Llama, and Mistral. Returns a comparison table with per-call, per-run, and per-day costs plus optimization tips. No API key needed.
    Connector
  • Check a domain's GEO (Generative Engine Optimization) score — how well the site is optimized for AI search engines like ChatGPT, Gemini, Claude, and Perplexity. Returns three scores (Technical Readiness, Entity Readiness, Answer Readiness), AI crawler access status, structured data analysis, and prioritized recommendations.
    Connector

Matching MCP Servers

  • Enable Claude (or any other LLM) to interactively debug your code (set breakpoints and evaluate expressions in a stack frame). It's language-agnostic, assuming debugger console support and a valid launch.json for debugging in VSCode.
    security: - | license: A | quality: -
    Last updated | 2 | 507 | MIT
  • Apple

Matching MCP Connectors

  • AI agents publish bounties for real-world tasks. Gasless USDC payments via x402.

  • Execution Market is the Universal Execution Layer — infrastructure that converts AI intent into physical action. AI agents publish bounties for real-world tasks (verify a store is open, photograph a location, notarize a document, deliver a package). Human executors browse, accept, and complete these tasks with verified evidence (GPS-tagged photos, documents, data). Upon approval, payment is released instantly and gaslessly via the x402 protocol in USDC across 8 EVM chains. Key cap

  • List all scheduled tasks for a project, showing their status, next run time, and last execution.
    Connector
  • Search tobacco problem reports by product type or health problem keyword. Date range in YYYYMMDD format. Returns reports including tobacco product details and reported health problems.
    Connector
  • List all 42+ AI tools monitored by tickerr.ai — ChatGPT, Claude, Gemini, Cursor, GitHub Copilot, Perplexity, DeepSeek, Groq, Fireworks AI, and more.
    Connector
  • Get live operational status, uptime percentage, response time, and per-model API inference latency (p50/p95 TTFT in ms) for any AI tool. Checks every 5 minutes from independent infrastructure. Latency data returns a per-model breakdown for tools with inference monitoring (Claude, ChatGPT, Gemini, Groq, Mistral, Cerebras, Cohere, Grok, OpenRouter).
    Connector
  • Multi-step agentic reasoning using Claude Sonnet. Breaks down complex goals, reasons through each sub-task, and produces a comprehensive result. Best for complex tasks requiring multiple steps of reasoning.
    Connector
  • Generate a personalized outreach message for a prospect using Claude AI sales brain. Returns subject, body, personalization points, and alternative subjects.
    Connector
  • Configure user preferences for Toreva strategy execution. Set default constraints, preferred protocols, risk tolerance, and notification preferences.
    Connector
  • Tool definitions for Claude, OpenAI, MCP, and A2A agent frameworks. Delegates to the upstream package.
    Connector
  • Explicitly close a sncro session — "Finished With Engines". Call this when you are done debugging and will not need the sncro tools again in this conversation. After this returns, all sncro tool calls on this key will refuse with a SESSION_CLOSED message — that is your signal to stop trying to use them and not apologise about it. Use it when: - The original problem is solved and the conversation has moved on - The user explicitly says "we're done with sncro for now" - You're entering a long stretch of work that won't need browser visibility The session can't be reopened. If you need browser visibility later, ask the user whether to start a new one with create_session.
    Connector
  • Schedule a snapshot for future execution. Requires: API key with write scope. Max 3 pending schedules per site. Args: slug: Site identifier scheduled_at: ISO 8601 datetime (must be in the future) description: Optional description (max 200 chars) Returns: {"id": "uuid", "scheduled_at": "iso8601", "status": "scheduled"} Errors: VALIDATION_ERROR: Invalid datetime, not in future, or too many pending
    Connector
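A client-side sketch of the argument rules stated above (future ISO 8601 datetime, optional description capped at 200 chars). The build_schedule_request helper is hypothetical and does not call the endpoint; it only shows the payload shape implied by the description:

```python
from datetime import datetime, timedelta, timezone

def build_schedule_request(slug, scheduled_at, description=None):
    # Mirror the documented VALIDATION_ERROR conditions before sending.
    if scheduled_at <= datetime.now(timezone.utc):
        raise ValueError("VALIDATION_ERROR: scheduled_at must be in the future")
    if description is not None and len(description) > 200:
        raise ValueError("VALIDATION_ERROR: description exceeds 200 chars")
    req = {"slug": slug, "scheduled_at": scheduled_at.isoformat()}
    if description:
        req["description"] = description
    return req

req = build_schedule_request(
    "my-site",
    datetime.now(timezone.utc) + timedelta(hours=1),
    "nightly check",
)
```

The server additionally enforces the write-scope API key and the three-pending-schedules-per-site limit, which a client cannot check locally.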
  • Register as an agent to get an API key for authenticated submissions. Registration is open — no approval required. Returns an API key that authenticates your proposals and tracks your contribution history. IMPORTANT: Save the returned api_key immediately. It is shown only once and cannot be retrieved again. Args: agent_name: A name identifying this agent instance (2-100 chars) model: The model ID (e.g., "claude-opus-4-6", "gpt-4o")
    Connector
  • Verify that two execution replay contracts represent the same deterministic result. This is the programmatic proof of GeodesicAI's core promise: same input + same rules = same result, every time. Given two replay contracts (e.g. from the original execution and a re-run), this tool compares all component hashes and reports whether the executions are byte-identical. Use this to: - Prove to an auditor that a decision from March 3rd matches a re-run today. - Detect when a rule change has altered execution behavior (input hash matches but canonical trace hash differs → the rules diverged). - Confirm a Blueprint migration didn't change any observable outcomes. Args: api_key: GeodesicAI API key (starts with gai_) contract_a: A replay contract dict (the `replay_contract` field from a prior validate/execute_task response) contract_b: Another replay contract dict to compare against contract_a Returns: replay_match: bool — True if the top-level replay_hash matches (fully identical) contract_version_match: bool matches: dict of field_name → value, for every field that agreed mismatches: dict of field_name → {expected, actual}, for every field that disagreed summary: plain-English one-liner describing the result Interpretation of mismatches: - input_payload_hash: the two runs were fed different data - template_version: the Blueprint was upgraded between runs - solver_registry_hash: the platform itself changed between runs - canonical_trace_hash: same inputs and rules but different execution path (should never happen under determinism; indicates a platform bug) - graph_hash: DAG topology changed between runs
    Connector
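The comparison the tool performs can be sketched locally: match two replay-contract dicts field by field and split the result into matches and mismatches. This is a hedged approximation, not the tool's actual implementation; the field names follow the description above and the two contracts are made up:

```python
# Field-by-field comparison of two replay-contract dicts, returning the
# same general shape as the documented response (replay_match, matches,
# mismatches). Hash values here are placeholders.
def compare_contracts(a, b):
    fields = set(a) | set(b)
    matches = {f: a[f] for f in fields if a.get(f) == b.get(f)}
    mismatches = {
        f: {"expected": a.get(f), "actual": b.get(f)}
        for f in fields
        if a.get(f) != b.get(f)
    }
    return {
        "replay_match": a.get("replay_hash") == b.get("replay_hash"),
        "matches": matches,
        "mismatches": mismatches,
    }

a = {"replay_hash": "h1", "input_payload_hash": "x", "template_version": "v3"}
b = {"replay_hash": "h2", "input_payload_hash": "x", "template_version": "v4"}
result = compare_contracts(a, b)
# Per the interpretation guide above, a template_version mismatch with a
# matching input_payload_hash suggests the Blueprint was upgraded between runs.
```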
  • Add one or more tasks to an event (task list). Supports bulk creation. IMPORTANT: Set response_type correctly — use "text" for info collection (names, phones, emails, notes), "photo" for visual verification (inspections, serial numbers, damage checks), "checkbox" only for simple confirmations. NOTE: To dispatch tasks to the Claude Code agent running on Mike's PC, use tascan_dispatch_to_agent instead — it routes directly to the agent's inbox with zero configuration needed.
    Connector
  • [tourradar] Search for tours by title using AI-powered semantic search. Returns a list of matching tour IDs and titles. Use this when you need to look up a tour by name. When you know the tour ID, use the b2b-tour-details tool to display details about a specific tour.
    Connector
  • Semantic search across the Civis knowledge base of agent build logs. Returns the most relevant solutions for a given problem or query. Use the get_solution tool to retrieve the full solution text for a specific result. Tip: include specific technology names in your query for better results.
    Connector
  • Retrieve detailed information for a specific FDIC-insured institution using its FDIC Certificate Number (CERT). Use this when you know the exact CERT number for an institution. To find a CERT number, use fdic_search_institutions first. Args: - cert (number): FDIC Certificate Number (e.g., 3511 for Bank of America) - fields (string, optional): Comma-separated list of fields to return Returns a detailed institution profile suitable for concise summaries, with structured fields available for exact values when needed.
    Connector
  • Discover AXIS install metadata, pricing, and shareable manifests for commerce-capable agents. Free, no auth, and no mutation beyond read access. Example: call before wiring AXIS into Claude Desktop, Cursor, or VS Code. Use this when you need onboarding and ecosystem setup details. Use search_and_discover_tools instead for keyword routing or discover_agentic_purchasing_needs for purchasing-task triage.
    Connector
  • ⚠️ MANDATORY FIRST STEP - Call this tool BEFORE using any other Canvs tools! Returns comprehensive instructions for creating whiteboards: tool selection strategy, iterative workflow, and examples. Following these instructions ensures correct diagrams.
    Connector
  • Analyze deposit market share and concentration for an MSA or city market using FDIC Summary of Deposits (SOD) data. Computes market share for all institutions in a geographic market, ranks them by deposits, and calculates the Herfindahl-Hirschman Index (HHI) for market concentration analysis per DOJ/FTC merger guidelines. Two entry modes: - MSA market: provide msa as the numeric MSABR code (e.g., msa: 19100 for Dallas-Fort Worth-Arlington, msa: 42660 for Seattle-Tacoma-Bellevue). Use fdic_search_sod to look up MSABR codes. - City market: provide city (branch city name, e.g., "Austin") and state (two-letter code, e.g., "TX"). Output includes: - Market overview with total deposits, institution count, and HHI classification - Optional highlighted institution showing rank and share (provide cert) - Top institutions ranked by deposit market share - Structured JSON for programmatic consumption Requires at least one of: msa (numeric MSABR code), or city + state.
    Connector
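The HHI computation behind this report is standard: square each institution's market share (expressed in percent) and sum. A worked example with made-up deposit figures:

```python
# Herfindahl-Hirschman Index from deposit market shares.
# Deposit figures (in $M) are illustrative only.
deposits = {"Bank A": 500.0, "Bank B": 300.0, "Bank C": 200.0}

total = sum(deposits.values())
shares = {name: 100 * d / total for name, d in deposits.items()}  # percent

# HHI = sum of squared percentage shares: 50^2 + 30^2 + 20^2 = 3800
hhi = sum(s ** 2 for s in shares.values())
```

Under the 2010 DOJ/FTC Horizontal Merger Guidelines thresholds, an HHI above 2500 is classified as highly concentrated, so this three-bank market would fall in that band.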
  • Roll (regenerate) the personal proxy credential for a firewall. This invalidates the previous password and returns a new one with ready-to-use configuration commands. Only call this when the user explicitly needs new credentials — it will break any existing package manager configuration using the old password.
    Connector
  • Route a computable problem to the appropriate deterministic solver. No Blueprint required. No LLM tokens consumed. Solves problems across 9 domains: - Geometry: distances between points, midpoints, angles between vectors, triangle areas - Algebra: matrix determinants, matrix inverses, linear systems - Logic: boolean evaluation, implication chains, deduction from premises - Inference: derive conclusions from stated premises - Consistency: check whether a set of statements is internally consistent - PDE: partial differential equations, Laplacians, diffusion equations - Spectral: FFT, eigenvalue computation, frequency analysis - Physics/DRM: curvature flow, field stability, manifold analysis, energy diagnostics - Semantic: lightweight grounding and intent routing (bridge solver) If the problem cannot be solved deterministically, returns not_deterministic: true. The agent should then use its own LLM for that problem. Args: api_key: GeodesicAI API key (starts with gai_) query: The problem to solve (natural language or symbolic notation)
    Connector
  • Starts a crawl job on a website and extracts content from all pages. **Best for:** Extracting content from multiple related pages, when you need comprehensive coverage. **Not recommended for:** Extracting content from a single page (use scrape); when token limits are a concern (use map + batch_scrape); when you need fast results (crawling can be slow). **Warning:** Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control. **Common mistakes:** Setting limit or maxDiscoveryDepth too high (causes token overflow) or too low (causes missing pages); using crawl for a single page (use scrape instead). Using a /* wildcard is not recommended. **Prompt Example:** "Get all blog posts from the first two levels of example.com/blog." **Usage Example:** ```json { "name": "firecrawl_crawl", "arguments": { "url": "https://example.com/blog/*", "maxDiscoveryDepth": 5, "limit": 20, "allowExternalLinks": false, "deduplicateSimilarURLs": true, "sitemap": "include" } } ``` **Returns:** Operation ID for status checking; use firecrawl_check_crawl_status to check progress. **Safe Mode:** Read-only crawling. Webhooks and interactive actions are disabled for security.
    Connector
  • Get a report on source domain visibility and citations across AI search engines. Results are aggregated for the entire date range by default. Use the "date" dimension for daily breakdowns. Returns columnar JSON: {columns, rows, rowCount}. Each row is an array of values matching column order. Columns:
    - domain: the source domain (e.g. "example.com")
    - classification: domain type — CORPORATE (official company sites), EDITORIAL (news, blogs, magazines), INSTITUTIONAL (government, education, nonprofit), UGC (social media, forums, communities), REFERENCE (encyclopedias, documentation), COMPETITOR (direct competitors), OWN (the user's own domains), OTHER, or null
    - retrieved_percentage: 0–1 ratio — fraction of chats that included at least one URL from this domain. 0.30 means 30% of chats.
    - retrieval_rate: average number of URLs from this domain pulled per chat. Can exceed 1.0 — values above 1.0 mean multiple pages from the same domain are retrieved per conversation.
    - citation_rate: average number of inline citations when this domain is retrieved. Can exceed 1.0 — higher values indicate stronger content authority.
    - mentioned_brand_ids: array of brand IDs mentioned alongside URLs from this domain (may be empty)
    When dimensions are selected, rows also include the relevant dimension columns: prompt_id, model_id, tag_id, topic_id, chat_id, date, country_code. Dimensions explained:
    - prompt_id: individual search queries/prompts
    - model_id: AI search engine (e.g. chatgpt-scraper, gpt-4o, gpt-4o-search, gpt-3.5-turbo, llama-sonar, perplexity-scraper, sonar, gemini-2.5-flash, gemini-scraper, google-ai-overview-scraper, google-ai-mode-scraper, llama-3.3-70b-instruct, deepseek-r1, claude-3.5-haiku, claude-haiku-4.5, claude-sonnet-4, grok-scraper, microsoft-copilot-scraper, grok-4)
    - tag_id: custom user-defined tags
    - topic_id: topic groupings
    - date: (YYYY-MM-DD format)
    - country_code: country (ISO 3166-1 alpha-2, e.g. "US", "DE")
    - chat_id: individual AI chat/conversation ID
    Filters use {field, operator, values} where operator is "in" or "not_in". Filterable fields: model_id, tag_id, topic_id, prompt_id, domain, url, country_code, chat_id, mentioned_brand_id. Additional filters:
    - mentioned_brand_count: {field: "mentioned_brand_count", operator: "gt"|"gte"|"lt"|"lte", value: <number>} — filter by number of unique brands mentioned.
    - gap: {field: "gap", operator: "gt"|"gte"|"lt"|"lte", value: <number>} — gap analysis filter. Excludes domains where the project's own brand is mentioned, and filters by the number of competitor brands present. Example: {field: "gap", operator: "gte", value: 2} returns domains where the own brand is absent but at least 2 competitors are mentioned.
    Connector
  • Search for data rows in a dataset using full-text search (query) or precise column filters. Returns matching rows and a filtered view URL. Use to retrieve individual rows. Do NOT use to compute statistics — use calculate_metric or aggregate_data instead.
    Connector
  • Get a report on source URL visibility and citations across AI search engines. Results are aggregated for the entire date range by default. Use the "date" dimension for daily breakdowns. Returns columnar JSON: {columns, rows, rowCount}. Each row is an array of values matching column order. Columns:
    - url: the full source URL (e.g. "https://example.com/page")
    - classification: page type — HOMEPAGE, CATEGORY_PAGE, PRODUCT_PAGE, LISTICLE (list-structured articles), COMPARISON (product/service comparisons), PROFILE (directory entries like G2 or Yelp), ALTERNATIVE (alternatives-to articles), DISCUSSION (forums, comment threads), HOW_TO_GUIDE, ARTICLE (general editorial content), OTHER, or null
    - title: page title or null
    - channel_title: channel or author name (e.g. YouTube channel, subreddit) or null
    - citation_count: total number of explicit citations across all chats
    - retrievals: total number of times this URL was used as a source, regardless of whether it was cited
    - citation_rate: average number of inline citations per chat when this URL is retrieved. Can exceed 1.0 — higher values indicate more authoritative content.
    - mentioned_brand_ids: array of brand IDs mentioned alongside this URL (may be empty)
    When dimensions are selected, rows also include the relevant dimension columns: prompt_id, model_id, tag_id, topic_id, chat_id, date, country_code. Dimensions explained:
    - prompt_id: individual search queries/prompts
    - model_id: AI search engine (e.g. chatgpt-scraper, gpt-4o, gpt-4o-search, gpt-3.5-turbo, llama-sonar, perplexity-scraper, sonar, gemini-2.5-flash, gemini-scraper, google-ai-overview-scraper, google-ai-mode-scraper, llama-3.3-70b-instruct, deepseek-r1, claude-3.5-haiku, claude-haiku-4.5, claude-sonnet-4, grok-scraper, microsoft-copilot-scraper, grok-4)
    - tag_id: custom user-defined tags
    - topic_id: topic groupings
    - date: (YYYY-MM-DD format)
    - country_code: country (ISO 3166-1 alpha-2, e.g. "US", "DE")
    - chat_id: individual AI chat/conversation ID
    Filters use {field, operator, values} where operator is "in" or "not_in". Filterable fields: model_id, tag_id, topic_id, prompt_id, domain, url, country_code, chat_id, mentioned_brand_id. Additional filters:
    - mentioned_brand_count: {field: "mentioned_brand_count", operator: "gt"|"gte"|"lt"|"lte", value: <number>} — filter by number of unique brands mentioned.
    - gap: {field: "gap", operator: "gt"|"gte"|"lt"|"lte", value: <number>} — gap analysis filter. Excludes URLs where the project's own brand is mentioned, and filters by the number of competitor brands present. Example: {field: "gap", operator: "gte", value: 2} returns URLs where the own brand is absent but at least 2 competitors are mentioned.
    Connector
  • Upload an asset (image, font, PDF, etc). Provide either content (base64) OR source_url (public HTTPS URL) — not both. Using source_url is recommended for images from DALL-E, Unsplash, or other URLs — it saves tokens and is more reliable. Set overwrite: true to replace an existing asset.
    Connector
  • Generate industry-standard documentation for any project using SUMA graph memory. This tool does NOT fabricate. It retrieves real war stories, architecture rulings, and deployment facts from the K-WIL graph, then uses Gemini to render them as professional documentation. The graph IS the source of truth — suma_doc makes it readable. Why this beats a generic doc generator: Generic: "Here is how to install." (stateless, stale, hallucinated) suma_doc: "We chose REST over MCP because [Architect Ruling Apr 5]. Here is how it works in production: [real deployment from graph]. Avoid X — we tried it and [root cause]." Args: prompt: What documentation to generate. Be specific. Examples: "Write a README for the SUMA MCP Server API" "Generate an ARCHITECTURE.md explaining the ring_search algorithm" "Write a CHANGELOG entry for today's /api/wakeup deployment" "Create an API reference for /api/ingest and /api/search" "Write an onboarding guide for a new backend engineer joining the QMS team" project: Optional filter to narrow graph search to a specific product. Examples: "suma-mcp", "squad-qms", "squad-ghostgate", "squad-companion" doc_type: Optional hint to shape output format. "readme" → GitHub README with badges + sections "architecture" → Design doc with decisions, trade-offs, diagrams description "api_reference" → Endpoint table + request/response examples "changelog" → Conventional Commits format, grouped by type "onboarding" → Step-by-step guide for a new engineer "runbook" → Ops runbook with commands, failure modes, escalation If omitted, Gemini infers the best format from the prompt. Returns: document: The generated documentation (markdown) nodes_used: Number of graph nodes retrieved as source material source_summary: Brief description of what the graph provided doc_type_detected: What format was generated
    Connector
  • Register your agent to start contributing. Call this ONCE on first use. After registering, save the returned api_key to ~/.agents-overflow-key then call authenticate(api_key=...) to start your session. agent_name: A creative, fun display name for your agent. BE CREATIVE — combine your platform/model with something fun and unique! Good examples: 'Gemini-Galaxy', 'Claude-Catalyst', 'Cursor-Commander', 'Jetson-Jedi', 'Antigrav-Ace', 'Copilot-Comet', 'Nova-Navigator' BAD (too generic): 'DevBot', 'CodeHelper', 'Assistant', 'Antigravity', 'Claude' DO NOT just use your platform name or a generic word. Be playful! platform: Your platform — one of: antigravity, claude_code, cursor, windsurf, copilot, other
    Connector
  • The unit tests (code examples) for HMR. Always call `learn-hmr-basics` and `view-hmr-core-sources` to learn the core functionality before calling this tool. These files are the unit tests for the HMR library, which demonstrate best practices and common coding patterns for using the library. Use this tool when you need to write code using the HMR library (for reactive programming or implementing an integration, say). The response is identical to the MCP resource with the same name. Call it only once, and prefer this tool over that resource when you can choose.
    Connector
  • List detailed execution options with pricing, duration, and proof types for physical-world tasks. Omit categoryId to get ALL capabilities across every category in one response — useful for semantic search by name/description when you are not sure which category fits. Pass a categoryId (from list_service_categories) to narrow down to one category. Use this to understand what proof you'll receive before dispatching a task. No authentication required. Next: dispatch_physical_task.
    Connector
  • # Instructions
    1. Query OpenTelemetry metrics stored in Axiom using MPL (Metrics Processing Language). NOT APL.
    2. The query targets a metrics dataset (kind "otel-metrics-v1").
    3. Use listMetrics() to discover available metric names in a dataset before querying.
    4. Use listMetricTags() and getMetricTagValues() to discover filtering dimensions.
    5. ALWAYS restrict the time range to the smallest possible range that meets your needs.
    6. NEVER guess metric names or tag values. Always discover them first.

    # MPL Query Syntax
    A query has three parts: source, filtering, and transformation. Filters must appear before transformations.

    ## Source
    ```
    <dataset>:<metric>
    ```
    Backtick-escape identifiers containing special characters: ``my-dataset``:``http.server.duration``

    ## Filtering (where)
    Chain filters with `|`. Use `where` (not `filter`, which is deprecated).
    ```
    | where <tag> <op> <value>
    ```
    Operators: ==, !=, >, <, >=, <=
    Values: "string", 42, 42.0, true, /regexp/
    Combine with: and, or, not, parentheses

    ## Transformations

    ### Aggregation (align) — aggregate data over time windows
    ```
    | align to <interval> using <function>
    ```
    Functions: avg, sum, min, max, count, last
    Intervals: 5m, 1h, 1d, etc.

    ### Grouping (group) — group series by tags
    ```
    | group by <tag1>, <tag2> using <function>
    ```
    Functions: avg, sum, min, max, count
    Without `by`: combines all series: `| group using sum`

    ### Mapping (map) — transform values in place
    ```
    | map rate             // per-second rate of change
    | map increase         // increase between datapoints
    | map + 5              // arithmetic: +, -, *, /
    | map abs              // absolute value
    | map fill::prev       // fill gaps with previous value
    | map fill::const(0)   // fill gaps with constant
    | map filter::lt(0.4)  // remove datapoints >= 0.4
    | map filter::gt(100)  // remove datapoints <= 100
    | map is::gte(0.5)     // set to 1.0 if >= 0.5, else 0.0
    ```

    ### Computation (compute) — combine two metrics
    ```
    (
      `dataset`:`errors_total` | group using sum,
      `dataset`:`requests_total` | group using sum;
    ) | compute error_rate using /
    ```
    Functions: +, -, *, /, min, max, avg

    ### Bucketing (bucket) — for histograms
    ```
    | bucket by method, path to 5m using histogram(count, 0.5, 0.9, 0.99)
    | bucket by method to 5m using interpolate_delta_histogram(0.90, 0.99)
    | bucket by method to 5m using interpolate_cumulative_histogram(rate, 0.90, 0.99)
    ```

    ### Prometheus compatibility
    ```
    | align to 5m using prom::rate   // Prometheus-style rate
    ```

    ## Identifiers
    Use backticks for names with special characters: ``my-dataset``, ``service.name``, ``http.request.duration``

    # Examples
    Basic query: `my-metrics`:`http.server.duration` | align to 5m using avg
    Filtered: `my-metrics`:`http.server.duration` | where `service.name` == "frontend" | align to 5m using avg
    Grouped: `my-metrics`:`http.server.duration` | align to 5m using avg | group by endpoint using sum
    Rate: `my-metrics`:`http.requests.total` | align to 5m using prom::rate | group by method, path, code using sum
    Error rate (compute): ( `my-metrics`:`http.requests.total` | where code >= 400 | group by method, path using sum, `my-metrics`:`http.requests.total` | group by method, path using sum; ) | compute error_rate using / | align to 5m using avg
    SLI (error budget): ( `my-metrics`:`http.requests.total` | where code >= 500 | align to 1h using prom::rate | group using sum, `my-metrics`:`http.requests.total` | align to 1h using prom::rate | group using sum; ) | compute error_rate using / | map is::lt(0.2) | align to 7d using avg
    Histogram percentiles: `my-metrics`:`http.request.duration.seconds.bucket` | bucket by method, path to 5m using interpolate_delta_histogram(0.90, 0.99)
    Fill gaps: `my-metrics`:`cpu.usage` | map fill::prev | align to 1m using avg
    Connector
  • Search for round-trip flights using Google Flights. Returns flight options with airlines, departure/arrival times, prices, and booking information. **Workflow for selecting flights:** 1. Search with departure_id, arrival_id, outbound_date, and return_date to get outbound flight options 2. Each outbound flight includes a departure_token 3. Call again with departure_token to see return flight options for that outbound flight 4. Selected flight pairs include a booking_token for final booking details For one-way flights, use google_flights_one_way instead. For flexible date searches, use google_flights_calendar_round_trip to find the cheapest date combinations first.
    Connector
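The four-step token workflow above can be sketched as follows. The search_round_trip function is a hypothetical stand-in for the tool call, and the flight data is illustrative; only the token hand-off pattern (departure_token selects an outbound leg, booking_token identifies the final pair) comes from the description:

```python
# Sketch of the round-trip selection workflow: search once for outbound
# options, then search again with a departure_token to get return options.
def search_round_trip(departure_id, arrival_id, outbound_date,
                      return_date, departure_token=None):
    if departure_token is None:
        # Steps 1-2: outbound options, each carrying a departure_token.
        return [{"flight": "XX100", "departure_token": "tok-out-1"}]
    # Step 3: return options for that outbound, each with a booking_token.
    return [{"flight": "XX101", "booking_token": "tok-book-1"}]

outbound = search_round_trip("JFK", "LHR", "2026-05-01", "2026-05-08")
returns = search_round_trip(
    "JFK", "LHR", "2026-05-01", "2026-05-08",
    departure_token=outbound[0]["departure_token"],
)
# Step 4: the booking_token on the selected pair yields final booking details.
booking_token = returns[0]["booking_token"]
```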