Glama
127,264 tools. Last updated 2026-05-05 12:34

"A tool or app for saving and editing todo tasks" matching MCP tools:

  • Replace a project's brand profile with the supplied values. All fields are required — the whole profile is overwritten, so first call get_project_profile, merge your changes into the existing values, then send the complete profile here. Saving triggers a background refresh of prompt suggestions. Confirm changes with the user before calling. Audience distribution percentages must sum to 100. The project's display name is not part of the profile and cannot be changed via this tool.
    Connector
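    A minimal sketch of the read-merge-write flow this profile tool requires, assuming the profile is a plain dict. Only get_project_profile and the sum-to-100 rule come from the listing; the field names (tone, audience_distribution) and the update tool's name are illustrative.

      # Sketch: merge edits into the full profile before overwriting it (field names illustrative).
      def build_updated_profile(current: dict, changes: dict) -> dict:
          """Start from the get_project_profile result so every required field is resent."""
          profile = {**current, **changes}
          audience = profile.get("audience_distribution", {})
          if audience and sum(audience.values()) != 100:
              raise ValueError("audience distribution percentages must sum to 100")
          return profile

      # current = result of get_project_profile(...)
      # updated = build_updated_profile(current, {"tone": "friendly"})
      # confirm the change with the user, then send `updated` as the complete profile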
  • Fetches any public web page and returns clean, readable plain text stripped of HTML, navigation, scripts, advertisements, and boilerplate. Returns the page title, meta description, word count, and main body text ready for analysis or summarisation. Use this tool when an agent needs to read the content of a specific web page or article URL — for example to summarise an article, extract facts from a page, verify a claim by reading the source, or convert a web page into plain text to pass to another tool. Pass article URLs returned by web_news_headlines to this tool to read full article content. Do not use this tool to discover current news headlines — use web_news_headlines instead. Does not execute JavaScript — best suited for standard HTML content pages. Will not work with paywalled, login-protected, or JavaScript-rendered single-page applications.
    Connector
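    As a hedged illustration of the headlines-to-article handoff described above: the sketch assumes a generic call_tool dispatcher, a fetch tool named fetch_page_text, and the listed return fields (title, body text); only web_news_headlines is named in the listing.

      # Illustrative only: read full article text for each headline URL.
      def read_articles(call_tool, topic: str) -> list[dict]:
          headlines = call_tool("web_news_headlines", {"query": topic})     # returns article URLs
          articles = []
          for item in headlines.get("articles", []):
              page = call_tool("fetch_page_text", {"url": item["url"]})     # tool name assumed
              articles.append({"title": page["title"], "text": page["body_text"]})
          return articles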
  • DESTRUCTIVE: Permanently delete an app, its Docker service, volume, and all data including version history. This cannot be undone. You MUST confirm with the user before calling this tool.
    Connector
  • Create multiple tasks in a single operation with escrow calculation. ⚠️ **WARNING**: This tool BYPASSES the standard payment flow by calling db.create_task() directly instead of using the REST API (POST /api/v1/tasks). This means it skips x402 payment verification and balance checks. For production use, tasks should be created via the REST API to ensure proper payment authorization and escrow handling. Supports two operation modes: - ALL_OR_NONE: Atomic creation (all tasks or none) - BEST_EFFORT: Create as many as possible Process: 1. Validates all tasks in batch 2. Calculates total escrow required 3. Creates tasks (atomic or best-effort) - **BYPASSING PAYMENT FLOW** 4. Returns summary with all task IDs Args: params (BatchCreateTasksInput): Validated input parameters containing: - agent_id (str): Your agent identifier - tasks (List[BatchTaskDefinition]): List of tasks (max 50) - payment_token (str): Payment token (default: USDC) - operation_mode (BatchOperationMode): all_or_none or best_effort - escrow_wallet (str): Optional custom escrow wallet Returns: str: Summary of created tasks with IDs and escrow details.
    Connector
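    A sketch of a BatchCreateTasksInput payload built from the documented fields. The tool's registered name and the per-task fields inside BatchTaskDefinition are assumptions; the payment-flow caveat is repeated from the description.

      # Illustrative arguments for the batch-create tool (task fields are placeholders).
      batch_params = {
          "agent_id": "0xYOUR_AGENT_ID",
          "payment_token": "USDC",            # default per the description
          "operation_mode": "all_or_none",    # or "best_effort"
          "escrow_wallet": None,              # optional custom escrow wallet
          "tasks": [                          # max 50 entries
              {"title": "Label 100 images", "category": "data_labeling", "reward": 5.0},
              {"title": "Verify an address", "category": "physical_presence", "reward": 12.0},
          ],
      }
      # Note: this path bypasses x402 payment verification; production task creation
      # should go through POST /api/v1/tasks instead.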
  • Get tasks from the Execution Market system with optional filters. Use this to monitor your published tasks or browse available tasks. Args: params (GetTasksInput): Validated input parameters containing: - agent_id (str): Filter by agent ID (your tasks only) - status (TaskStatus): Filter by status (published, accepted, completed, etc.) - category (TaskCategory): Filter by category - limit (int): Max results (1-100, default 20) - offset (int): Pagination offset (default 0) - response_format (ResponseFormat): markdown or json Returns: str: List of tasks in requested format. Examples: - Get my published tasks: agent_id="0x...", status="published" - Get all completed tasks: status="completed" - Browse physical tasks: category="physical_presence"
    Connector
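    The examples in the description map directly onto a GetTasksInput argument dict; a minimal sketch follows (the tool's registered name is not given in the listing).

      # "Get my published tasks", as in the description's first example.
      my_published = {
          "agent_id": "0xYOUR_AGENT_ID",
          "status": "published",
          "limit": 20,                     # 1-100, default 20
          "offset": 0,
          "response_format": "markdown",
      }

      # "Browse physical tasks"
      physical_tasks = {"category": "physical_presence", "limit": 50, "response_format": "json"}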

Matching MCP Servers

  • License: A · Quality: - · Maintenance: C
    Provides AI assistants with a standardized interface to interact with the Todo for AI task management system. It enables users to retrieve project tasks, create new entries, and submit completion feedback through natural language.
    Last updated · 39 · Apache 2.0
  • License: A · Quality: A · Maintenance: D
    A comprehensive and efficient Model Context Protocol server for task management that works with Claude, Cursor, and other MCP clients, providing powerful search, filtering, and organization capabilities across multiple file formats.
    Last updated · 5 · 34 · 44 · MIT

Matching MCP Connectors

  • An MCP server for a simple todo list

  • App Store and Google Play downloads and charts over time; Android apps are identified by bundle ID. Free key at trendsmcp.ai

  • Count CUSTOM PRODUCT events for a specific project in a time window, optionally filtered to one event name and/or one user. Custom events are emitted by explicit analytics.track() calls in app code (signup_completed, payment_succeeded, etc.). This does NOT count page views — use pageviews_count or weekly_digest for those. Returns count, unique visitors, and a `truncated` flag if the scan hit the maximum scan size.
    Connector
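    A hedged sketch of an argument payload for the event counter above. The time-window, event-name, and user filters come from the description; the exact parameter names are assumptions.

      # Count `payment_succeeded` events for one project over the last 7 days (names illustrative).
      from datetime import datetime, timedelta, timezone

      now = datetime.now(timezone.utc)
      count_args = {
          "project_id": "proj_123",                        # assumed parameter name
          "event_name": "payment_succeeded",               # optional: one event name
          "user_id": None,                                 # optional: restrict to one user
          "start": (now - timedelta(days=7)).isoformat(),
          "end": now.isoformat(),
      }
      # The response includes count, unique visitors, and a `truncated` flag when the scan
      # hits the maximum scan size; check `truncated` before treating the count as exact.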
  • Replay an existing test suite live against the dev's LOCAL APP (no mocks, no docker spin-up). Returns a playbook that delegates to the enterprise CLI `keploy test-suite`, which walks each suite's steps, fires HTTP requests at base_path, evaluates assertions, and uploads per-suite results to api-server. The CLI prints a final pass/fail summary table plus a "Report:" URL to stdout. Output produces a TEST SUITE REPORT — it answers "does the suite hold up against the actual current system?". ═══════════════════════════════════════════════════════════════════ DISAMBIGUATION — pick this tool vs. replay_sandbox_test: ═══════════════════════════════════════════════════════════════════ USE replay_test_suite (THIS TOOL) when the dev says: * "run the test suite" / "run my test suites" * "execute test suite X" / "run suite 810d3ebe…" * "test the suite again" / "rerun the suite" * "validate the suite changes" (after editing a suite) * "smoke test against the live app" Default reading: bare verbs "run" / "execute" / "test" applied to "the suite" mean LIVE-APP execution, NOT replay against captured mocks. USE replay_sandbox_test INSTEAD when the dev says: * "run my sandbox tests" / "replay my sandbox tests" * "integration-test my app" / "check if my mocks still match" * "replay the captured tests" / "run against the recorded mocks" Trigger keyword: "sandbox" / "replay" / "mocks" / "integration-test" — explicit signal that the dev wants captured-mock replay, not live-app. After a record_sandbox_test run, the natural next step is replay_sandbox_test (replay against the freshly captured mocks). After create_test_suite / update_test_suite, the natural next step is replay_test_suite (validate the new/edited suite against the live app). When the dev's verb is bare ("run the suite") and the prior turn was create/update, prefer THIS tool. When the prior turn was record, ASK the dev if unsure — the verbs overlap and silently picking sandbox-replay can mask code-change failures with mock-replay noise. USE THIS for: re-running previously-created suites against a running local app — verifying a regression after a code change, smoke-testing a branch, re-validating after editing a suite. DO NOT USE this for: validating a NEW suite that hasn't been inserted yet (use create_test_suite — it runs the suite twice as part of validation), or for running suites against the captured-mock copy of the app (use replay_sandbox_test — captured-mock replay flow). ═══════════════════════════════════════════════════════════════════ DISCOVERY — when the dev hands you a bare suite_id with no app_id / branch_id: ═══════════════════════════════════════════════════════════════════ Suites live on a (app_id, branch_id) tuple. A bare suite_id has no on-disk hint about which app or branch holds it; you have to RESOLVE both before calling this tool. Walk these steps in order — STOP as soon as getTestSuite returns 200: 1. Detect the dev's git branch: Bash `git rev-parse --abbrev-ref HEAD` in app_dir. If exit non-zero / output is "HEAD" → not a git repo / detached HEAD; ASK the dev for the Keploy branch name (don't invent one). 2. Resolve candidate apps via the cwd basename: Bash `basename $(pwd)` → call listApps with q=<basename> (case-insensitive substring match). Usually 1–2 candidates (e.g. "orderflow" → matches "orderflow" and "orderflow.producer"). If 0 → ASK the dev for the app_id; if >1 → walk every candidate in step 4. 3. 
For each candidate app, call list_branches({app_id}) and find the branch whose `name` matches the git branch from step 1. That gives you {branch_id, status}. If no match → that app's not the owner; try the next candidate. If status is closed/merged → ask the dev whether to use this branch anyway. 4. Verify with getTestSuite({app_id, suite_id, branch_id=<from step 3>}). 200 → resolved; 404 → wrong app, try next candidate. 5. If steps 2–4 exhaust without a hit, the suite is on a branch whose name doesn't match the git branch (the dev created it with a custom name, or it's on main). Then: call list_branches on each candidate app and try every OPEN branch's branch_id with getTestSuite, then try main (branch_id omitted). If still nothing → ASK the dev for the {app_id, branch_id} pair. The reverse "look up suite_id globally" path doesn't exist — auditing is branch-scoped, so resolution starts from a branch context. After resolving once in a session, REUSE the {app_id, branch_id} for any subsequent suite-targeting call (delete_test_suite / update_test_suite / replay_test_suite); don't re-walk discovery for every action. ═══════════════════════════════════════════════════════════════════ INPUTS ═══════════════════════════════════════════════════════════════════ * app_id (required) — Keploy app ID. Same value used for create_test_suite / list_branches. * branch_id (required) — Keploy branch UUID. Resolve via the explicit two-step flow BEFORE calling: (1) Bash `git rev-parse --abbrev-ref HEAD` in app_dir; (2) call create_branch tool with {app_id, name: <git branch>} — find-or-create returns {branch_id, ...}; pass it here. Direct main writes are blocked. * base_path (required) — base URL of the dev's local app, e.g. http://localhost:8080. Each suite step's relative path is appended to this. * suite_ids (optional) — list of suite IDs to run. Omit / empty = run every suite registered for app_id on the branch. * header (optional) — single header to inject into every request, e.g. "Cookie: session=…". Same shape as the CLI's -H flag. * app_dir (optional) — absolute path to the dev's repo root (where the app is running). Defaults to '.' (cwd). The CLI invocation cd's here. ═══════════════════════════════════════════════════════════════════ HOW THIS TOOL WORKS ═══════════════════════════════════════════════════════════════════ This tool DOES NOT execute the suite itself. It returns a "playbook" — a small array of shell steps for you (Claude) to walk via Bash. The playbook spawns the enterprise CLI `keploy test-suite` in the foreground; the CLI: 1. Validates the branch exists + is writable (fails fast with a clear message if not). 2. Loads suites from api-server (filtered by --suite-id when supplied; otherwise every suite on the branch). 3. For each suite: fires step requests at base_path, evaluates assertions, records per-step results. 4. Uploads a TestSuiteRun + TestSuiteReport entry to api-server (?branch_id=<uuid>). 5. Prints a summary table to stdout, exits 0 on all-pass / 1 on any failure. Walk the playbook in order. Surface the CLI's stdout to the dev — the table shows which suites passed / failed / were "buggy" (suite-level verdict separate from individual step failures). PREREQUISITES the playbook assumes: * The dev's app is up and reachable at base_path. * `keploy` binary is on PATH. If missing, install before calling this tool: `curl --silent -O -L https://keploy.io/install.sh && source install.sh`. * Either ~/.keploy/cred.yaml exists (API key) or KEPLOY_API_KEY is exported.
    Connector
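    The discovery walk described above (git branch, candidate apps, branch match, getTestSuite probe) is mechanical enough to sketch. The sketch assumes a generic call_tool dispatcher that returns None on a 404 and simplified record shapes; read it as pseudocode for the documented steps 1-4, not as the server's actual API.

      # Sketch: resolve a bare suite_id to (app_id, branch_id) per the documented walk.
      import os
      import subprocess

      def resolve_suite(call_tool, suite_id: str, app_dir: str = "."):
          # Step 1: detect the dev's git branch.
          git = subprocess.run(["git", "rev-parse", "--abbrev-ref", "HEAD"],
                               cwd=app_dir, capture_output=True, text=True)
          if git.returncode != 0 or git.stdout.strip() == "HEAD":
              raise RuntimeError("not a git repo / detached HEAD: ask the dev for the branch name")
          git_branch = git.stdout.strip()

          # Step 2: candidate apps from the cwd basename.
          basename = os.path.basename(os.path.abspath(app_dir))
          for app in call_tool("listApps", {"q": basename}):
              # Step 3: find the branch whose name matches the git branch.
              branches = call_tool("list_branches", {"app_id": app["id"]})
              match = next((b for b in branches if b["name"] == git_branch), None)
              if match is None:
                  continue
              # Step 4: verify ownership with getTestSuite.
              suite = call_tool("getTestSuite", {"app_id": app["id"],
                                                 "suite_id": suite_id,
                                                 "branch_id": match["id"]})
              if suite is not None:
                  return app["id"], match["id"]
          # Step 5: fall back to open branches / main, or ask the dev.
          raise RuntimeError("suite not found via the git branch; try open branches, main, or ask the dev")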
  • List available categories of physical-world tasks. Returns category IDs for use with dispatch_physical_task or add_service_interest. Any real-world task can be dispatched even without a category. No authentication required. Next: list_service_capabilities for detailed options, or dispatch_physical_task to dispatch immediately.
    Connector
  • Use when you have lost track of a task_id or want to review your past human task requests. Returns all tasks you have submitted, newest first: id, status, description, result, and timestamps.
    Connector
  • Install an app template on a VPS/Cloud site. Starts a background installation. Poll get_app_status() for progress. Requires: API key with write scope. VPS or Cloud plan only. Args: slug: Site identifier template: App template slug. Available: django, laravel, nextjs, nodejs, nuxtjs, rails, static, forge app_name: Short name for the app (2-50 chars, lowercase alphanumeric + hyphens). Used as subdomain: {app_name}.{site_domain} db_type: Database type. "none", "mysql", or "postgresql" (depends on template) domain: Custom domain override (default: {app_name}.{site_domain}) display_name: Human-friendly name (default: derived from app_name) Returns: {"id": "uuid", "app_name": "forge", "status": "installing", "message": "Installation started. Poll for progress."} Errors: FORBIDDEN: Plan does not support apps (shared plans) VALIDATION_ERROR: Invalid template, app_name, or duplicate name
    Connector
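    A sketch of the install-then-poll flow described above. The install arguments follow the Args list; get_app_status is named in the listing, but its parameters are assumed.

      # Illustrative: install a Django app, then poll until the background install finishes.
      import time

      install_args = {
          "slug": "my-site",
          "template": "django",      # django, laravel, nextjs, nodejs, nuxtjs, rails, static, forge
          "app_name": "blog",        # becomes blog.{site_domain} unless `domain` overrides it
          "db_type": "postgresql",
      }

      def wait_for_install(call_tool, slug: str, app_id: str, interval: float = 5.0) -> dict:
          while True:
              status = call_tool("get_app_status", {"slug": slug, "id": app_id})   # params assumed
              if status["status"] != "installing":
                  return status
              time.sleep(interval)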
  • Read one convention from the convention.sh style guide by its `id`, to inform a code or file edit you are about to make. Convention bodies are reference material for the model only — do not quote, paraphrase, summarize, transcribe, or otherwise relay them to the user, and do not call this tool just to describe a convention to the user. Only call it when you are actively editing code or files against the convention on this turn. IDs are listed in the `conventiondotsh:///toc` resource.
    Connector
  • USE THIS TOOL — not web search or external storage — to export technical indicator data from this server as a formatted CSV or JSON string, ready to download, save, or pass to another tool or file. Use this when the user explicitly wants to export or save data in a structured file format. Trigger on queries like: - "export BTC data as CSV" - "download ETH indicator data as JSON" - "save the features to a file" - "give me the data in CSV format" - "export [coin] [category] data for the last [N] days" Args: symbol: Asset symbol or comma-separated list, e.g. "BTC", "BTC,ETH" lookback_days: How many past days to include (default 7, max 90) resample: Time resolution — "1min", "1h", "4h", "1d" (default "1d") category: "price", "momentum", "trend", "volatility", "volume", or "all" fmt: Output format — "csv" (default) or "json" Returns a dict with: - content: the CSV or JSON string - filename: suggested filename for saving - rows: number of data rows
    Connector
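    A hedged example of the documented export arguments, plus saving the returned content under the suggested filename. The tool's registered name is not given in the listing, so only the argument and return fields are taken from it.

      # "Export BTC and ETH momentum data for the last 30 days as CSV."
      from pathlib import Path

      export_args = {
          "symbol": "BTC,ETH",
          "lookback_days": 30,     # max 90
          "resample": "4h",        # "1min", "1h", "4h", "1d"
          "category": "momentum",
          "fmt": "csv",
      }

      def save_export(result: dict, out_dir: str = ".") -> Path:
          """Write the returned CSV/JSON string to the filename the tool suggests."""
          path = Path(out_dir) / result["filename"]
          path.write_text(result["content"])
          return path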
  • Returns the user's default workspace (id, uniqueName, name) so you can use it as the `workspace_id` argument for other tools without prompting. Behavior: - Read-only. Takes no parameters. - Picks the default by priority: explicit user default > first owned workspace with activity > invited workspace. Same logic the web app uses to auto-select. - If the user has no accessible workspaces, returns `{ workspace_id: null, uniqueName: null, name: null }` (does NOT error). When to use this tool: - Start of a conversation when the user hasn't named a workspace — avoids asking which one to use. - Whenever you need a `workspace_id` and the user implied "my workspace" or didn't specify. When NOT to use this tool: - The user names a specific workspace — use workspace_list to find it by name. - You already have a `workspace_id` and just want its details — use workspace_get. - Enumerating every accessible workspace — use workspace_list. If this returns nulls, the user has no accessible workspaces (owned or invited) — prompt them to create a new workspace or accept an outstanding invitation in the web app, rather than calling other workspace tools.
    Connector
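    The selection priority spelled out above (explicit user default, then first owned workspace with activity, then invited workspace) reads naturally as a small fallback chain; a sketch follows with illustrative record fields. The real tool takes no parameters and returns nulls rather than erroring when nothing is accessible.

      # Pseudocode for the documented default-workspace priority (field names illustrative).
      def pick_default(workspaces: list[dict]) -> dict | None:
          explicit = [w for w in workspaces if w.get("is_default")]
          owned_active = [w for w in workspaces if w.get("owned") and w.get("has_activity")]
          invited = [w for w in workspaces if w.get("invited")]
          for bucket in (explicit, owned_active, invited):
              if bucket:
                  return bucket[0]
          return None   # maps to {workspace_id: null, uniqueName: null, name: null}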
  • Internal NotFair tool-feedback channel. Privately report MCP/tool friction that got in the way of helping the user — unclear descriptions, missing capabilities, clunky workflows, confusing errors, or duplicate tools. This is not customer support, not user feedback, and not a success/quality rating. It is an internal NotFair engineering signal. When tool design gets in the way of a real user task, file one concrete report here so we can fix the tool surface. The user benefits in their next session; every other agent serving every other user benefits too. AUTO-SURFACE THIS WHEN: - A tool description was unclear and you weren't sure how to use it. - You wanted to accomplish something for the user but no tool existed for it. - A workflow took many tool calls when one bulk operation could have replaced them. - An error message returned by a tool didn't help you debug or recover. - Two tools have overlapping purposes and the choice was confusing. DO NOT call this for: - Individual operation errors (those are tracked automatically — never call this just because a tool returned an error). - Confirming that a task succeeded. - Rating your own output quality. - Anything the user explicitly asked you to escalate (use the in-app feedback form for that). Be specific. Reference tools by name and propose a concrete change. Keep yourself to at most 2 calls per session. Submissions go directly to the NotFair team; the user does not see this channel.
    Connector
  • Use this tool to persist important information across sessions so it's available in future conversations. Triggers: 'remember this', 'save this for later', 'keep track of this', 'store my preferences', 'note this down'. Also use proactively when the user shares project specs, personal preferences, ongoing tasks, or any context they're likely to reference again — even without being asked. Give it a short descriptive key (e.g. 'project-spec', 'user-prefs', 'todo-list'). Saving to the same key overwrites it. Expires in 30 days by default.
    Connector
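    A minimal sketch of a save call for the memory tool above. The key/overwrite/30-day-expiry behaviour comes from the description; the argument names and the expiry knob are assumptions.

      # Illustrative: persist the user's todo list under a short descriptive key.
      memory_args = {
          "key": "todo-list",          # saving to the same key overwrites the previous value
          "value": "1) Ship v2 docs  2) Renew TLS cert  3) Book flights",
          "ttl_days": 30,              # assumed parameter; 30 days is the stated default expiry
      }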
  • AZURE DEVOPS ONLY -- Query Work Items (Bugs, Tasks, FDDs, User Stories, CRs) in Azure DevOps. [~] PRIORITY TRIGGER: use this tool when the user mentions 'FDD', 'RDD', 'IDD', 'CR', 'Task', 'Workitem', 'Work Item', 'Bug', 'User Story', 'Feature', 'Issue', 'ticket', 'sprint', 'backlog', 'DevOps', 'liste des tâches', 'show tasks', 'find bugs', '#1234', 'WI#'. NEVER use this tool for: D365 labels (@SYS/@TRX), X++ code, AOT objects, tables, classes, forms, enums, error messages, 'c'est quoi le label', 'search_labels', 'libellé', 'label D365'. For labels -> use search_labels. For D365 code -> use search_d365_code or get_object_details. Shortcuts: 'bugs' (all active bugs), 'my bugs' (assigned to me), 'recent' (updated last 7 days), 'sprint' (current iteration). Or pass any WIQL SELECT statement or a free-text title search. Use '*' with filters only. Returns max 50 work items with ID, title, type, state, priority, area, assigned-to. Requires DEVOPS_ORG_URL + DEVOPS_PAT env vars.
    Connector
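    Since the work-item tool above accepts any WIQL SELECT statement, a hedged example query follows. The wrapping argument name is an assumption; the WIQL fields and the @Me / @Today macros are standard Azure DevOps syntax.

      # Active bugs assigned to me, updated in the last 7 days (mirrors the 'my bugs' + 'recent' shortcuts).
      wiql = """
      SELECT [System.Id], [System.Title], [System.State], [Microsoft.VSTS.Common.Priority]
      FROM WorkItems
      WHERE [System.WorkItemType] = 'Bug'
        AND [System.State] = 'Active'
        AND [System.AssignedTo] = @Me
        AND [System.ChangedDate] >= @Today - 7
      ORDER BY [Microsoft.VSTS.Common.Priority] ASC
      """
      query_args = {"query": wiql}   # argument name assumed; requires DEVOPS_ORG_URL + DEVOPS_PAT env vars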
  • Lists all GA accounts and GA4 properties the user can access, including web and app data streams. Use this to discover propertyId, appStreamId, measurementId, or firebaseAppId values for reports.
    Connector