Glama
130,164 tools. Last updated 2026-05-07 01:18

"Resources and Techniques for Testing in React" matching MCP tools:

  • Fetch a public URL and inspect security-relevant response headers before you claim that a product or endpoint has a strong browser-facing security baseline. Use this for quick due diligence on public apps and docs sites. It checks for common headers such as HSTS, CSP, X-Frame-Options, Referrer-Policy, Permissions-Policy, and X-Content-Type-Options. It does not replace a real security review, authenticated testing, or vulnerability scanning.
    Connector
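A minimal local sketch of the kind of check the entry above describes, assuming a plain HTTPS GET is sufficient; `audit_headers` and `check_url` are invented names, and the header list follows the description:

```python
from urllib.request import urlopen

# Browser-facing security headers named in the tool description
SECURITY_HEADERS = [
    "Strict-Transport-Security",   # HSTS
    "Content-Security-Policy",     # CSP
    "X-Frame-Options",
    "Referrer-Policy",
    "Permissions-Policy",
    "X-Content-Type-Options",
]

def audit_headers(headers):
    """Split the baseline list into present (with values) and missing headers."""
    lower = {k.lower(): v for k, v in headers.items()}
    present = {h: lower[h.lower()] for h in SECURITY_HEADERS if h.lower() in lower}
    missing = [h for h in SECURITY_HEADERS if h.lower() not in lower]
    return present, missing

def check_url(url, timeout=10):
    """Fetch a public URL and audit its security-relevant response headers."""
    with urlopen(url, timeout=timeout) as resp:
        return audit_headers(dict(resp.headers))
```

As the description notes, presence of these headers is a quick baseline signal, not a substitute for a real security review.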
  • Is this specific multi-package version combo verified to work together? USE WHEN: pinning a stack (next@15 + react@19 + node@22); before recommending a version matrix. RETURNS: {compatible, conflicts[], notes}.
    Connector
  • Look up a MITRE ATLAS case study — a documented real-world AI/ML attack incident. Each case study links a sequence of ATLAS techniques (techniques_used) to the incident. Default response is SLIM (description truncated to 240 chars); pass include='full' for the verbose narrative. Use this after atlas_technique_search to find which incidents have exercised a given technique. Drill into the full techniques_used array via bulk_atlas_technique_lookup in a single call (next_calls emits exactly that hint). Returns 404 when the id is not in the synced catalog. Free: 100/hr, Pro: 1000/hr. Returns {case_study_id, name, description, techniques_used, next_calls}.
    Connector
  • Search the MITRE ATLAS catalog of AI/ML attack techniques by keyword, tactic, or maturity. Default response is SLIM (description truncated to 240 chars per row); pass include='full' for the verbose record. Pass exclude_id when chaining from atlas_technique_lookup to skip self in sibling-tactic searches. Use this to discover techniques matching a threat-model question, e.g. 'what techniques target LLM serving infrastructure?'. Drill into atlas_technique_lookup with any returned technique_id for the full description, ATT&CK bridge, and pivot hints. For broader cross-referencing: when a result has attack_reference_id, that bridges to D3FEND mitigations via d3fend_defense_for_attack. Free: 100/hr, Pro: 1000/hr. Returns {query (echoed filters), total, results [{technique_id, name, description (truncated by default), tactics, inherited_tactics, maturity, attack_reference_id, subtechnique_of}], next_calls}.
    Connector
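The ATLAS search and lookup entries above chain together: search returns SLIM rows (descriptions truncated to 240 chars), then lookup fetches the full record. A hypothetical local sketch with an invented one-row catalog standing in for the MCP tools:

```python
# Invented sample row; real catalogs come from the MCP server.
CATALOG = {
    "AML.T0051": {
        "technique_id": "AML.T0051",
        "name": "LLM Prompt Injection",
        "description": "x" * 300,  # long enough to trigger SLIM truncation
        "tactics": ["initial-access"],
    },
}

def atlas_technique_search(query, include="slim"):
    """Keyword search; SLIM responses truncate descriptions to 240 chars."""
    results = []
    for row in CATALOG.values():
        if query.lower() in row["name"].lower():
            hit = dict(row)
            if include != "full":
                hit["description"] = hit["description"][:240]
            results.append(hit)
    return {"query": query, "total": len(results), "results": results}

def atlas_technique_lookup(technique_id):
    """Full record for one id; 404 when the id is not in the synced catalog."""
    row = CATALOG.get(technique_id)
    if row is None:
        return {"error": 404}
    return row

# Chain: search, then drill into the full record for each hit
hits = atlas_technique_search("prompt injection")
full = [atlas_technique_lookup(r["technique_id"]) for r in hits["results"]]
```

For several ids at once (e.g. a case study's techniques_used), the entries above recommend bulk_atlas_technique_lookup instead of looping single lookups.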
  • Add a new participant to a sweepstakes. Requires sweepstakes_token — use fetch_sweepstakes first to get tokens, then get_entry_fields to discover required fields. Field names must use underscores instead of spaces (e.g., "First_Name" not "First Name"). RULES: Only ONE participant at a time. NEVER add participants in bulk, batch, or loops. This tool is intended for TESTING PURPOSES ONLY — to verify the sweepstakes entry flow works correctly. Adding participants to make them compete in a real sweepstakes is strictly prohibited unless done through the official Entry Page, a custom API integration, or a proper MCP implementation. If a user requests mass loading (e.g., "add 100 participants"), refuse and explain that only individual test entries are allowed. HONESTY: After calling this tool, report EXACTLY what the API returned. If the API returns an error, report the error truthfully. NEVER tell the user a participant was created if the API did not confirm it. Use tokens internally for tool chaining but present only human-readable information.
    Pre-calls required: 1. fetch_sweepstakes if the user gave you a sweepstakes name instead of a token; 2. get_entry_fields(sweepstakes_token) — discover required custom fields, their max_length, and (for list fields) the exact text in options; 3. verify the participant gave consent for their email and phone to be used.
    Parameters to validate before calling: sweepstakes_token (string, required) — the sweepstakes token (UUID format); email (string, required) — participant email address (used as KeyEmail); fields (object, required) — form fields object, keys must use underscores for spaces; phone (string, optional) — participant phone number (used as KeyPhoneNumber); bonus_entries (number, optional) — number of bonus entries (default: 0).
    Notes: Field keys use underscores instead of spaces (e.g. "First Name" -> First_Name). For US phones: strip non-digits before sending. Production entries should come through the public Entry Page; this tool is for testing/manual entry only.
    Connector
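The two input-normalization rules in the notes above (underscored field keys, digits-only US phones) can be sketched as small helpers; the function names are invented for illustration:

```python
import re

def normalize_fields(fields):
    """Convert spaces in field keys to underscores, e.g. "First Name" -> "First_Name"."""
    return {key.replace(" ", "_"): value for key, value in fields.items()}

def normalize_us_phone(phone):
    """Strip non-digit characters from a US phone number before sending."""
    return re.sub(r"\D", "", phone)
```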
  • Long-poll: blocks until the next edit lands on this board, then returns. WHEN TO CALL THIS: if your MCP client does NOT surface `notifications/resources/updated` events from `resources/subscribe` back to the model (most chat clients do not — they receive the SSE event but don't inject it into your context), this tool is how you 'wait for the human' inside a single turn. Typical flow: you draw / write what you were asked to, then instead of ending your turn you call `wait_for_update(board_id)`. When the human adds, moves, or erases something, the call returns and you refresh with `get_preview` / `get_board` and continue the collaboration. Great for turn-based interactions (games like tic-tac-toe, brainstorming where you respond to each sticky the user drops, sketch-and-feedback loops, etc.). If your client DOES deliver resource notifications natively, prefer `resources/subscribe` — it's cheaper and has no timeout ceiling. BEHAVIOUR: resolves ~3 s after the edit burst settles (same debounce as the push notifications — this is intentional so drags and long strokes collapse into one wake-up). Returns `{ updated: true, timedOut: false }` on a real edit, or `{ updated: false, timedOut: true }` if nothing happened within `timeout_ms`. On timeout, just call it again to keep waiting; chaining calls is cheap. `timeout_ms` is clamped to [1000, 55000]; default 25000 (leaves headroom under typical 60 s proxy timeouts).
    Connector
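The "call it again on timeout" flow described above can be sketched as a small loop; `wait_for_update` is passed in as a parameter so the control flow is runnable with a fake, and `wait_for_human` is an invented wrapper name:

```python
def wait_for_human(board_id, wait_for_update, max_polls=5):
    """Re-poll on timeout; return True once a real edit lands, False if we give up.

    In real use, wait_for_update is the MCP tool call described above; on a
    timeout result, chaining another call is cheap, so we simply loop.
    """
    for _ in range(max_polls):
        result = wait_for_update(board_id, timeout_ms=25000)
        if result["updated"]:
            return True  # now refresh with get_preview / get_board and respond
    return False
```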

Matching MCP Servers

Matching MCP Connectors

  • [Step 1 of crisis] Canonical crisis-resource payload (911, 988 Suicide & Crisis Lifeline, Crisis Text Line). Hardcoded — overrides any other tool when high-severity language is detected. Use when: The user mentions self-harm, suicidal ideation, recent attempt, or someone in immediate danger. Surface these resources prominently and stop other tool calls. Don't use when: No mention of crisis or imminent danger. Example: get_crisis_resources({})
    Connector
  • Search the MITRE D3FEND catalog of defensive techniques by keyword, tactic, or targeted artifact. Default response is SLIM (drops `uri` from each row — saves ~60 chars/row, ~30% on popular drills); pass include='full' for the verbose record. Pass exclude_id when chaining from d3fend_defense_lookup to skip self in sibling-artifact searches. Use to discover defenses applicable to a given threat model — e.g. 'what defenses harden access tokens?' (tactic=Harden + artifact='Access Token'). Drill into d3fend_defense_lookup with any returned defense_id for the ATT&CK technique mappings. Free: 100/hr, Pro: 1000/hr. Returns {query, total, results [{defense_id, label, uri (only when include=full), parent_label, tactic, artifact}], next_calls}.
    Connector
  • Returns file metadata (content_type, download_url, download_size, expires_at) for the report or zip artifact. Use artifact='report' (default) for the interactive HTML report (~700KB, self-contained with embedded JS for collapsible sections and interactive Gantt charts — open in a browser). Use artifact='zip' for the full pipeline output bundle (md, json, csv intermediary files that fed the report). While the task is still pending or processing, returns {ready:false,reason:"processing"}. Check readiness by testing whether download_url is present in the response. Once ready, present download_url to the user or fetch and save the file locally. Download URLs expire after 15 minutes (see expires_at); call plan_file_info again to get a fresh URL if needed. Terminal error codes: generation_failed (plan failed), content_unavailable (artifact missing). Unknown plan_id returns error code PLAN_NOT_FOUND.
    Connector
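The readiness check described above (download_url presence as the signal, with expiring URLs) can be sketched as a polling helper; `plan_file_info` is passed in as a stand-in for the MCP tool and `get_download_url` is an invented name:

```python
import time

def get_download_url(plan_id, plan_file_info, artifact="report",
                     poll_seconds=5, max_polls=3):
    """Poll plan_file_info until download_url appears, or give up.

    Readiness is signalled by download_url being present in the response.
    The URL expires after ~15 minutes (see expires_at), so call again for
    a fresh one if needed.
    """
    for attempt in range(max_polls):
        info = plan_file_info(plan_id, artifact=artifact)
        if "download_url" in info:
            return info["download_url"]
        if attempt < max_polls - 1:
            time.sleep(poll_seconds)
    return None
```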
  • Find working SOURCE CODE examples from 37 indexed Senzing GitHub repositories. Indexes only source code files (.py, .java, .cs, .rs) and READMEs — NOT build files (Cargo.toml, pom.xml), data files (.jsonl, .csv), or project configuration. For sample data, use get_sample_data instead. Covers Python, Java, C#, and Rust SDK usage patterns including initialization, record ingestion, entity search, redo processing, and configuration. Also includes message queue consumers, REST API examples, and performance testing. Supports three modes: (1) Search: query for examples across all repos, (2) File listing: set repo and list_files=true to see all indexed source files in a repo, (3) File retrieval: set repo and file_path to get full source code. Use max_lines to limit large files. Returns GitHub raw URLs for file retrieval — fetch to read the source code.
    Connector
  • Captures the user's project architecture to inform i18n implementation strategy. When to use: called during i18n_checklist Step 1 — the checklist tool will tell you when to call this. If you're implementing i18n: 1. call i18n_checklist(step_number=1, done=false) FIRST; 2. the checklist will instruct you to call THIS tool; 3. then use the results for subsequent steps. Do NOT call this before calling the checklist tool. Why this matters: frameworks handle i18n through completely different mechanisms — the same outcome (locale-aware routing) requires different code for Next.js vs TanStack Start vs React Router, and without accurate detection you'll implement patterns that don't work. How to use: 1. examine the user's project files (package.json, directories, config files); 2. identify framework markers and version; 3. construct a detectionResults object matching the schema; 4. call this tool with your findings; 5. store the returned framework identifier for get_framework_docs calls. The schema requires: framework — exact variant (nextjs-app-router, nextjs-pages-router, tanstack-start, react-router); majorVersion — specific version number (13-16 for Next.js, 1 for TanStack Start, 7 for React Router); sourceDirectory, hasTypeScript, packageManager; any detected locale configuration; any detected i18n library (currently only react-intl supported). What you get: the framework identifier needed for documentation fetching — the 'framework' field in the response is the exact string you'll use with get_framework_docs.
    Connector
  • Submit an extension request for existing delegated resources on TronSave via authenticated REST `POST /v2/extend-request`. Requires a logged-in MCP session created by the `tronsave_login` tool: include `mcp-session-id: <sessionId>` returned by `tronsave_login` on subsequent MCP requests. Internal tools never accept API keys via tool arguments; the server forwards the API key cached in session to TronSave internal REST endpoints. Side effect: creates an extension order and may commit TRX from the internal account. `extendData` must follow the REST contract (see schema on each row). Populate it from TronSave outside this MCP—for example the authenticated `POST /v2/get-extendable-delegates` response field `extendData`, or another TronSave client. Do not copy rows blindly from `tronsave_list_extendable_delegates` (GraphQL); that payload shape differs and is for market discovery only.
    Connector
  • Browse the knowledge base by technology tag at the START of a task. Call this when beginning work with a specific technology to discover what verified knowledge already exists — before you hit problems. Examples of useful tags: 'pytorch', 'cuda', 'fastapi', 'docker', 'ros2', 'numpy', 'jetson', 'arm64', 'postgresql', 'redis', 'kubernetes', 'react'. Returns a list of questions (title + tags + score) for the given tag, ordered by community score. Call `get_answers` on relevant results.
    Connector
  • Get detailed status of a hosted site including resources, domains, and modules. Requires: API key with read scope. Args: slug: Site identifier (the slug chosen during checkout) Returns: {"slug": "my-site", "plan": "site_starter", "status": "active", "domains": ["my-site.borealhost.ai"], "modules": {...}, "resources": {"memory_mb": 512, "cpu_cores": 1, "disk_gb": 10}, "created_at": "iso8601"} Errors: NOT_FOUND: Unknown slug or not owned by this account
    Connector
  • Build an AccountPermissionUpdate transaction that grants the PowerSun platform permission to delegate/undelegate resources and optionally vote on your behalf. Returns an unsigned transaction that you must sign with your private key and then broadcast using broadcast_signed_permission_tx. All existing account permissions are preserved. Requires authentication.
    Connector
  • Authenticate with A-Team. Required before any tenant-aware operation (reading solutions, deploying, testing, etc.). The user can get their API key at https://mcp.ateam-ai.com/get-api-key. Only global endpoints (spec, examples, validate) work without auth. IMPORTANT: Even if environment variables (ADAS_API_KEY) are configured, you MUST call ateam_auth explicitly — env vars alone are not sufficient. For cross-tenant admin operations, use master_key instead of api_key.
    Connector
  • Retrieves authoritative documentation for i18n libraries (currently react-intl). When to use: called during i18n_checklist Steps 7-10 — the checklist tool will tell you when you need i18n library documentation, typically when setting up providers, translation APIs, and UI components. If you're implementing i18n, let the checklist guide you; it will tell you when to fetch library docs. Why this matters: different i18n libraries have different APIs and patterns; official docs ensure correct API usage, proper initialization, and best practices for the installed version. How to use (two-phase workflow): 1. Discovery — call with action="index"; 2. Reading — call with action="read" and section_id. Parameters: library (currently only "react-intl" supported); version (use "latest"); action ("index" or "read"); section_id (required for action="read"). Example: get_i18n_library_docs(library="react-intl", action="index") then get_i18n_library_docs(library="react-intl", action="read", section_id="0:3"). What you get — Index: available documentation sections; Read: full API references and usage examples.
    Connector
  • Attach a payment card. Required before booking. For testing: {"token": "tok_visa"} For production: {"payment_method_id": "pm_xxx"} from Stripe.js One-time setup — all future charges are automatic. Requires GitHub star verification.
    Connector
  • Look up a MITRE ATLAS technique — the AI/ML adversarial attack catalog. ATLAS catalogues TTPs targeting machine learning systems: prompt injection, model evasion, training data poisoning, model theft, etc. Roughly 80% of ATLAS techniques are AI/ML-specific (no ATT&CK bridge); 20% mirror an enterprise ATT&CK technique via attack_reference_id — use that to pivot to D3FEND defenses (d3fend_defense_for_attack) and CVE search. Sub-techniques inherit `tactics` from the parent (inherited_tactics=true flag) when ATLAS upstream leaves them empty. Use this tool when the user asks about AI/ML threats, LLM red-teaming, or adversarial ML; for multiple techniques in one call (e.g. drilling into a case study's techniques_used), prefer bulk_atlas_technique_lookup. Returns 404 when the id is not in the synced ATLAS catalog. Free: 100/hr, Pro: 1000/hr. Returns {technique_id, name, description, tactics, inherited_tactics, maturity (demonstrated|feasible|realized), attack_reference_id, attack_reference_url, subtechnique_of, created_date, modified_date, next_calls}.
    Connector
  • Returns the deployment artifacts for a perspective: the share_url and direct_url for outreach plus ready-to-paste embed snippets (fullpage, widget, popup, slider, float, card) and an SDK reference (script URL, events, URL/brand/theme params, JS API methods, callbacks). Behavior: - Read-only. - Errors when the perspective is not found or you do not have access. - URLs are stable per perspective. Conversations started from these embeds count toward the workspace's quota (preview conversations do not — see perspective_get_preview_link). - Use the snippet returned for that specific perspective rather than hand-rolling URLs. - share_url / direct_url accept these URL params: name, email, returnUrl, plus arbitrary tracking keys (source, campaign, etc.). When to use this tool: - Deploying a perspective to a real site, email, or app surface. - Generating SDK integration code (Next.js layout, raw HTML, popup trigger button, etc.). - Looking up event names or URL parameters the embed accepts. When NOT to use this tool: - Internal smoke testing — use perspective_get_preview_link (preview conversations don't count toward quota). - Inspecting outline / setup — use perspective_get. Typical flow: 1. perspective_create → design 2. perspective_get_preview_link → test 3. perspective_update → refine 4. perspective_get_embed_options → deploy 5. automation_create → (form / lead-capture contexts) wire conversation data to a CRM or backend For coding assistants: after returning embed options, help the user wire the snippet into their app: - Popup / Slider / Float: add the script before `</body>` in HTML, or in `_app.tsx` / `layout.tsx` for React/Next.js. - Widget: place the div where the widget should appear. - Fullpage: use in a dedicated page or iframe container. - Card: use as a preview link in landing pages or emails. 
For form / lead-capture contexts: after deployment, ask whether they want to set up an automation (automation_create) to forward each completed conversation to their CRM, database, or notification channel. Examples: - Optional URL params on the share link: `email` (pre-fills participant email), `returnUrl` (redirect after the conversation completes), and arbitrary `key=value` pairs for tracking (e.g. `source=email`, `campaign=q4-launch`, `user_id=...`). Embed snippets accept additional appearance params (brand colors, theme) — see the `sdk.parameters` section in the response. - Each perspective has unique URLs — always use the URL returned for that specific perspective.
    Connector