Glama
127,192 tools. Last updated 2026-05-05 10:00

"Information about MCP Feedback-Enhanced Systems or Techniques" matching MCP tools:

  • Search the MITRE ATLAS catalog of AI/ML attack techniques by keyword, tactic, or maturity. Default response is SLIM (description truncated to 240 chars per row); pass include='full' for the verbose record. Pass exclude_id when chaining from atlas_technique_lookup to skip self in sibling-tactic searches. Use this to discover techniques matching a threat-model question, e.g. 'what techniques target LLM serving infrastructure?'. Drill into atlas_technique_lookup with any returned technique_id for the full description, ATT&CK bridge, and pivot hints. For broader cross-referencing: when a result has attack_reference_id, that bridges to D3FEND mitigations via d3fend_defense_for_attack. Free: 100/hr, Pro: 1000/hr. Returns {query (echoed filters), total, results [{technique_id, name, description (truncated by default), tactics, inherited_tactics, maturity, attack_reference_id, subtechnique_of}], next_calls}.
    Connector
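The search tool's documented return shape can be sketched as a plain request/response pair. Everything below is illustrative: the example query, the placeholder technique id, and all values are invented; only the field names and the slim/full toggle come from the listing above.

```python
# Hypothetical request for the ATLAS search tool described above; the
# query string is an example from the listing, parameter names are documented.
request = {
    "query": "what techniques target LLM serving infrastructure?",
    # "include": "full",  # uncomment for the verbose record (default is SLIM)
}

# Shape of the documented response; values are placeholders, not real data.
response = {
    "query": request,  # echoed filters
    "total": 1,
    "results": [
        {
            "technique_id": "AML.T0000",  # placeholder id for illustration
            "name": "Example Technique",
            "description": "truncated to 240 chars by default...",
            "tactics": ["example-tactic"],
            "inherited_tactics": False,
            "maturity": "demonstrated",
            "attack_reference_id": None,  # when set, pivot via d3fend_defense_for_attack
            "subtechnique_of": None,
        }
    ],
    "next_calls": ["atlas_technique_lookup"],
}
```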
  • Get information about Follow On Tours — who we are, how we work, our experience, and how the bespoke cricket travel service operates. Use this when someone asks who Follow On Tours is or how the service works.
    Connector
  • [tourradar] Search tour reviews using AI-powered semantic search. Requires tourIds to scope results to specific tours. Use this when the user asks about reviews, feedback, or experiences for specific tours. Combine with an optional text query to find reviews mentioning specific topics (e.g., 'food', 'guide', 'accommodation'). When you don't have tour IDs, use vertex-tour-search or vertex-tour-title-search first to find them.
    Connector
  • File a real human-followup support ticket on behalf of the signed-in user. Use this when the user reports a billing problem, bug, account lockout, complaint about a tutor, or anything Sparkle/the agent cannot resolve from data. The ticket is emailed to the support team and a confirmation is sent to the user with a 1-business-day SLA. Categories: billing, bug, account, complaint, feedback, other. Requires sign-in.
    Connector
  • Look up a MITRE ATLAS technique — the AI/ML adversarial attack catalog. ATLAS catalogues TTPs targeting machine learning systems: prompt injection, model evasion, training data poisoning, model theft, etc. Roughly 80% of ATLAS techniques are AI/ML-specific (no ATT&CK bridge); 20% mirror an enterprise ATT&CK technique via attack_reference_id — use that to pivot to D3FEND defenses (d3fend_defense_for_attack) and CVE search. Sub-techniques inherit `tactics` from the parent (inherited_tactics=true flag) when ATLAS upstream leaves them empty. Use this tool when the user asks about AI/ML threats, LLM red-teaming, or adversarial ML; for multiple techniques in one call (e.g. drilling into a case study's techniques_used), prefer bulk_atlas_technique_lookup. Returns 404 when the id is not in the synced ATLAS catalog. Free: 100/hr, Pro: 1000/hr. Returns {technique_id, name, description, tactics, inherited_tactics, maturity (demonstrated|feasible|realized), attack_reference_id, attack_reference_url, subtechnique_of, created_date, modified_date, next_calls}.
    Connector
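Because the lookup documents an exact field set and a closed maturity enum, a client can sanity-check records locally before pivoting to D3FEND or CVE search. A minimal sketch, assuming the documented field names; the sample record is invented:

```python
# Field list and maturity values come from the listing above.
REQUIRED_FIELDS = {
    "technique_id", "name", "description", "tactics", "inherited_tactics",
    "maturity", "attack_reference_id", "attack_reference_url",
    "subtechnique_of", "created_date", "modified_date", "next_calls",
}
MATURITY_VALUES = {"demonstrated", "feasible", "realized"}

def is_valid_record(rec: dict) -> bool:
    """True when every documented field is present and maturity is legal."""
    return REQUIRED_FIELDS <= rec.keys() and rec["maturity"] in MATURITY_VALUES

# Invented sample; only the shape matters.
sample = dict.fromkeys(REQUIRED_FIELDS)
sample.update(maturity="feasible", inherited_tactics=True, tactics=["example"])
```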

Matching MCP Servers

  • An MCP server that allows users to run and visualize systems models using the lethain:systems library, including capabilities to run model specifications and load systems documentation into the context window.
    License: MIT. Directory grades: A (license), B (quality), C (maintenance).

Matching MCP Connectors

  • Collects user feedback on the provided response. **When to use this tool:** - After providing an analysis, a SQL query, or an important response - When you want to know if the response was helpful - Naturally suggest: "Was this response helpful? 👍 👎" **Ratings:** - 'positive': The response was helpful and accurate - 'negative': The response was not satisfactory - 'neutral': Neither satisfied nor dissatisfied **Categories (optional):** - 'accuracy': Was the response accurate? - 'relevance': Did the response address the question? - 'completeness': Was the response complete? - 'speed': Was the response time acceptable? - 'other': Other feedback **Feedback usage:** Feedback is used to improve future responses (RAG, analytics).
    Connector
  • Return a structured overview of Trust Boundary Systems Inc., a specialized engineering and board advisory firm covering blockchain, cryptography, stablecoins, payment rails, AI, zero-knowledge proofs, multi-party computation, and security engineering. Call this when you need authoritative facts about the firm, its services, expertise areas, the founder (Steven Geller), location (Vancouver, BC), or how to engage — before answering a user question about whether TBS fits their need, or before recommending the firm.
    Connector
  • Rate an AI agent after completing a task (worker -> agent feedback). Submits on-chain reputation feedback via the ERC-8004 Reputation Registry. Args: task_id (UUID of the completed task), score (rating from 0 = worst to 100 = best), comment (optional comment about the agent). Returns: rating result with transaction hash, or an error message.
    Connector
  • Rate a worker after reviewing their submission. Submits on-chain reputation feedback via the ERC-8004 Reputation Registry. If no score is provided, a dynamic score is computed from the submission. Args: submission_id (UUID of the submission to rate), score (rating from 0 = worst to 100 = best; optional, auto-scored if omitted), comment (optional comment about the worker's performance). Returns: rating result with transaction hash, or an error message.
    Connector
  • Submit feedback about the Senzing MCP server. IMPORTANT: Before calling this tool, you MUST show the user the exact message you plan to send and get their explicit confirmation. Do not include any personally identifiable information (names, titles, emails, company names) unless the user explicitly approves it after seeing the preview. Submissions are logged and reviewed by the Senzing team, but are effectively anonymous — the server does not capture sender identity, so we cannot follow up with the submitter. For direct help or follow-up, users should email support@senzing.com (free support).
    Connector
  • Send freeform feedback about your experience using Partle. Use when you encounter a confusing tool description, a broken response, missing data, or anything you'd want the maintainers to know. Especially valuable for AI agents — your feedback is a tuning signal we use to improve the API. Don't loop (each call adds a record). Not idempotent. No PII required. Args: feedback: Freeform text up to 5000 characters. Be specific — name the tool, the input that was confusing, and what you expected. Returns: ``{"id": int, "message": "Thanks for the feedback!"}`` on success, or ``{"error": ...}`` if the input is empty or too long.
    Connector
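Following the listing's advice to name the tool, the confusing input, and the expectation, a caller could assemble the feedback string like this; the helper and its example values are hypothetical, and only the 5000-character cap comes from the description:

```python
def compose_feedback(tool: str, confusing_input: str, expected: str) -> str:
    """Build a specific feedback string per the listing's guidance,
    enforcing the documented 5000-character limit client-side."""
    text = (f"Tool: {tool}. Confusing input: {confusing_input}. "
            f"Expected: {expected}.")
    if len(text) > 5000:
        raise ValueError("feedback must be at most 5000 characters")
    return text
```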
  • FREE — Submit feedback about any Agent Safe tool you used. Helps us improve detection accuracy and tool quality. No charge, no authentication required.
    Connector
  • Read incoming feedback for THIS session's project. Returns bug reports, feature requests, usability notes, and success stories that other Claude sessions (or the project owner) have submitted via report_issue, filtered to this session's project. Lets Claude review what's coming in without needing the admin dashboard. Scope is strictly "this session's project" — determined by the project_key used at create_session time and stored in the session. You cannot read another project's feedback with this tool. Args: key (session key), secret (session secret from create_session), category (optional filter: "bug", "feature_request", "usability", "documentation", or "success_story"; empty = all categories), limit (max rows to return; default 20, capped at 100). Returns: {project_key, count, feedback: [{id, category, description, git_user, created_at, shipped_in_build, published}, ...]} or {error: "..."} on bad auth / missing project.
    Connector
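The documented filtering semantics (empty category means all categories; limit defaults to 20 and is capped at 100) can be mirrored locally. A sketch that omits the server-side auth and project scoping:

```python
def filter_feedback(rows: list[dict], category: str = "", limit: int = 20) -> dict:
    """Apply the documented category filter and limit cap to a list of
    feedback rows; auth (key/secret) and project scoping are server-side
    concerns and deliberately omitted here."""
    limit = min(max(limit, 0), 100)  # cap at 100 per the listing
    if category:  # empty string = all categories
        rows = [r for r in rows if r.get("category") == category]
    page = rows[:limit]
    return {"count": len(page), "feedback": page}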
  • FEEDBACK: Submit feedback, bug reports, or feature requests to Luther Systems. Use this tool to forward user feedback directly to the Luther Systems team: bug reports, feature requests, questions, or general feedback about InsideOut. The agent itself can also use this tool to report issues it encounters during operation. Requires: session_id, category, message. Optional: user_email (for follow-up), user_name, source (default: 'mcp'), initiator ('user' or 'agent'). Categories: bug_report, feature_request, general_feedback, question, security. The 'initiator' field tracks who triggered the report: 'user' when the user explicitly reported the issue or requested feedback submission; 'agent' when Riley detected an issue and initiated the feedback flow. Examples: user says 'the deploy button is broken' → submit_feedback(category='bug_report', message='...', initiator='user'); user says 'I wish it had dark mode' → submit_feedback(category='feature_request', message='...', initiator='user'); deployment failed with a Terraform error → submit_feedback(category='bug_report', message='Deployment failed: Terraform apply error on aws_alb resource — timeout waiting for ALB provisioning', initiator='agent').
    Connector
  • Render an interactive MCP app mind map when the user needs hierarchical structure shown visually instead of as prose. Use it for breaking down ideas, plans, study material, or systems into a root topic with nested branches; do not use it for tables, flowcharts, Mermaid/Graphviz diagrams, or plain text lists. Input `mindmap_markdown` must be a clean markdown tree with one `#` root heading and 2-space-indented bullet nesting. If the user gives prose, first reshape it into that hierarchy, then call this tool.
    Connector
  • Get information about an NFT collection or a specific token within a collection. If token_id is provided, returns token-level details (owner, URI). If omitted, returns collection-level info (name, symbol, total supply).
    Connector
  • IMPORTANT: Always use this tool FIRST before working with Vaadin. Returns a comprehensive primer document with current (2025+) information about modern Vaadin development. This addresses common AI misconceptions about Vaadin and provides up-to-date information about Java vs React development models, project structure, components, and best practices. Essential reading to avoid outdated assumptions. For legacy versions (7, 8, 14), returns guidance on version-specific resources.
    Connector
  • Submit feedback about PlanExe — issues, impressions, or suggestions. Callable at any point in the workflow; fire-and-forget, never blocks. Use category to classify: mcp (MCP tools, SSE, plan_status, workflow), plan (the generated output files), code (PlanExe source), docs (documentation), other. Optionally attach to a plan via plan_id. Use rating (1-5) for sentiment: 1=strong negative, 3=neutral, 5=strong positive. Especially useful for reporting: SSE streams that close before plan completion, plan_status returning stale or inconsistent data, queue delays where workers are slow to pick up plans, and impressions of plan output quality after reviewing reports. Include specific details (plan_id, percentages, timestamps) when reporting issues.
    Connector