Glama
127,264 tools. Last updated 2026-05-05 12:27

"Techniques for Enhancing Memory and Augmenting Cognitive Thinking" matching MCP tools:

  • Search the MITRE ATLAS catalog of AI/ML attack techniques by keyword, tactic, or maturity. Default response is SLIM (description truncated to 240 chars per row); pass include='full' for the verbose record. Pass exclude_id when chaining from atlas_technique_lookup to skip self in sibling-tactic searches. Use this to discover techniques matching a threat-model question, e.g. 'what techniques target LLM serving infrastructure?'. Drill into atlas_technique_lookup with any returned technique_id for the full description, ATT&CK bridge, and pivot hints. For broader cross-referencing: when a result has attack_reference_id, that bridges to D3FEND mitigations via d3fend_defense_for_attack. Free: 100/hr, Pro: 1000/hr. Returns {query (echoed filters), total, results [{technique_id, name, description (truncated by default), tactics, inherited_tactics, maturity, attack_reference_id, subtechnique_of}], next_calls}. (An illustrative search-then-lookup call is sketched after this list.)
    Connector
  • Look up a MITRE ATLAS technique — the AI/ML adversarial attack catalog. ATLAS catalogues TTPs targeting machine learning systems: prompt injection, model evasion, training data poisoning, model theft, etc. Roughly 80% of ATLAS techniques are AI/ML-specific (no ATT&CK bridge); 20% mirror an enterprise ATT&CK technique via attack_reference_id — use that to pivot to D3FEND defenses (d3fend_defense_for_attack) and CVE search. Sub-techniques inherit `tactics` from the parent (inherited_tactics=true flag) when ATLAS upstream leaves them empty. Use this tool when the user asks about AI/ML threats, LLM red-teaming, or adversarial ML; for multiple techniques in one call (e.g. drilling into a case study's techniques_used), prefer bulk_atlas_technique_lookup. Returns 404 when the id is not in the synced ATLAS catalog. Free: 100/hr, Pro: 1000/hr. Returns {technique_id, name, description, tactics, inherited_tactics, maturity (demonstrated|feasible|realized), attack_reference_id, attack_reference_url, subtechnique_of, created_date, modified_date, next_calls}.
    Connector
  • List available AI models grouped by thinking level (low/medium/high). Shows default models, credit costs, capabilities for each tier. Use this before consult to understand model options.
    Connector
  • Save your cognitive state for handoff to another agent. Include your investigation context:
    - What session/investigation is this part of?
    - What role/perspective were you taking?
    - Who might pick this up next? (another Claude, a human, Claude Code?)
    Reference specific memories that matter:
    - Key discoveries (with memory IDs or quotes)
    - Critical evidence memories
    - Important questions that were raised
    - Hypotheses that were tested
    Before saving, organize your thoughts:
    1. PROBLEM: What were you investigating?
    2. DISCOVERED: What did you learn for certain? (reference the memories)
    3. HYPOTHESIS: What do you think is happening? (cite supporting memories)
    4. EVIDENCE: Which memories support or contradict this?
    5. BLOCKED ON: What prevented further progress?
    6. NEXT STEPS: What should be investigated next?
    7. KEY MEMORIES: Which specific memories are essential for understanding?
    Example descriptions:
    "[API Timeout Investigation - 3 hour session] Investigating production API timeouts as code analyst. Found correlation with batch_size=100 due to a hardcoded limit in batch_handler.py (see memory: 'MAX_BATCH_SIZE discovery'). Confirmed it is not a Redis connection issue - monitoring showed only 43/200 connections used (memory: 'Redis connection analysis'). Earlier hypothesis about connection pool exhaustion (memory_id: abc-123) was disproven. Key insight came from comparing 99 vs 100 batch behavior (memory: 'batch threshold testing'). Blocked on: need production access to verify the fix. Next: deploy with MAX_BATCH_SIZE=200 to staging first. Essential memories for handoff: 'MAX_BATCH_SIZE discovery', 'Redis monitoring results', 'Production vs staging comparison'. Ready for handoff to the SRE team for deployment."
    "[Memory System Debugging - From Claude Code perspective] Worked on scoring issues where recall wasn't finding recent memories. Discovered RRF scores (0.005-0.016) were below the MCP threshold of 0.05 (memory: 'RRF scoring analysis'). Implemented weighted linear fusion to replace RRF (memory: 'fusion algorithm implementation'). Testing showed immediate improvement (memory: 'fusion testing results'). This builds on an earlier investigation of recall failures (memory: 'user report of recall issues'). Critical memories for continuation: 'RRF scoring analysis', 'ADR-023 decision', 'fusion testing results'. Next agent should verify scoring with real queries."
    "[Context Save/Restore Bug Investigation - 4 hour debugging session with user] Started with the user noticing list_contexts returned empty despite saved contexts existing. Investigation revealed two critical bugs: (1) list_contexts was using hybrid search for the word 'checkpoint' instead of filtering by memory_type (memory: 'hybrid search misuse discovery'); (2) restore_context hardcoded a limit of 10 memories despite contexts having 20+ (memory: 'hardcoded limit bug'). Root cause analysis showed save_context grabs the 20 most recent memories regardless of relevance - a fundamental design flaw (memory: 'save_context design flaw analysis'). EVIDENCE CHAIN: user reported empty list -> checked DB, contexts exist -> examined list_contexts code -> found hybrid search looking for the word 'checkpoint' -> tested the /memories endpoint with a memory_type filter -> confirmed working -> implemented a fix using the direct endpoint. INSIGHTS: the narrative description does 90% of the cognitive handoff work; memories are supporting evidence, not the primary carriers of understanding (memory: 'narrative vs memories insight'). This suggests doubling down on narrative richness rather than perfecting memory selection. CORRECTED UNDERSTANDING: initially thought memories weren't being returned; actually they were, just the wrong ones - recent memories instead of relevant ones (memory: 'memory selection correction'). CRITICAL MEMORIES: 'hybrid search misuse discovery', 'save_context design flaw analysis', 'narrative vs memories insight', '/memories endpoint test results'. NEXT AGENT: should implement Phase 2 - semantic search for relevant memories within the investigation timeframe. Ready for handoff to any Claude agent for implementation."
    When referencing memories:
    - RELIABLE: use memory IDs, e.g. "memory_id: abc-123" (direct lookup, always works)
    - BEST-EFFORT: use descriptive phrases, e.g. "see memory: 'Redis connection analysis'" (search plus substring matching; may not resolve if the memory isn't in the top results)
    - Group related memories: "Essential memories: 'X', 'Y', 'Z'"
    Prefer memory_id references whenever you have the UUID. Semantic phrase references are a convenience that works most of the time but may silently fail to resolve. The response tells you how many references resolved so you can retry with UUIDs if needed.
    Args:
    - name: name for this context checkpoint
    - description: detailed cognitive handoff description with memory references
    - ctx: MCP context (automatically provided)
    Returns: dict with success status, context_id, and the memories included. (An illustrative save_context call is sketched after this list.)
    Connector
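
The sketch below illustrates the ATLAS search-then-lookup chain described in the first two entries above. It is a minimal sketch, not a definitive client: call_tool stands in for whatever MCP client you use, the keyword-search tool's name (atlas_search) is assumed because the listing does not state it, and the argument name passed to d3fend_defense_for_attack is likewise an assumption. The technique_id, attack_reference_id, results, and total fields come from the listing.

    # Minimal sketch. `call_tool(name, arguments)` is a hypothetical stand-in for
    # an MCP client call; "atlas_search" is an assumed name for the search tool.
    def explore_llm_serving_threats(call_tool):
        # Keyword search; default SLIM output truncates each description to 240 chars.
        search = call_tool("atlas_search", {"query": "LLM serving infrastructure"})
        for row in search["results"]:
            # Drill into the full record: description, ATT&CK bridge, pivot hints.
            tech = call_tool("atlas_technique_lookup",
                             {"technique_id": row["technique_id"]})
            # Roughly 20% of techniques bridge to enterprise ATT&CK; pivot to
            # D3FEND mitigations when the bridge exists (argument name assumed).
            if tech.get("attack_reference_id"):
                call_tool("d3fend_defense_for_attack",
                          {"attack_id": tech["attack_reference_id"]})
        return search["total"]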
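
The following sketch relates to the cognitive-state checkpoint entry above. The tool name save_context is taken from that entry's own example text and may differ in practice; the name and description arguments and the success/context_id return fields come from the listing, and the sample description condenses the listing's API-timeout example.

    # Minimal sketch; `call_tool` is a hypothetical MCP-client stand-in and
    # "save_context" is an assumed tool name.
    def checkpoint_investigation(call_tool):
        handoff = call_tool("save_context", {
            "name": "api-timeout-investigation",
            "description": (
                "[API Timeout Investigation - 3 hour session] "
                "PROBLEM: production API timeouts. "
                "DISCOVERED: correlation with batch_size=100 in batch_handler.py "
                "(memory: 'MAX_BATCH_SIZE discovery'). "
                "HYPOTHESIS: hardcoded batch limit; connection pool exhaustion "
                "was disproven (memory_id: abc-123). "
                "NEXT STEPS: deploy MAX_BATCH_SIZE=200 to staging first. "
                "KEY MEMORIES: 'MAX_BATCH_SIZE discovery', 'Redis monitoring results'."
            ),
        })
        # The response reports how many memory references resolved; prefer
        # memory_id UUIDs and retry with them if phrase references fail.
        return handoff["success"], handoff["context_id"]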

Matching MCP Servers

Matching MCP Connectors

  • Find relevant Smart‑Thinking memories fast. Fetch full entries by ID to get complete context. Spee…

  • AI memory with 56 tools. Knowledge Graph, semantic search, OAuth 2.1 + Magic Link. Free tier.

  • Permanently delete a stored memory by its UUID. This is a hard delete for GDPR right-to-erasure compliance. The memory is removed from both the vector store and the database. This action cannot be undone.
    Connector
  • Generate and save a complete Talent-Augmenting OS profile from assessment data. Call this after talent_assess_score to create the profile file. Takes the computed scores, demographic info, goals, task classifications, and preferences collected during the assessment conversation. Returns the generated profile and saves it to disk.
    Connector
  • Bulk ATLAS technique lookup — retrieve full records for up to 50 techniques in a single request instead of N separate atlas_technique_lookup calls. Designed as the natural follow-up to atlas_case_study_lookup, whose techniques_used array can be passed directly. Each item is the same shape as atlas_technique_lookup, including parent-tactics inheritance for sub-techniques (inherited_tactics=true flag) and per-item next_calls (D3FEND bridge when attack_reference_id present, sibling-technique search by tactic, parent lookup for sub-techniques). Free: 100/hr (1 per item), Pro: 1000/hr. Returns {results [{technique_id, status (ok|not_found|invalid_format), technique, error}], total, successful, failed, partial, summary}. (An illustrative chained call is sketched after this list.)
    Connector
  • Load a Talent-Augmenting OS profile by name. Returns the full profile with expertise map, calibration settings, task classification, and red lines. Use this at the start of every conversation.
    Connector
  • Start a Talent-Augmenting OS onboarding assessment. Returns the full assessment protocol with all questions, behavioural anchors, and instructions for how to run the assessment conversationally. The chatbot uses this to ask questions one at a time, collect answers, then call talent_assess_score and talent_assess_create_profile to compute scores and save the profile. Call this at the beginning of any onboarding conversation.
    Connector
  • Search arXiv for academic papers in computer science, machine learning, AI, physics, and mathematics. Returns paper titles, authors, abstracts, submission dates, and direct PDF download links. Use for researching algorithms, ML techniques, or emerging CS topics.
    Connector
  • Search clinical trials related to a health condition via PubMed. Finds clinical trial publications matching the condition and optional intervention. Returns trial titles, authors, and PubMed IDs. Args: condition: The health condition or disease (e.g. 'type 2 diabetes', 'breast cancer', 'depression'). intervention: Optional treatment or intervention to include in the search (e.g. 'metformin', 'cognitive behavioral therapy'). limit: Maximum number of results to return (default 20, max 100). (An example call is sketched after this list.)
    Connector
  • Advanced prompt injection detector. Scans text for 50+ known jailbreak techniques: DAN/STAN/DUDE, role-play bypass, system prompt leaks, delimiter injection, safety bypass, indirect injection via documents, base64 smuggling, unicode obfuscation, and chain-of-thought manipulation. Use this BEFORE passing untrusted text to an LLM. No authentication required. (A guard-pattern sketch appears after this list.)
    Connector
  • Return the dossier projection for a city, in the requested cognitive lens. Defaults to the synthesis projection (the multidimensional view that holds all lenses in superposition and names the dialectics). Pass a single-lens value to get the focused cognitive position — useful when the agent is acting on behalf of a user with a specific stake (developer underwriting, investor thesis, attorney precedent search, resident orientation).
    Connector
  • Return the dossier projection for a corridor, in the requested cognitive lens. Same lens enum and default as describe_place. Corridor projections surface cross-municipal dialectics and shared-infrastructure dynamics that no single place dossier captures.
    Connector
  • Search BC curriculum (K-12) for standards, competencies, content items, and assessment resources using full-text search. Returns structured results with source metadata. Args: - query (string): Natural language search query (e.g., 'empathetic design thinking', 'coding and computational thinking') - subject (string, optional): Filter by subject slug (e.g., 'adst', 'science') - grade (integer, optional): Filter by grade level (0=K, 1-12) - content_type (string, optional): Filter by content type ('big_idea', 'competency', 'content_item', 'elaboration', 'assessment', 'all') - limit (integer, optional): Max results (default 10, max 50) Returns: Matching curriculum elements with source type, course, subject, and grade metadata. (An example query is sketched after this list.)
    Connector
  • Get the Talent-Augmenting OS calibration settings for a user. Returns a compact JSON block suitable for injecting into any LLM system prompt. Includes friction levels, coaching domains, red lines, and interaction preferences.
    Connector
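
The sketch below relates to the bulk ATLAS technique lookup entry above, chained from a case-study lookup as that entry suggests. call_tool is a hypothetical MCP-client stand-in; the argument names passed to atlas_case_study_lookup and bulk_atlas_technique_lookup are assumptions, while the per-item status values (ok, not_found, invalid_format) and the successful/failed counters come from the listing.

    # Minimal sketch; both argument names are assumptions.
    def enrich_case_study(call_tool, case_study_id):
        case = call_tool("atlas_case_study_lookup", {"case_study_id": case_study_id})
        # Pass the case study's techniques_used array directly; 50 items max per call.
        bulk = call_tool("bulk_atlas_technique_lookup",
                         {"technique_ids": case["techniques_used"][:50]})
        for item in bulk["results"]:
            if item["status"] == "ok":
                print(item["technique"]["technique_id"], item["technique"]["name"])
            else:  # not_found or invalid_format
                print(item["technique_id"], item.get("error"))
        return bulk["successful"], bulk["failed"]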
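
An example call for the PubMed clinical-trials search entry above. The condition, intervention, and limit arguments (default 20, max 100) mirror that entry's Args; the tool name used here is an assumption.

    # Minimal sketch; "search_clinical_trials" is an assumed tool name.
    def find_trials(call_tool):
        return call_tool("search_clinical_trials", {
            "condition": "type 2 diabetes",
            "intervention": "metformin",  # optional
            "limit": 20,                  # default 20, max 100
        })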
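
A guard-pattern sketch for the prompt injection detector entry above, which advises scanning untrusted text before it reaches an LLM. The tool name and the response field checked here are assumptions; only the scan-first flow comes from the listing.

    # Minimal sketch; tool name and response field are assumptions.
    def answer_from_untrusted_text(call_tool, generate, untrusted_text):
        verdict = call_tool("detect_prompt_injection", {"text": untrusted_text})
        if verdict.get("detected"):  # assumed response field
            return "Rejected: possible prompt injection in the supplied text."
        # Pass the text to the model only after the scan comes back clean.
        return generate(untrusted_text)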
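
An example query for the BC curriculum search entry above, mirroring its documented Args; the argument values reuse that entry's own examples and the tool name is an assumption.

    # Minimal sketch; "search_bc_curriculum" is an assumed tool name.
    def search_adst_competencies(call_tool):
        return call_tool("search_bc_curriculum", {
            "query": "coding and computational thinking",
            "subject": "adst",             # optional subject slug
            "grade": 6,                     # 0 = K, grades 1-12
            "content_type": "competency",   # or big_idea, content_item, ...
            "limit": 10,                    # default 10, max 50
        })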