Glama
132,654 tools. Last updated 2026-05-10 06:27

"A server for finding information about memory" matching MCP tools:

  • Returns information about safety features on Makuri, including age verification, content filtering, parental controls, and AI safety guardrails. Use when the user asks about child safety, content moderation, or how Makuri protects minors.
    Connector
  • Switch between local and remote DanNet servers on the fly. This tool allows you to change the DanNet server endpoint during runtime without restarting the MCP server. Useful for switching between development (local) and production (remote) servers. (A hedged call sketch follows this list.)
    Args:
    - server: Server to switch to. Options:
      - "local": Use localhost:3456 (development server)
      - "remote": Use wordnet.dk (production server)
      - Custom URL: Any valid URL starting with http:// or https://
    Returns: Dict with status information:
    - status: "success" or "error"
    - message: Description of the operation
    - previous_url: The URL that was previously active
    - current_url: The URL that is now active
    Example:
    # Switch to local development server
    result = switch_dannet_server("local")
    # Switch to production server
    result = switch_dannet_server("remote")
    # Switch to custom server
    result = switch_dannet_server("https://my-custom-dannet.example.com")
    Connector
  • Answer questions using the knowledge base (uploaded documents, handbooks, files). Use for QUESTIONS that need an answer synthesized from documents or messages. Returns an evidence pack with source citations, KG entities, and extracted numbers. (A hedged call sketch follows this list.)
    Modes:
    - 'auto' (default): Smart routing — works for most questions
    - 'rag': Semantic search across documents & messages
    - 'entity': Entity-centric queries (e.g., 'Tell me about [entity]')
    - 'relationship': Two-entity queries (e.g., 'How is [entity A] related to [entity B]?')
    Examples:
    - 'What did we discuss about the budget?' → knowledge.query
    - 'Tell me about [entity]' → knowledge.query mode=entity
    - 'How is [A] related to [B]?' → knowledge.query mode=relationship
    NOT for finding/listing files, threads, or links — use workspace.search for that.
    Connector
  • Semantic search across all extracted datasheets. Finds components matching natural language queries about specifications, features, or capabilities. Best for broad spec-based discovery across all parts (e.g. 'low-noise LDO with PSRR above 70dB'). Only searches datasheets that have been previously extracted — not all parts that exist. For finding specific parts by number, use search_parts instead.
    Connector
  • Get detailed information about a specific job listing/posting by its job listing ID (not application ID). Use this to view the full job posting details including description, salary, skills, and company info. For job application details, use get_application instead.
    Connector
  • Returns structured information about what the Recursive platform includes: features, AI model details, supported integrations, and what's included at every tier. Use for systematic feature comparison.
    Connector
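
The DanNet endpoint-switching tool above spells out its argument and return shape, so here is a minimal call sketch using the official MCP Python SDK. It assumes an already-connected ClientSession; only the tool name switch_dannet_server and its server argument come from the listing, while the helper function and result handling are illustrative.

```python
from mcp import ClientSession


async def switch_endpoint(session: ClientSession, server: str) -> None:
    # "server" may be "local", "remote", or a full http(s) URL, per the listing.
    result = await session.call_tool("switch_dannet_server", {"server": server})
    # The listing says the tool reports status, message, previous_url, and
    # current_url; here we just print any text content that came back.
    for item in result.content:
        if getattr(item, "type", None) == "text":
            print(item.text)


# Example intents mirroring the listing:
#   await switch_endpoint(session, "local")    # localhost:3456 development server
#   await switch_endpoint(session, "remote")   # wordnet.dk production server
#   await switch_endpoint(session, "https://my-custom-dannet.example.com")
```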
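Similarly, for the knowledge.query entry, a hedged sketch of mode-specific calls. The mode values and example questions come from the listing; the argument key names (query, mode) are assumptions, since the entry does not show the input schema.

```python
from mcp import ClientSession


async def ask_knowledge_base(session: ClientSession, question: str, mode: str = "auto") -> str:
    # Mode values from the listing: "auto", "rag", "entity", "relationship".
    # The argument keys "query" and "mode" are guesses; confirm against the
    # tool's input schema from session.list_tools() before relying on them.
    result = await session.call_tool("knowledge.query", {"query": question, "mode": mode})
    texts = [c.text for c in result.content if getattr(c, "type", None) == "text"]
    return "\n".join(texts)


# Usage patterns mirroring the listing's examples:
#   await ask_knowledge_base(session, "What did we discuss about the budget?")
#   await ask_knowledge_base(session, "Tell me about [entity]", mode="entity")
#   await ask_knowledge_base(session, "How is [A] related to [B]?", mode="relationship")
```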

Matching MCP Servers

  • Provides comprehensive A-share (Chinese stock market) data including stock information, historical prices, financial reports, macroeconomic indicators, technical analysis, and valuation metrics through the free Baostock data source.
    License: MIT (grade A) · Quality: - · Maintenance: D · Last updated: 24

Matching MCP Connectors

  • AI memory with 56 tools. Knowledge Graph, semantic search, OAuth 2.1 + Magic Link. Free tier.

  • VaultCrux Memory Core — 32 tools: knowledge, decisions, constraints, signals, coverage

  • Get full details for a single broker (agent) by their profile slug. Call this when the user asks for more information about a specific broker. Use the slug from search_brokers results.
    Connector
  • Composite server-side investigation tool. Pass a question and the server automatically:
    (1) detects intent (aggregation/temporal/ordering/knowledge-update/recall),
    (2) queries the entity index for structured facts,
    (3) builds a timeline for temporal questions,
    (4) retrieves memory chunks with the right scoring profile,
    (5) expands context around sparse hits,
    (6) derives counts/sums for aggregation,
    (7) assesses answerability, and
    (8) returns a recommendation.
    Use this as your FIRST tool for any non-trivial question — it does the multi-step investigation that would otherwise take 4-6 individual tool calls. The response includes structured facts, timeline, retrieved chunks, derived results, answerability assessment, and a recommendation for how to answer. (A hedged call sketch appears after this list.)
    Connector
  • Get full details for a single business (listing) by its slug. Call this when the user asks for more information about a specific business. Use the slug from search_businesses results.
    Connector
  • Explain the Guard product using CurrencyGuard's approved product and FAQ content. Use this for any question about what the Guard is, how it works, who it is for, how it compares to forwards or options, and for any legal, regulatory, accounting, or eligibility question. Do not answer those questions from memory — always call this tool.
    Connector
  • Search the web for any topic and get clean, ready-to-use content.
    Best for: Finding current information, news, facts, people, companies, or answering questions about any topic.
    Returns: Clean text content from top search results.
    Query tips: describe the ideal page, not keywords. "blog post comparing React and Vue performance" not "React vs Vue". Use category:people / category:company to search LinkedIn profiles / companies respectively. If highlights are insufficient, follow up with web_fetch_exa on the best URLs. (A hedged call sketch appears after this list.)
    Connector
  • Save your cognitive state for handoff to another agent. (A hedged call sketch appears after this list.)
    Include your investigation context:
    - What session/investigation is this part of?
    - What role/perspective were you taking?
    - Who might pick this up next? (another Claude, human, Claude Code?)
    Reference specific memories that matter:
    - Key discoveries (with memory IDs or quotes)
    - Critical evidence memories
    - Important questions that were raised
    - Hypotheses that were tested
    Before saving, organize your thoughts:
    1. PROBLEM: What were you investigating?
    2. DISCOVERED: What did you learn for certain? (reference the memories)
    3. HYPOTHESIS: What do you think is happening? (cite supporting memories)
    4. EVIDENCE: What memories support or contradict this?
    5. BLOCKED ON: What prevented further progress?
    6. NEXT STEPS: What should be investigated next?
    7. KEY MEMORIES: Which specific memories are essential for understanding?
    Example descriptions:
    "[API Timeout Investigation - 3 hour session] Investigating production API timeouts as code analyst. Found correlation with batch_size=100 due to hardcoded limit in batch_handler.py (see memory: 'MAX_BATCH_SIZE discovery'). Confirmed not Redis connection issue - monitoring showed only 43/200 connections used (memory: 'Redis connection analysis'). Earlier hypothesis about connection pool exhaustion (memory_id: abc-123) was disproven. Key insight came from comparing 99 vs 100 batch behavior (memory: 'batch threshold testing'). Blocked on: need production access to verify fix. Next: Deploy with MAX_BATCH_SIZE=200 to staging first. Essential memories for handoff: 'MAX_BATCH_SIZE discovery', 'Redis monitoring results', 'Production vs staging comparison'. Ready for handoff to SRE team for deployment."
    "[Memory System Debugging - From Claude Code perspective] Worked on scoring issues where recall wasn't finding recent memories. Discovered RRF scores (0.005-0.016) were below MCP threshold of 0.05 (memory: 'RRF scoring analysis'). Implemented weighted linear fusion to replace RRF (memory: 'fusion algorithm implementation'). Testing showed immediate improvement (memory: 'fusion testing results'). This builds on earlier investigation about recall failures (memory: 'user report of recall issues'). Critical memories for continuation: 'RRF scoring analysis', 'ADR-023 decision', 'fusion testing results'. Next agent should verify scoring with real queries."
    "[Context Save/Restore Bug Investigation - 4 hour debugging session with user] Started with user noticing list_contexts returned empty despite saved contexts existing. Investigation revealed two critical bugs: (1) list_contexts was using hybrid search for 'checkpoint' word instead of filtering by memory_type (memory: 'hybrid search misuse discovery'), (2) restore_context hardcoded limit of 10 memories despite contexts having 20+ (memory: 'hardcoded limit bug'). Root cause analysis showed save_context grabs 20 most recent memories regardless of relevance - fundamental design flaw (memory: 'save_context design flaw analysis'). EVIDENCE CHAIN: User reported empty list -> checked DB, contexts exist -> examined list_contexts code -> found hybrid search looking for word 'checkpoint' -> tested /memories endpoint with memory_type filter -> confirmed working -> implemented fix using direct endpoint. INSIGHTS: The narrative description is doing 90% of cognitive handoff work. Memories are supporting evidence, not primary carriers of understanding (memory: 'narrative vs memories insight'). This suggests doubling down on narrative richness rather than perfecting memory selection. CORRECTED UNDERSTANDING: Initially thought memories weren't being returned. Actually they were, just wrong ones - recent memories instead of relevant ones (memory: 'memory selection correction'). CRITICAL MEMORIES: 'hybrid search misuse discovery', 'save_context design flaw analysis', 'narrative vs memories insight', '/memories endpoint test results'. NEXT AGENT: Should implement Phase 2 - semantic search for relevant memories within investigation timeframe. Ready for handoff to any Claude agent for implementation."
    When referencing memories:
    - **RELIABLE** — Use memory IDs: "memory_id: abc-123" (direct lookup, always works)
    - **BEST-EFFORT** — Use descriptive phrases: "see memory: 'Redis connection analysis'" (uses search + substring matching, may not resolve if the memory isn't in top results)
    - Group related memories: "Essential memories: 'X', 'Y', 'Z'"
    **Prefer memory_id references** whenever you have the UUID. Semantic phrase references are a convenience that works most of the time, but may silently fail to resolve. The response will tell you how many references resolved so you can retry with UUIDs if needed.
    Args:
    - name: Name for this context checkpoint
    - description: Detailed cognitive handoff description with memory references
    - ctx: MCP context (automatically provided)
    Returns: Dict with success status, context_id, and memories included
    Connector
  • Returns general information about the Makuri platform, including mission, target users, founding details, and company information. Use this tool when the user asks 'what is Makuri', 'who made it', or wants a general overview.
    Connector
  • Submit feedback about the Senzing MCP server. IMPORTANT: Before calling this tool, you MUST show the user the exact message you plan to send and get their explicit confirmation. Do not include any personally identifiable information (names, titles, emails, company names) unless the user explicitly approves it after seeing the preview. Submissions are logged and reviewed by the Senzing team, but are effectively anonymous — the server does not capture sender identity, so we cannot follow up with the submitter. For direct help or follow-up, users should email support@senzing.com (free support).
    Connector
  • Find conflicting information across the user's memory. Returns groups of artefacts that contradict each other on the same topic. Use after gathering evidence for an answer — if your evidence sources disagree, this reveals which version is correct (typically the most recent).
    Connector
  • Get content recommendations for an AWS documentation page. (A hedged call sketch appears after this list.)
    ## Usage
    This tool provides recommendations for related AWS documentation pages based on a given URL. Use it to discover additional relevant content that might not appear in search results. URL must be from the docs.aws.amazon.com domain.
    ## Recommendation Types
    The recommendations include four categories:
    1. **Highly Rated**: Popular pages within the same AWS service
    2. **New**: Recently added pages within the same AWS service - useful for finding newly released features
    3. **Similar**: Pages covering similar topics to the current page
    4. **Journey**: Pages commonly viewed next by other users
    ## When to Use
    - After reading a documentation page to find related content
    - When exploring a new AWS service to discover important pages
    - To find alternative explanations of complex concepts
    - To discover the most popular pages for a service
    - To find newly released information by using a service's welcome page URL and checking the **New** recommendations
    ## Finding New Features
    To find newly released information about a service:
    1. Find any page belonging to that service; typically you can try the welcome page
    2. Call this tool with that URL
    3. Look specifically at the **New** recommendation type in the results
    ## Result Interpretation
    Each recommendation includes:
    - url: The documentation page URL
    - title: The page title
    - context: A brief description (if available)
    Connector
  • Re-check a specific control after applying a fix. Confirms whether the finding is resolved.
    Connector
  • IMPORTANT: Always use this tool FIRST before working with Vaadin. Returns a comprehensive primer document with current (2025+) information about modern Vaadin development. This addresses common AI misconceptions about Vaadin and provides up-to-date information about Java vs React development models, project structure, components, and best practices. Essential reading to avoid outdated assumptions. For legacy versions (7, 8, 14), returns guidance on version-specific resources.
    Connector
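
For the composite server-side investigation entry above, a sketch of using it as the first call for a question. The entry does not show the tool's registered name, so "investigate" below is a placeholder to be replaced after checking session.list_tools(); the response fields named in the comment are those the listing says are returned.

```python
from mcp import ClientSession


async def investigate_first(session: ClientSession, question: str) -> list[str]:
    # "investigate" is a placeholder: the listing does not show the registered
    # tool name, so discover the real one with session.list_tools() first.
    result = await session.call_tool("investigate", {"question": question})
    # Per the listing, the response bundles structured facts, a timeline,
    # retrieved chunks, derived counts/sums, an answerability assessment,
    # and a recommendation for how to answer.
    return [c.text for c in result.content if getattr(c, "type", None) == "text"]
```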
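For the web search entry, a sketch of the search-then-fetch pattern it recommends. web_fetch_exa is named in the entry itself; the search tool name web_search_exa and both argument keys (query, url) are assumptions.

```python
from mcp import ClientSession


async def search_then_fetch(session: ClientSession, described_page: str,
                            url_to_expand: str | None = None) -> None:
    # Phrase the query as the ideal page ("blog post comparing React and Vue
    # performance"), not as keywords. "web_search_exa" and the "query"/"url"
    # argument keys are assumptions; "web_fetch_exa" is named in the listing.
    search = await session.call_tool("web_search_exa", {"query": described_page})
    for c in search.content:
        if getattr(c, "type", None) == "text":
            print(c.text)

    if url_to_expand:  # highlights insufficient: pull the full page
        fetched = await session.call_tool("web_fetch_exa", {"url": url_to_expand})
        for c in fetched.content:
            if getattr(c, "type", None) == "text":
                print(c.text)
```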
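For the cognitive-handoff entry, a sketch that builds a description following the PROBLEM .. KEY MEMORIES outline and saves it. The name and description arguments are documented in the entry (ctx is injected server-side); the tool name save_context is inferred from the entry's own prose, and the checkpoint content is purely illustrative.

```python
from mcp import ClientSession

# Illustrative handoff note following the PROBLEM .. KEY MEMORIES outline above.
HANDOFF_NOTE = """\
[Example Investigation - handoff checkpoint]
PROBLEM: what was being investigated.
DISCOVERED: what is known for certain (memory_id: abc-123).
HYPOTHESIS: current best explanation, with supporting memories.
EVIDENCE: memories that support or contradict it.
BLOCKED ON: what prevented further progress.
NEXT STEPS: what to investigate next.
KEY MEMORIES: 'X', 'Y', 'Z'
"""


async def save_handoff(session: ClientSession) -> None:
    # "save_context" as the tool name is inferred from the entry's prose;
    # confirm with session.list_tools(). ctx is injected by the server.
    result = await session.call_tool(
        "save_context",
        {
            "name": "example-investigation-handoff",  # illustrative checkpoint name
            "description": HANDOFF_NOTE,
        },
    )
    # The entry says the result carries success status, context_id, and how many
    # memory references resolved, so unresolved phrase references can be retried
    # with memory_id UUIDs.
    for c in result.content:
        if getattr(c, "type", None) == "text":
            print(c.text)
```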
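Finally, for the AWS documentation recommendations entry, a sketch of the "Finding New Features" flow. It assumes the recommendations tool is registered as recommend with a url argument, which the entry implies but does not spell out; the example URL is just a docs.aws.amazon.com welcome page.

```python
from mcp import ClientSession


async def find_new_features(session: ClientSession, welcome_page_url: str) -> None:
    # Assumes the recommendations tool is registered as "recommend" with a
    # "url" argument; the URL must be on the docs.aws.amazon.com domain.
    result = await session.call_tool("recommend", {"url": welcome_page_url})
    # Each recommendation carries url/title/context; per the listing, scan the
    # "New" category for recently released features.
    for c in result.content:
        if getattr(c, "type", None) == "text":
            print(c.text)


# e.g. await find_new_features(session, "https://docs.aws.amazon.com/lambda/latest/dg/welcome.html")
```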