search_memory

Search stored memories using BM25 full-text search, weighted by importance and recency. Optionally filter by tags to narrow results.

Instructions

Search memories using BM25 full-text search with importance and recency weighting. Supports optional tag filtering to narrow scope.

Input Schema

Name   Required  Description                                   Default
query  Yes       Search query — natural language or keywords   -
limit  No        Max results to return                         5
tags   No        Only search memories with these tags          -
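As a sketch, a well-formed set of arguments might look like the following (the query text and tag name are invented for illustration):

```typescript
// Hypothetical search_memory arguments; values are invented for illustration.
const args = {
  query: 'preferred code style',   // required: natural language or keywords
  limit: 3,                        // optional: defaults to 5 when omitted
  tags: ['preferences'],           // optional: restrict search to tagged memories
};

console.log(JSON.stringify(args));
```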

Implementation Reference

  • The 'search_memory' tool handler — parses query/limit/tags arguments, calls the BM25 search function, touches accessed memories, and formats results.
    case 'search_memory': {
      const query = String(a['query'] ?? '').trim();
      if (!query) return err('query is required');
      const limit = Number.isFinite(Number(a['limit'])) ? Math.max(1, Number(a['limit'])) : 5;
      const tags = Array.isArray(a['tags']) ? (a['tags'] as string[]) : undefined;
    
      const results = search(store.all(), query, { limit, tags });
      if (results.length === 0) {
        return ok(`No memories found matching "${query}".`);
      }
    
      results.forEach(r => store.touch(r.memory.key));
    
      const lines = results.map((r, i) => {
        const m = r.memory;
        const tagStr = m.tags.length ? `[${m.tags.join(', ')}]` : '';
        return `${i + 1}. ${m.key}  (importance: ${m.importance}/10${tagStr ? '  ' + tagStr : ''})\n   ${m.content}`;
      });
    
      return ok(`Found ${results.length} result${results.length !== 1 ? 's' : ''} for "${query}":\n\n${lines.join('\n\n')}`);
    }
  • src/index.ts:54-71 (registration): the 'search_memory' tool registered in the ListToolsRequestSchema handler, exposing it as an MCP tool, with inputSchema defining query (required string), limit (optional number), and tags (optional array of strings).
    {
      name: 'search_memory',
      description:
        'Search memories using BM25 full-text search with importance and recency weighting. ' +
        'Supports optional tag filtering to narrow scope.',
      inputSchema: {
        type: 'object',
        properties: {
          query: { type: 'string', description: 'Search query — natural language or keywords' },
          limit: { type: 'number', description: 'Max results to return (default: 5)' },
          tags: {
            type: 'array',
            items: { type: 'string' },
            description: 'Only search memories with these tags',
          },
        },
        required: ['query'],
      },
    }
  • The core BM25 search function used by search_memory — performs full-text scoring with importance weighting, recency decay, and tag filtering.
    export function search(
      memories: Memory[],
      query: string,
      opts: { limit?: number; tags?: string[] } = {}
    ): SearchResult[] {
      const { limit = 10, tags } = opts;
    
      const candidates = tags?.length
        ? memories.filter(m => tags.some(t => m.tags.includes(t)))
        : memories;
    
      if (candidates.length === 0) return [];
    
      const queryTerms = tokenize(query);
      if (queryTerms.length === 0) return [];
    
      // Pre-tokenize all documents
      const tokenized = candidates.map(m => ({ m, terms: tokenize(buildDocText(m)) }));
    
      // Compute document frequency for IDF
      const N = candidates.length;
      const df = new Map<string, number>();
      queryTerms.forEach(qt => {
        const count = tokenized.filter(({ terms }) => terms.includes(qt)).length;
        df.set(qt, count);
      });
    
      const avgLen = tokenized.reduce((a, { terms }) => a + terms.length, 0) / N;
      const k1 = 1.5;
      const b = 0.75;
    
      const scored = tokenized.map(({ m, terms }) => {
        const tf = new Map<string, number>();
        terms.forEach(t => tf.set(t, (tf.get(t) ?? 0) + 1));
    
        let bm25 = 0;
        for (const qt of queryTerms) {
          const freq = tf.get(qt) ?? 0;
          if (freq === 0) continue;
          const n = df.get(qt) ?? 0;
          const idf = Math.log((N - n + 0.5) / (n + 0.5) + 1);
          const tfNorm = (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * (terms.length / avgLen)));
          bm25 += idf * tfNorm;
        }
    
        // Exact key match boost
        const exactBoost = m.key.toLowerCase().includes(query.toLowerCase()) ? 2 : 0;
        // Tag exact match boost
        const tagBoost = m.tags.some(t => queryTerms.includes(t)) ? 1 : 0;
        // Importance weight: shifts score ±25%
        const importanceW = 1 + (m.importance - 5) * 0.05;
        // Gentle recency decay (0.003/day, half-life ≈ 230 days)
        const daysSince = (Date.now() - new Date(m.updatedAt).getTime()) / 86_400_000;
        const recency = Math.exp(-daysSince * 0.003);
    
        const score = (bm25 + exactBoost + tagBoost) * importanceW * recency;
        const matchType: SearchResult['matchType'] = exactBoost > 0 ? 'exact' : 'keyword';
        return { memory: m, score, matchType };
      });
    
      return scored
        .filter(r => r.score > 0)
        .sort((a, b) => b.score - a.score)
        .slice(0, limit);
    }
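To make the scoring above concrete, here is a toy run of the BM25 core on two tiny documents. The tokenize here is a naive lowercase word split, an assumption for illustration; the real tokenize, buildDocText, and the boost/decay factors are omitted:

```typescript
// Naive tokenizer (assumption): lowercase, split on non-word characters.
const tokenize = (s: string): string[] => s.toLowerCase().split(/\W+/).filter(Boolean);

const docs = ['the user prefers dark mode', 'the project uses strict typescript'];
const queryTerms = tokenize('dark mode');

const tokenized = docs.map(d => tokenize(d));
const N = docs.length;
const avgLen = tokenized.reduce((a, t) => a + t.length, 0) / N;
const k1 = 1.5;
const b = 0.75;

// Same BM25 formula as the search function above.
const scores = tokenized.map(terms => {
  const tf = new Map<string, number>();
  terms.forEach(t => tf.set(t, (tf.get(t) ?? 0) + 1));
  let score = 0;
  for (const qt of queryTerms) {
    const freq = tf.get(qt) ?? 0;
    if (freq === 0) continue;
    const n = tokenized.filter(t => t.includes(qt)).length; // document frequency
    const idf = Math.log((N - n + 0.5) / (n + 0.5) + 1);
    score += (idf * freq * (k1 + 1)) / (freq + k1 * (1 - b + b * (terms.length / avgLen)));
  }
  return score;
});

console.log(scores); // doc 0 matches both query terms; doc 1 matches none
```

Because neither query term appears in the second document, its score is zero and it would be dropped by the `score > 0` filter above.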
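Similarly, the handler's result formatting (first bullet above) can be sketched with invented sample data; MiniMemory is a hypothetical reduced shape covering only the fields the formatter reads:

```typescript
// Reduced, hypothetical Memory shape; sample memories are invented.
type MiniMemory = { key: string; content: string; importance: number; tags: string[] };

const results: { memory: MiniMemory }[] = [
  { memory: { key: 'user.editor', content: 'Prefers vim keybindings', importance: 7, tags: ['preferences'] } },
  { memory: { key: 'project.lang', content: 'Main codebase is TypeScript', importance: 5, tags: [] } },
];

// Same formatting expression as the handler above.
const lines = results.map((r, i) => {
  const m = r.memory;
  const tagStr = m.tags.length ? `[${m.tags.join(', ')}]` : '';
  return `${i + 1}. ${m.key}  (importance: ${m.importance}/10${tagStr ? '  ' + tagStr : ''})\n   ${m.content}`;
});

console.log(lines.join('\n\n'));
```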
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully bears the burden of transparency. It discloses the BM25 algorithm and importance/recency weighting, which are behavioral details beyond the schema. However, it does not disclose the handler's side effect of touching matched memories (updating their access metadata), nor details such as result format or pagination.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core action, and free of extraneous information. Every phrase earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description could be more complete by mentioning return format (e.g., relevance score) or error behavior. It adequately covers the main purpose and tag filtering but lacks guidance on handling edge cases or large result sets.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, so baseline is 3. The description adds context for tag filtering ('Supports optional tag filtering to narrow scope') but does not significantly enhance meaning beyond the schema's parameter descriptions. The query parameter description is adequate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Search memories using BM25 full-text search with importance and recency weighting', providing a specific verb and resource. It distinguishes search from listing or retrieval tools but does not explicitly differentiate from siblings like 'list_memories' or 'get_relevant_context'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for content-based search but offers no explicit guidance on when to use this tool versus alternatives like 'list_memories' or 'get_relevant_context'. No when-not-to-use or alternative mentions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
