Glama

memory_hybrid_search

Combine keyword and semantic search using Reciprocal Rank Fusion to retrieve memories with improved accuracy and relevance.

Instructions

Perform a hybrid search combining keyword (FTS) and semantic (vector) search.

Uses Reciprocal Rank Fusion (RRF) to merge results from both search methods, providing better results than either method alone.

Args:
- query: Search query text
- semantic_weight: Weight for semantic results (0-1). Higher values favor semantic similarity. Keyword weight = 1 - semantic_weight. Default: 0.6 (60% semantic, 40% keyword)
- top_k: Maximum number of results to return (default: 10)
- min_score: Minimum combined score threshold (default: 0.0)
- metadata_filters: Optional metadata filters
- date_from: Optional date filter (ISO format or relative like "7d", "1m", "1y")
- date_to: Optional date filter (ISO format or relative)
- tags_any: Match memories with ANY of these tags (OR logic)
- tags_all: Match memories with ALL of these tags (AND logic)
- tags_none: Exclude memories with ANY of these tags (NOT logic)

Returns: Dictionary with count and list of results, each containing score and memory
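The weighted Reciprocal Rank Fusion described above can be sketched in Python. This is a hypothetical illustration of the documented semantics (weighted contributions of `1/(k + rank)` from each ranked list, with keyword weight equal to `1 - semantic_weight`), not the server's actual implementation; the function name, the `k = 60` smoothing constant, and the toy document IDs are assumptions.

```python
from collections import defaultdict

def rrf_fuse(keyword_ranked, semantic_ranked, semantic_weight=0.6, k=60):
    """Weighted Reciprocal Rank Fusion of two ranked lists of memory IDs.

    Each list contributes weight / (k + rank) for every item it contains;
    items found by both search methods accumulate both contributions.
    Hypothetical sketch -- the server's implementation may differ.
    """
    keyword_weight = 1.0 - semantic_weight
    scores = defaultdict(float)
    for rank, doc_id in enumerate(keyword_ranked, start=1):
        scores[doc_id] += keyword_weight / (k + rank)
    for rank, doc_id in enumerate(semantic_ranked, start=1):
        scores[doc_id] += semantic_weight / (k + rank)
    # Highest combined score first, mirroring the tool's ranked output
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

fused = rrf_fuse(["a", "b", "c"], ["b", "d", "a"])
# "a" and "b" appear in both ranked lists, so they outrank "c" and "d"
```

Because both methods vote for "a" and "b", those IDs surface first even though neither list alone ranks them identically; this is the sense in which RRF "provides better results than either method alone".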

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | | |
| semantic_weight | No | | |
| top_k | No | | |
| min_score | No | | |
| metadata_filters | No | | |
| date_from | No | | |
| date_to | No | | |
| tags_any | No | | |
| tags_all | No | | |
| tags_none | No | | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden and succeeds well. It explains the RRF algorithm, the mathematical relationship between the semantic and keyword weights (keyword weight = 1 - semantic_weight), the date format variations ('7d', '1m'), the tag logic (OR/AND/NOT), and the return structure. It could improve by mentioning error conditions or performance characteristics.
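The relative date formats noted above ('7d', '1m', '1y') could be handled by a small parser like the following. This is a hypothetical helper, not the server's code; in particular, the choice to approximate 'm' as 30 days and 'y' as 365 days is an assumption, as is the function name.

```python
import re
from datetime import datetime, timedelta

def parse_date_filter(value, now=None):
    """Parse an ISO date or a relative offset like '7d', '1m', '1y'.

    Hypothetical sketch of the documented date_from/date_to formats;
    months and years are approximated as 30 and 365 days respectively.
    """
    now = now or datetime.now()
    match = re.fullmatch(r"(\d+)([dmy])", value)
    if match:
        n = int(match.group(1))
        days_per_unit = {"d": 1, "m": 30, "y": 365}[match.group(2)]
        return now - timedelta(days=n * days_per_unit)
    # Fall back to an absolute ISO-format timestamp
    return datetime.fromisoformat(value)

cutoff = parse_date_filter("7d", now=datetime(2024, 1, 8))
# cutoff is 2024-01-01: seven days before the reference time
```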

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well structured, with clear Args and Returns sections, and front-loads the core purpose. While lengthy due to inline parameter documentation, that length is necessary given zero schema coverage; no sentence is redundant or wasted under the complexity constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given high complexity (hybrid algorithm, 10 parameters) and zero schema/annotation support, the description is remarkably complete. It covers algorithm mechanics, all parameter semantics with examples, default behaviors, and return value structure. Nothing critical is missing for invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to fully compensate. The Args section provides comprehensive semantics for all 10 parameters: ranges (0-1), formats (ISO or relative dates), logic (OR/AND/NOT), and defaults (0.6, 10, 0.0). This fully addresses the schema documentation gap.
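The OR/AND/NOT tag semantics credited to the description above can be expressed as simple set operations. This is a hedged sketch of the documented filter logic, not the server's implementation; the function name and arguments mirror the tool's tags_any/tags_all/tags_none parameters.

```python
def matches_tags(memory_tags, tags_any=None, tags_all=None, tags_none=None):
    """Apply OR / AND / NOT tag filters to one memory's tag list.

    Hypothetical sketch of the documented semantics; the actual
    server-side filtering may differ.
    """
    tags = set(memory_tags)
    if tags_any and not tags & set(tags_any):
        return False  # OR: at least one of tags_any must be present
    if tags_all and not set(tags_all) <= tags:
        return False  # AND: every tag in tags_all must be present
    if tags_none and tags & set(tags_none):
        return False  # NOT: no tag in tags_none may be present
    return True

# A memory tagged work+urgent passes tags_any=["urgent", "home"],
# but a memory tagged only work fails tags_all=["work", "urgent"]
```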

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a precise verb ('Perform') and clearly defines the resource ('hybrid search combining keyword (FTS) and semantic (vector) search'). It effectively distinguishes itself from the sibling tool memory_semantic_search by explicitly stating that it merges both methods.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains the benefit ('providing better results than either method alone'), implying when to use the tool, but it lacks explicit guidance on when to choose it over memory_semantic_search or other siblings. No 'when not to use' guidance or explicit alternative recommendations are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/agentic-box/memora'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.