
Zilliz MCP Server

Official
by zilliztech

hybrid_search

Search vector databases by combining semantic similarity with structured filters, then refine the results with a reranking strategy to retrieve the most relevant data.

Instructions

Search for entities based on vector similarity and scalar filtering, then rerank the results using a specified strategy.

Args:
    cluster_id: ID of the cluster
    region_id: ID of the cloud region hosting the cluster
    endpoint: The cluster endpoint URL. Can be obtained by calling describe_cluster and using the connect_address field
    collection_name: The name of the collection to which this operation applies
    search_requests: List of search parameters for different vector fields. Each search request should contain:
        - data: A list of vector embeddings
        - annsField: The name of the vector field
        - filter: A boolean expression filter (optional)
        - groupingField: The name of the field that serves as the aggregation criterion (optional)
        - metricType: The metric type (L2, IP, COSINE) (optional)
        - limit: The number of entities to return
        - offset: The number of entities to skip (optional, default: 0)
        - ignoreGrowing: Whether to ignore entities in growing segments (optional, default: false)
        - params: Extra search parameters with radius and range_filter (optional)
    rerank_strategy: The name of the reranking strategy (rrf, weighted)
    rerank_params: Parameters related to the specified strategy (e.g., {"k": 10} for rrf)
    limit: The total number of entities to return. The sum of this value and offset should be less than 16,384
    db_name: The name of the database. Pass an explicit dbName, or leave it empty when the cluster is free or serverless
    partition_names: The names of the partitions to which this operation applies
    output_fields: An array of fields to return along with the search results
    consistency_level: The consistency level of the search operation (Strong, Eventually, Bounded)
Returns:
    Dict containing the hybrid search results
    Example:
    {
        "code": 0,
        "cost": 0,
        "data": [
            {
                "book_describe": "book_105",
                "distance": 0.09090909,
                "id": 450519760774180800,
                "user_id": 5,
                "word_count": 105
            }
        ]
    }
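
To make the parameter list above concrete, here is a hedged sketch of the arguments an agent might pass for a two-field hybrid search with RRF reranking. The field names (annsField, metricType, rerank_strategy, and so on) come from the documentation above; the cluster ID, region, endpoint, collection, vector fields, and embeddings are placeholders, not real values.

    # Hypothetical hybrid_search arguments; every ID, the endpoint, and the
    # embeddings are placeholders. Field names follow the parameter docs above.
    arguments = {
        "cluster_id": "in03-xxxxxxxxxxxxxxx",
        "region_id": "gcp-us-west1",
        "endpoint": "https://in03-xxxx.example.zillizcloud.com",  # from describe_cluster's connect_address
        "collection_name": "books",
        "search_requests": [
            {
                "data": [[0.1, 0.2, 0.3]],     # query embedding (toy values)
                "annsField": "title_vector",
                "filter": "word_count > 100",  # optional scalar filter
                "metricType": "COSINE",        # optional: L2, IP, or COSINE
                "limit": 10,
            },
            {
                "data": [[0.4, 0.5, 0.6]],     # second vector field, no filter
                "annsField": "describe_vector",
                "limit": 10,
            },
        ],
        "rerank_strategy": "rrf",              # or "weighted"
        "rerank_params": {"k": 10},
        "limit": 5,                            # limit + offset must stay below 16,384
        "output_fields": ["book_describe", "user_id", "word_count"],
    }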
    

Input Schema

Name               Required   Description   Default
cluster_id         Yes
region_id          Yes
endpoint           Yes
collection_name    Yes
search_requests    Yes
rerank_strategy    Yes
rerank_params      Yes
limit              Yes
db_name            No
partition_names    No
output_fields      No
consistency_level  No
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses some behavioral aspects: the limit+offset constraint (<16,384), that the endpoint can be obtained from describe_cluster, and that db_name can be empty for free/serverless clusters. However, it doesn't cover important aspects like rate limits, authentication requirements, error conditions, or performance characteristics for a complex 12-parameter search operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately front-loaded with the core purpose, but becomes verbose with detailed parameter documentation and a full example response. While the parameter explanations are valuable given the 0% schema coverage, the structure could be more streamlined. The example response takes significant space but adds concrete value for understanding output format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high complexity (12 parameters, nested objects), no annotations, and no output schema, the description provides substantial context. It documents all parameters thoroughly, shows example output, and explains key constraints. The main gaps are lack of sibling tool differentiation and insufficient behavioral context (auth, errors, performance), but overall it's quite complete for such a complex tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description provides extensive parameter documentation that fully compensates. It explains all 12 parameters with detailed semantics, including optional/required status, examples (e.g., rerank_params {"k": 10}), constraints (limit+offset < 16,384), and relationships between parameters. The search_requests structure is particularly well-documented with all sub-fields explained.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool performs hybrid search with vector similarity, scalar filtering, and reranking. It specifies the verb 'search' and the resource 'entities', but doesn't explicitly differentiate the tool from the sibling 'search' tool, which appears to be a simpler alternative. The purpose is well-defined, but sibling differentiation could be more explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the simpler 'search' sibling or other alternatives like 'query'. It mentions that the endpoint 'can be obtained by calling describe_cluster', which is helpful but doesn't constitute comprehensive usage guidance. No explicit when/when-not recommendations are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
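
As one illustration of the call sequence the description implies (obtain the endpoint via describe_cluster, then pass its connect_address to hybrid_search), here is a minimal sketch assuming the official Python MCP client SDK and an already-established session. The tool names and the connect_address field come from the description above; describe_cluster's argument names, the result-parsing helper, and all IDs are assumptions.

    import json

    from mcp import ClientSession  # assumes the official Python MCP SDK

    def extract_connect_address(result) -> str:
        # Hypothetical helper: assumes the tool returns JSON text whose
        # payload exposes the connect_address field described above.
        payload = json.loads(result.content[0].text)
        return payload["connect_address"]

    async def run_hybrid_search(session: ClientSession) -> None:
        # 1. Look up the cluster to obtain its endpoint. The argument
        #    names here are assumed to mirror hybrid_search's.
        described = await session.call_tool(
            "describe_cluster",
            {"cluster_id": "in03-xxxxxxxxxxxxxxx", "region_id": "gcp-us-west1"},
        )
        endpoint = extract_connect_address(described)

        # 2. Run the hybrid search against that endpoint.
        results = await session.call_tool(
            "hybrid_search",
            {
                "cluster_id": "in03-xxxxxxxxxxxxxxx",
                "region_id": "gcp-us-west1",
                "endpoint": endpoint,
                "collection_name": "books",
                "search_requests": [
                    {"data": [[0.1, 0.2, 0.3]], "annsField": "title_vector", "limit": 10},
                ],
                "rerank_strategy": "rrf",
                "rerank_params": {"k": 10},
                "limit": 5,
            },
        )
        print(results.content)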
