
Zilliz MCP Server

Official
by zilliztech

search

Find similar vector embeddings in Zilliz Cloud collections using vector similarity search with optional filtering and result customization.

Instructions

Conduct a vector similarity search with an optional scalar filtering expression.

Args:
    cluster_id: ID of the cluster
    region_id: ID of the cloud region hosting the cluster
    endpoint: The cluster endpoint URL. Can be obtained by calling describe_cluster and using the connect_address field
    collection_name: The name of the collection to which this operation applies
    data: A list of vector embeddings. Zilliz Cloud searches for the most similar vector embeddings to the specified ones
    anns_field: The name of the vector field
    limit: The total number of entities to return (default: 10). The sum of this value and offset should be less than 16,384
    db_name: The name of the database. Pass an explicit database name, or leave it empty when the cluster is a free or serverless cluster
    filter: The filter used to find matches for the search
    offset: The number of records to skip in the search result. The sum of this value and limit should be less than 16,384
    grouping_field: The name of the field that serves as the aggregation criteria
    output_fields: An array of fields to return along with the search results
    metric_type: The name of the metric type that applies to the current search (L2, IP, COSINE)
    search_params: Extra search parameters including radius and range_filter
    partition_names: The names of the partitions to which this operation applies
    consistency_level: The consistency level of the search operation (Strong, Eventually, Bounded)
Returns:
    Dict containing the search results
    Example:
    {
        "code": 0,
        "data": [
            {
                "color": "orange_6781",
                "distance": 1,
                "id": 448300048035776800
            },
            {
                "color": "red_4794", 
                "distance": 0.9353201,
                "id": 448300048035776800
            }
        ]
    }
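
As a sketch of how the arguments above map onto a request body, the following assembles a search payload. The camelCase key names (collectionName, annsField, outputFields, dbName) follow the Zilliz Cloud RESTful convention and are an assumption here, not taken from this page:

```python
import json

def build_search_payload(collection_name, data, anns_field,
                         limit=10, offset=0, filter=None,
                         output_fields=None, db_name=None):
    """Assemble a search request body (camelCase keys are assumed)."""
    payload = {
        "collectionName": collection_name,
        "data": data,              # list of query vector embeddings
        "annsField": anns_field,
        "limit": limit,
        "offset": offset,
    }
    # Optional arguments are included only when set, mirroring the
    # required/optional split in the input schema below.
    if filter:
        payload["filter"] = filter
    if output_fields:
        payload["outputFields"] = output_fields
    if db_name:
        payload["dbName"] = db_name
    return payload

payload = build_search_payload(
    "demo_collection",
    data=[[0.1, 0.2, 0.3, 0.4]],
    anns_field="vector",
    filter='color like "red%"',
    output_fields=["color"],
)
print(json.dumps(payload, indent=2))
```

The example collection name, vector, and filter expression are illustrative; a real call would use values from your own cluster.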
    

Input Schema

Name               Required
-----------------  --------
cluster_id         Yes
region_id          Yes
endpoint           Yes
collection_name    Yes
data               Yes
anns_field         Yes
limit              No
db_name            No
filter             No
offset             No
grouping_field     No
output_fields      No
metric_type        No
search_params      No
partition_names    No
consistency_level  No
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the tool as a search operation (implying read-only, non-destructive) and includes an example return structure, which adds context. However, it lacks details on error handling, rate limits, authentication needs, or performance characteristics that would be important for a complex search tool with 16 parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, Args, Returns, Example) and uses bullet-like formatting for parameters. However, it's quite lengthy due to the 16 parameter explanations, which is necessary given the complexity. Some redundancy exists (e.g., repeating parameter names in descriptions), but overall it's efficient for the information conveyed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high complexity (16 parameters, no annotations, no output schema), the description does an excellent job of explaining parameters and providing a return example. It covers the core functionality thoroughly. Minor gaps include lack of error cases, performance limits, or integration notes with sibling tools, but it's largely complete for a search operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description provides detailed semantic explanations for all 16 parameters, far beyond the input schema, which carries no parameter descriptions at all. Each parameter is clearly explained (e.g., 'cluster_id: ID of the cluster,' 'data: A list of vector embeddings...'), including defaults, constraints (e.g., the sum of limit and offset must stay within bounds), and usage notes (e.g., the endpoint can be obtained from describe_cluster). This fully compensates for the lack of schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
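
The limit/offset constraint called out above is the kind of check an agent can perform client-side before sending a request. A minimal sketch, using the 16,384 window bound stated in the parameter notes:

```python
MAX_WINDOW = 16_384  # limit + offset must stay below this bound

def validate_paging(limit: int = 10, offset: int = 0) -> None:
    """Raise ValueError if the paging arguments violate the stated bounds."""
    if limit < 1:
        raise ValueError("limit must be at least 1")
    if offset < 0:
        raise ValueError("offset must be non-negative")
    if limit + offset >= MAX_WINDOW:
        raise ValueError(
            f"limit + offset ({limit + offset}) must be less than {MAX_WINDOW}"
        )

validate_paging(limit=10, offset=100)  # within bounds, no error
```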

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Conduct a vector similarity search with an optional scalar filtering expression,' which is specific about the operation (vector similarity search) and includes the optional filtering capability. However, it doesn't explicitly differentiate from sibling tools like 'hybrid_search' or 'query,' which might offer alternative search methods.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'hybrid_search' or 'query.' It mentions that the endpoint 'can be obtained by calling describe_cluster,' which is a prerequisite but not usage guidance. There's no explicit when/when-not or comparison with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
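
One way to encode the missing when/when-not guidance is a small routing rule: use search when query vectors are present, and fall back to a scalar-only sibling tool otherwise. The rule and the sibling tool name 'query' below are illustrative, drawn from the review's discussion rather than documented behavior:

```python
def pick_tool(query_vectors=None, filter_expr=None):
    """Illustrative routing between the 'search' and 'query' sibling tools."""
    if query_vectors:
        return "search"  # vector similarity, with an optional scalar filter
    if filter_expr:
        return "query"   # scalar-only retrieval, no vectors involved
    raise ValueError("need query vectors or a filter expression")

pick_tool([[0.1, 0.2]], 'color like "red%"')  # → "search"
```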
