get_queries_using_primary_index

Retrieve queries that used a primary index from Couchbase's system catalog of completed requests, to identify index optimization opportunities.

Instructions

Get queries that use a primary index from the system:completed_requests catalog.

Args:
    limit: Number of queries to return (default: 10)

Returns:
    List of queries that use primary indexes, ordered by result count
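
For orientation, here is one way an MCP client might call this tool. This is a sketch, not taken from this page: the launch command ("uv run mcp-server-couchbase") is an assumption, while the stdio client and session.call_tool come from the MCP Python SDK.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client


    async def main() -> None:
        # Assumed launch command; adjust to however the Couchbase MCP
        # server is started in your environment.
        params = StdioServerParameters(
            command="uv", args=["run", "mcp-server-couchbase"]
        )
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # Omit "limit" to fall back to the default of 10.
                result = await session.call_tool(
                    "get_queries_using_primary_index", {"limit": 5}
                )
                print(result.content)


    asyncio.run(main())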

Input Schema

Name     Required    Description       Default
limit    No          (not provided)    (not provided)

Output Schema

Name     Required    Description       Default
result   Yes         (not provided)    (not provided)

Implementation Reference

  • The main handler function that executes the tool logic. It runs a SQL++ query on system:completed_requests to find queries using primary scans, ordered by resultCount, with a custom empty message.
    # Imports required by this excerpt; Context comes from the MCP Python
    # SDK's FastMCP module.
    from typing import Any

    from mcp.server.fastmcp import Context


    def get_queries_using_primary_index(
        ctx: Context, limit: int = 10
    ) -> list[dict[str, Any]]:
        """Get queries that use a primary index from the system:completed_requests catalog.
    
        Args:
            limit: Number of queries to return (default: 10)
    
        Returns:
            List of queries that use primary indexes, ordered by result count
        """
        query = """
        SELECT *
        FROM system:completed_requests
        WHERE phaseCounts.`primaryScan` IS NOT MISSING
            AND UPPER(statement) NOT LIKE '% SYSTEM:%'
        ORDER BY resultCount DESC
        LIMIT $limit
        """
    
        return _run_query_tool_with_empty_message(
            ctx,
            query,
            limit=limit,
            empty_message=(
                "No queries using the primary index were found in system:completed_requests."
            ),
        )
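  • For context, each row this query returns is a completed-request record from the system catalog. An illustrative shape with placeholder values (field names follow Couchbase's documented system:completed_requests catalog; exact fields vary by server version):
    # Placeholder values, for illustration only.
    example_row = {
        "requestId": "c0ffee-0000-0000",
        "statement": "SELECT * FROM `travel-sample` WHERE type = 'route'",
        "resultCount": 1000,
        "elapsedTime": "1.2s",
        "phaseCounts": {"primaryScan": 1000, "fetch": 1000},
    }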
  • The tool is registered in the FastMCP server by iterating over ALL_TOOLS from src/tools and calling mcp.add_tool(tool) for each.
    mcp = FastMCP(MCP_SERVER_NAME, lifespan=app_lifespan, **config)
    
    # Register all tools
    for tool in ALL_TOOLS:
        mcp.add_tool(tool)
    
    # Run the server
    mcp.run(transport=sdk_transport)  # type: ignore
  • The tool function is included in the ALL_TOOLS list, which is imported and used for registration in mcp_server.py.
        get_queries_not_selective,
        get_queries_not_using_covering_index,
        get_queries_using_primary_index,
        get_queries_with_large_result_count,
        get_queries_with_largest_response_sizes,
        get_longest_running_queries,
        get_most_frequent_queries,
    ]
  • Helper function used by get_queries_using_primary_index (and other similar tools) to run the cluster query and handle empty results with a custom message.
    def _run_query_tool_with_empty_message(
        ctx: Context,
        query: str,
        *,
        limit: int,
        empty_message: str,
        extra_payload: dict[str, Any] | None = None,
        **query_kwargs: Any,
    ) -> list[dict[str, Any]]:
        """Execute a cluster query with a consistent empty-result response."""
        results = run_cluster_query(ctx, query, limit=limit, **query_kwargs)
    
        if results:
            return results
    
        payload: dict[str, Any] = {"message": empty_message, "results": []}
        if extra_payload:
            payload.update(extra_payload)
        return [payload]
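  • run_cluster_query itself is not shown on this page. A minimal sketch of what such a helper might look like, assuming the Couchbase Python SDK and a connected cluster stored on the FastMCP lifespan context (the .cluster attribute name is a guess, not the project's actual code):
    from typing import Any

    from couchbase.options import QueryOptions
    from mcp.server.fastmcp import Context


    def run_cluster_query(
        ctx: Context, query: str, **named_params: Any
    ) -> list[dict[str, Any]]:
        # Hypothetical: assumes the lifespan state holds a connected
        # couchbase.cluster.Cluster instance.
        cluster = ctx.request_context.lifespan_context.cluster
        # Named parameters bind to placeholders such as $limit in the
        # SQL++ statement.
        result = cluster.query(query, QueryOptions(named_parameters=named_params))
        return [row for row in result]
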
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the data source ('system:completed_requests catalog') and ordering ('ordered by result count'), but lacks details on permissions, rate limits, pagination, or error handling. For a read operation with no annotations, this leaves significant gaps in understanding how the tool behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and concise, with three clear sections: purpose, args, and returns. Each sentence adds value without redundancy, and information is front-loaded with the tool's purpose. There's no wasted text, making it efficient for an agent to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has an output schema (signaled by the 'Returns' section of the description), the description doesn't need to detail return values. However, with no annotations and 0% schema description coverage, it partially compensates by explaining the parameter and source catalog. It's adequate for a simple read tool but lacks behavioral context like error handling or performance considerations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context for the single parameter: 'limit: Number of queries to return (default: 10)'. This clarifies the parameter's purpose and default value, which is helpful since schema description coverage is 0%. However, it doesn't specify constraints like minimum/maximum values or format details, leaving some ambiguity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
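
For illustration, a schema for limit that closes those gaps would declare its bounds explicitly. A sketch of such a JSON Schema fragment, written as a Python dict (the minimum and maximum here are invented, not the server's actual constraints):

    LIMIT_SCHEMA = {
        "type": "integer",
        "description": "Number of queries to return",
        "default": 10,
        "minimum": 1,    # invented bound, for illustration
        "maximum": 100,  # invented bound, for illustration
    }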

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get queries that use a primary index from the system:completed_requests catalog.' It specifies the verb ('Get'), resource ('queries'), and source ('system:completed_requests catalog'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'get_most_frequent_queries' or 'get_longest_running_queries' beyond mentioning 'primary index' usage.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get_queries_not_using_covering_index' or 'get_most_frequent_queries', nor does it specify scenarios where this tool is preferred or excluded. The only implied usage is retrieving queries that hit a primary index, with no context on alternatives or constraints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
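
As an illustration of what these last two dimensions are asking for, a revised docstring might read something like the sketch below (a sketch, not the project's actual text):

    def get_queries_using_primary_index(
        ctx: Context, limit: int = 10
    ) -> list[dict[str, Any]]:
        """Get queries that used a primary index from system:completed_requests.

        Use this to find statements that fell back to a primary scan and may
        benefit from a secondary index. Prefer
        get_queries_not_using_covering_index for queries that have an index
        but still fetch full documents, and get_longest_running_queries when
        latency rather than index usage is the concern. Read-only; requires
        access to the system catalog.

        Args:
            limit: Number of queries to return (default: 10).
        """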

