
get_longest_running_queries

Identify and retrieve the longest-running queries from Couchbase's completed_requests catalog to analyze performance bottlenecks and optimize database efficiency.

Instructions

Get the N longest running queries from the system:completed_requests catalog.

Args:
    limit: Number of queries to return (default: 10)

Returns:
    List of queries with their average service time and count
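For illustration, a successful call returns entries shaped like the following. The field names (statement, avgServiceTime, queries) come from the SELECT clause of the underlying SQL++ query; the statement text and values below are made up:

```python
# Hypothetical example of the list returned on success.
# Field names match the query's SELECT clause; values are invented.
example_result = [
    {
        "statement": "SELECT * FROM `travel-sample` WHERE type = 'hotel'",
        "avgServiceTime": "1.2s",
        "queries": 42,
    },
]

for row in example_result:
    print(row["statement"], row["avgServiceTime"], row["queries"])
```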

Input Schema

Name    Required  Description                    Default
limit   No        Number of queries to return    10

Output Schema

Name    Required  Description  Default
result  Yes

Implementation Reference

  • The primary handler function implementing the 'get_longest_running_queries' tool. It runs a SQL++ query on system:completed_requests to fetch the longest-running queries grouped by statement, ordered by average service time.
    def get_longest_running_queries(ctx: Context, limit: int = 10) -> list[dict[str, Any]]:
        """Get the N longest running queries from the system:completed_requests catalog.
    
        Args:
            limit: Number of queries to return (default: 10)
    
        Returns:
            List of queries with their average service time and count
        """
        query = """
        SELECT statement,
            DURATION_TO_STR(avgServiceTime) AS avgServiceTime,
            COUNT(1) AS queries
        FROM system:completed_requests
        WHERE UPPER(statement) NOT LIKE 'INFER %'
            AND UPPER(statement) NOT LIKE 'CREATE INDEX%'
            AND UPPER(statement) NOT LIKE 'CREATE PRIMARY INDEX%'
            AND UPPER(statement) NOT LIKE '% SYSTEM:%'
        GROUP BY statement
        LETTING avgServiceTime = AVG(STR_TO_DURATION(serviceTime))
        ORDER BY avgServiceTime DESC
        LIMIT $limit
        """
    
        return _run_query_tool_with_empty_message(
            ctx,
            query,
            limit=limit,
            empty_message=(
                "No completed queries were available to calculate longest running queries."
            ),
        )
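The SQL++ aggregation above can be sketched in plain Python: group completed requests by statement, average their service times, sort descending, and truncate to the limit. This is an illustrative model with made-up sample data, not part of the server, and it omits the WHERE-clause filtering of INFER, index-creation, and system-catalog statements:

```python
from collections import defaultdict

def longest_running(requests: list[dict], limit: int = 10) -> list[dict]:
    """Group requests by statement, average serviceTime (in seconds),
    and return the slowest `limit` statements, mirroring the SQL++ query."""
    grouped: dict[str, list[float]] = defaultdict(list)
    for req in requests:
        grouped[req["statement"]].append(req["serviceTime"])
    rows = [
        {
            "statement": stmt,
            "avgServiceTime": sum(times) / len(times),
            "queries": len(times),
        }
        for stmt, times in grouped.items()
    ]
    rows.sort(key=lambda r: r["avgServiceTime"], reverse=True)
    return rows[:limit]

# Hypothetical sample data: two runs of one statement, one run of another.
sample = [
    {"statement": "SELECT 1", "serviceTime": 0.2},
    {"statement": "SELECT 1", "serviceTime": 0.4},
    {"statement": "SELECT 2", "serviceTime": 1.5},
]
print(longest_running(sample, limit=1))  # slowest statement first
```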
  • Helper function called by get_longest_running_queries to execute the cluster query and handle empty results with a standard message.
    def _run_query_tool_with_empty_message(
        ctx: Context,
        query: str,
        *,
        limit: int,
        empty_message: str,
        extra_payload: dict[str, Any] | None = None,
        **query_kwargs: Any,
    ) -> list[dict[str, Any]]:
        """Execute a cluster query with a consistent empty-result response."""
        results = run_cluster_query(ctx, query, limit=limit, **query_kwargs)
    
        if results:
            return results
    
        payload: dict[str, Any] = {"message": empty_message, "results": []}
        if extra_payload:
            payload.update(extra_payload)
        return [payload]
  • Registration loop in the MCP server where all tools, including get_longest_running_queries (imported via ALL_TOOLS), are added to the FastMCP server instance.
    # Register all tools
    for tool in ALL_TOOLS:
        mcp.add_tool(tool)
  • Definition of ALL_TOOLS list in tools/__init__.py which includes get_longest_running_queries and is used for bulk registration in mcp_server.py.
    ALL_TOOLS = [
        get_buckets_in_cluster,
        get_server_configuration_status,
        test_cluster_connection,
        get_scopes_and_collections_in_bucket,
        get_collections_in_scope,
        get_scopes_in_bucket,
        get_document_by_id,
        upsert_document_by_id,
        delete_document_by_id,
        get_schema_for_collection,
        run_sql_plus_plus_query,
        get_index_advisor_recommendations,
        list_indexes,
        get_cluster_health_and_services,
        get_queries_not_selective,
        get_queries_not_using_covering_index,
        get_queries_using_primary_index,
        get_queries_with_large_result_count,
        get_queries_with_largest_response_sizes,
        get_longest_running_queries,
        get_most_frequent_queries,
    ]
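The list-plus-loop registration above is a common registry pattern: tool modules export a single list, and the server wires them up in one place. A minimal generic sketch, with the FastMCP instance replaced by a stand-in so the snippet stays self-contained:

```python
class FakeServer:
    """Stand-in for the FastMCP instance; only records registrations."""
    def __init__(self) -> None:
        self.tools: list = []

    def add_tool(self, fn) -> None:
        self.tools.append(fn)

# Hypothetical tool functions standing in for the real ones.
def tool_a(): ...
def tool_b(): ...

ALL_TOOLS = [tool_a, tool_b]

mcp = FakeServer()
for tool in ALL_TOOLS:
    mcp.add_tool(tool)

print([t.__name__ for t in mcp.tools])  # ['tool_a', 'tool_b']
```

Keeping the list in tools/__init__.py means adding a new tool only touches the tools package, not the server wiring.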
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the tool as a read operation ('Get') and specifies the data source ('system:completed_requests catalog'), which is helpful. However, it doesn't mention potential limitations like rate limits, authentication needs, or whether the data is real-time/historical, leaving gaps in behavioral context for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a clear purpose statement followed by labeled sections for 'Args' and 'Returns', each containing only essential information. Every sentence earns its place by directly contributing to understanding the tool's function, parameters, and output, with no wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (1 parameter, no annotations, but with an output schema), the description is reasonably complete. It explains the purpose, parameter semantics, and return format ('List of queries with their average service time and count'), though it could benefit from more behavioral details like data freshness or access constraints. The presence of an output schema reduces the need to fully describe return values, but some contextual gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context for the single parameter 'limit' by explaining it as 'Number of queries to return' and providing a default value (10), which complements the input schema that only shows the parameter's type and default without description (0% schema coverage). This fully compensates for the schema's lack of parameter descriptions, making the parameter's purpose clear.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the N longest running queries') and resource ('from the system:completed_requests catalog'), distinguishing it from siblings like 'get_most_frequent_queries' or 'get_queries_with_largest_response_sizes' which focus on different query characteristics. It precisely communicates what the tool does without being vague or tautological.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving long-running queries but doesn't explicitly state when to use this tool versus alternatives like 'get_most_frequent_queries' or 'get_queries_with_large_result_count'. It provides context by mentioning the 'completed_requests' catalog, but lacks explicit guidance on exclusions or prerequisites, leaving usage somewhat open to interpretation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
