Glama

get_library_stats

Retrieve statistics about your notebook library, including total notebooks and usage data, to monitor document collection activity.

Instructions

Get statistics about your notebook library (total notebooks, usage, etc.)

Input Schema


No arguments
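Because the input schema is empty, an agent invokes this tool with an empty arguments object. A hypothetical JSON-RPC request body (field names follow the MCP `tools/call` convention; the id is arbitrary) might look like:

```typescript
// Hypothetical JSON-RPC request for invoking this tool.
// Field names follow the MCP tools/call convention; the id is arbitrary.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_library_stats",
    arguments: {}, // empty: the input schema declares no properties
  },
};

console.log(JSON.stringify(request.params.arguments)); // prints {}
```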

Implementation Reference

  • The main handler function for the get_library_stats tool. It calls this.library.getStats() to retrieve library statistics and returns them wrapped in ToolResult format, with proper logging and error handling.
    /**
     * Handle get_library_stats tool
     */
    async handleGetLibraryStats(): Promise<ToolResult<any>> {
      log.info(`🔧 [TOOL] get_library_stats called`);
    
      try {
        const stats = this.library.getStats();
        log.success(`✅ [TOOL] get_library_stats completed`);
        return {
          success: true,
          data: stats,
        };
      } catch (error) {
        const errorMessage = error instanceof Error ? error.message : String(error);
        log.error(`❌ [TOOL] get_library_stats failed: ${errorMessage}`);
        return {
          success: false,
          error: errorMessage,
        };
      }
    }
  • The tool schema definition, specifying the name, description, and empty input schema (no parameters required).
    {
      name: "get_library_stats",
      description: "Get statistics about your notebook library (total notebooks, usage, etc.)",
      inputSchema: {
        type: "object",
        properties: {},
      },
    },
  • src/index.ts:228-230 (registration)
    The switch case in the main tool call handler that dispatches calls to the specific handleGetLibraryStats method.
    case "get_library_stats":
      result = await this.toolHandlers.handleGetLibraryStats();
      break;
  • The supporting getStats method in NotebookLibrary class that computes and returns library statistics: total notebooks, active notebook, most used, total queries, last modified.
    getStats(): LibraryStats {
      const totalQueries = this.library.notebooks.reduce(
        (sum, n) => sum + n.use_count,
        0
      );
    
      const mostUsed = this.library.notebooks.reduce<NotebookEntry | null>(
        (max, n) => (n.use_count > (max?.use_count || 0) ? n : max),
        null
      );
    
      return {
        total_notebooks: this.library.notebooks.length,
        active_notebook: this.library.active_notebook_id,
        most_used_notebook: mostUsed?.id || null,
        total_queries: totalQueries,
        last_modified: this.library.last_modified,
      };
    }
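The ToolResult wrapper is not defined in this excerpt; a minimal sketch consistent with the handler's success and error branches above (an assumption — the real interface may carry additional fields) is:

```typescript
// Assumed minimal shape of ToolResult, inferred from the handler's
// return values; the actual interface in the codebase may differ.
interface ToolResult<T> {
  success: boolean;
  data?: T;
  error?: string;
}

// A successful call wraps the stats payload:
const ok: ToolResult<{ total_notebooks: number }> = {
  success: true,
  data: { total_notebooks: 3 },
};

// A failed call carries only the error message:
const failed: ToolResult<never> = {
  success: false,
  error: "library not initialized", // illustrative message only
};
```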
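To make the aggregation concrete, here is a self-contained sketch of the same two reduce passes over invented sample data (the NotebookEntry shape is trimmed to just the fields used here):

```typescript
interface NotebookEntry { id: string; use_count: number; }

// Invented sample data for illustration.
const notebooks: NotebookEntry[] = [
  { id: "nb-1", use_count: 5 },
  { id: "nb-2", use_count: 12 },
  { id: "nb-3", use_count: 0 },
];

// Sum of all use counts, as in getStats.
const totalQueries = notebooks.reduce((sum, n) => sum + n.use_count, 0);

// Most-used notebook: same comparison as getStats, seeded with null.
const mostUsed = notebooks.reduce<NotebookEntry | null>(
  (max, n) => (n.use_count > (max?.use_count ?? 0) ? n : max),
  null
);

console.log(totalQueries);  // 17
console.log(mostUsed?.id);  // nb-2
```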
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'Get statistics' but doesn't specify if this is a read-only operation, requires authentication, has rate limits, or what the output format might be (e.g., aggregated counts vs. detailed logs). The description is minimal and fails to disclose key behavioral traits beyond the basic action, leaving significant gaps for the agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Get statistics about your notebook library') and adds brief examples ('total notebooks, usage, etc.'). There's no wasted text, and it's appropriately sized for a simple tool. However, it could be slightly more structured by explicitly stating it's a read operation or listing key stats, keeping it from a perfect score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is incomplete. It doesn't explain what 'statistics' entail (e.g., numerical counts, usage trends), whether the output is real-time or cached, or any prerequisites like authentication. For a tool that likely returns data, the lack of output schema means the description should compensate more, but it provides minimal context, leaving the agent with insufficient information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has no parameters, so the empty input schema leaves nothing for the description to document. The description implies no filtering or options (e.g., date ranges or user-specific stats), which aligns with the schema. A baseline of 4 is appropriate since the description doesn't contradict the schema and the absence of parameters is clear.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('statistics about your notebook library'), and mentions example metrics ('total notebooks, usage, etc.'). It distinguishes itself from siblings like 'get_notebook' or 'list_notebooks' by focusing on aggregated statistics rather than individual items or lists. However, it doesn't explicitly differentiate from 'get_health' (which might overlap with system health vs. library stats), keeping it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. For example, it doesn't specify if this should be used for monitoring library health, reporting metrics, or as a prerequisite for other operations. With siblings like 'get_health' (possibly for system stats) and 'list_notebooks' (for detailed listings), the lack of explicit when/when-not or alternative recommendations leaves the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/inventra/notebooklm-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.