
get_usage_stats

Read-only

Retrieve usage statistics for debugging and analysis, including tool usage summaries, success/failure rates, and performance metrics.

Instructions

Get usage statistics for debugging and analysis.

Returns summary of tool usage, success/failure rates, and performance metrics.

This command can be referenced as "DC: ..." or "use Desktop Commander to ..." in your instructions.

Input Schema


No arguments
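
For orientation, here is a minimal client-side sketch of invoking this tool, assuming the TypeScript @modelcontextprotocol/sdk over a stdio transport; the spawn command and package name are illustrative assumptions, not taken from this page:

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Spawn the Desktop Commander server over stdio (package name is an assumption).
    const transport = new StdioClientTransport({
      command: "npx",
      args: ["-y", "@wonderwhy-er/desktop-commander"],
    });

    const client = new Client({ name: "stats-demo", version: "1.0.0" });
    await client.connect(transport);

    // get_usage_stats takes no arguments, so `arguments` is an empty object.
    const result = await client.callTool({ name: "get_usage_stats", arguments: {} });
    console.log(result.content); // [{ type: "text", text: "📊 **Usage Summary** ..." }]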

Implementation Reference

  • The main handler function that executes the get_usage_stats tool logic by calling usageTracker.getUsageSummary() and formatting the result.
    // Handler for the get_usage_stats tool: fetch the formatted summary from
    // the usage tracker and wrap it in an MCP text result.
    export async function getUsageStats(): Promise<ServerResult> {
      try {
        const summary = await usageTracker.getUsageSummary();
        
        return {
          content: [{
            type: "text",
            text: summary
          }]
        };
      } catch (error) {
        // Surface failures as an isError result instead of throwing.
        return {
          content: [{
            type: "text",
            text: `Error retrieving usage stats: ${error instanceof Error ? error.message : String(error)}`
          }],
          isError: true
        };
      }
    }
  • The Zod input schema definition for the get_usage_stats tool (no arguments required).
    export const GetUsageStatsArgsSchema = z.object({});
  • src/server.ts:975-987 (registration)
    The tool specification registration in the ListTools handler, defining name, description, input schema, and annotations.
        name: "get_usage_stats",
        description: `
                Get usage statistics for debugging and analysis.
                
                Returns summary of tool usage, success/failure rates, and performance metrics.
                
                ${CMD_PREFIX_DESCRIPTION}`,
        inputSchema: zodToJsonSchema(GetUsageStatsArgsSchema),
        annotations: {
            title: "Get Usage Statistics",
            readOnlyHint: true,
        },
    },
  • The dispatch case in the CallToolRequest handler that invokes the getUsageStats function.
    case "get_usage_stats":
        try {
            result = await getUsageStats();
        } catch (error) {
            capture('server_request_error', { message: `Error in get_usage_stats handler: ${error}` });
            result = {
                content: [{ type: "text", text: `Error: Failed to get usage statistics` }],
                isError: true,
            };
        }
  • The core helper method getUsageSummary() that generates the formatted usage statistics string used by the handler.
      // Build the human-readable summary string returned by get_usage_stats.
      async getUsageSummary(): Promise<string> {
        const stats = await this.getStats();
        const now = Date.now();
    
        // stats.firstUsed is an epoch-milliseconds timestamp.
        const daysSinceFirst = Math.round((now - stats.firstUsed) / (1000 * 60 * 60 * 24));
        const uniqueTools = Object.keys(stats.toolCounts).length;
        const successRate = stats.totalToolCalls > 0 ?
          Math.round((stats.successfulCalls / stats.totalToolCalls) * 100) : 0;
    
        // Top five tools by call count, rendered as "tool: count" pairs.
        const topTools = Object.entries(stats.toolCounts)
          .sort(([,a], [,b]) => b - a)
          .slice(0, 5)
          .map(([tool, count]) => `${tool}: ${count}`)
          .join(', ');
        return `📊 **Usage Summary**
    • Total calls: ${stats.totalToolCalls} (${stats.successfulCalls} successful, ${stats.failedCalls} failed)
    • Success rate: ${successRate}%
    • Days using: ${daysSinceFirst}
    • Sessions: ${stats.totalSessions}
    • Unique tools: ${uniqueTools}
    • Most used: ${topTools || 'None'}
    • Feedback given: ${(await configManager.getValue('feedbackGiven')) ? 'Yes' : 'No'}
    
    **By Category:**
    • Filesystem: ${stats.filesystemOperations}
    • Terminal: ${stats.terminalOperations}
    • Editing: ${stats.editOperations}
    • Search: ${stats.searchOperations}
    • Config: ${stats.configOperations}
    • Process: ${stats.processOperations}`;
      }
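
The fields read by getUsageSummary() imply the shape of the tracked stats object. A sketch of that shape, inferred from the code above alone (the tracker's actual declaration may differ):

    // Inferred from the fields getUsageSummary() reads; field names come from the
    // source above, but the real interface in the usage tracker may differ.
    interface UsageStats {
      totalToolCalls: number;
      successfulCalls: number;
      failedCalls: number;
      firstUsed: number;                  // epoch milliseconds, per the Date.now() math
      totalSessions: number;
      toolCounts: Record<string, number>; // per-tool call counts
      filesystemOperations: number;
      terminalOperations: number;
      editOperations: number;
      searchOperations: number;
      configOperations: number;
      processOperations: number;
    }

Note that the handler returns this summary as a single text content block rather than structured JSON, which is consistent with the Completeness review below regarding the absent output schema.
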
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide. While annotations declare readOnlyHint=true, the description specifies the return content ('summary of tool usage, success/failure rates, and performance metrics') and mentions it can be referenced as 'DC: ...' or 'use Desktop Commander to ...' in instructions. This provides implementation guidance that annotations don't cover, though it doesn't address potential limitations like data freshness or access restrictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is reasonably concise with three sentences, but the third sentence about referencing as 'DC: ...' feels somewhat disconnected from the core functionality description. The information is front-loaded with the main purpose first, but the structure could be tighter with better integration of the reference information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with no parameters and no output schema, the description provides adequate coverage of what the tool does and what it returns. However, it doesn't specify the format of the returned statistics (structured data, text summary, etc.) or whether there are any limitations on the data returned. Given the lack of output schema, more detail about the return format would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline would be 4. The description appropriately doesn't discuss parameters since none exist, and instead focuses on what the tool returns. This is efficient and avoids unnecessary repetition of what's already clear from the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Get usage statistics for debugging and analysis' which provides a clear verb ('Get') and resource ('usage statistics'), but it doesn't differentiate from sibling tools like 'get_recent_tool_calls' or 'get_config' that also retrieve system data. The purpose is understandable but lacks specificity about what distinguishes this particular statistics tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions 'for debugging and analysis' which gives some context, but provides no explicit guidance on when to use this tool versus alternatives like 'get_recent_tool_calls' or 'get_config'. There's no mention of prerequisites, timing considerations, or specific scenarios where this tool is preferred over other data retrieval tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
