
MCP Context Manager

get_usage_stats

View token usage statistics to verify savings from MCP Context Manager's efficient code retrieval, comparing tool usage against full file read costs.

Instructions

View token usage statistics for this session. Shows how many tokens each MCP tool used vs what full file reads would have cost. Use this to verify token savings.

Input Schema

Name    Required  Description                           Default
reset   No        Reset usage stats after displaying.   false
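The schema above translates to an MCP tools/call request like the following sketch. The JSON-RPC framing is the standard MCP shape; only the tool name and the reset argument come from this page.

```typescript
// Hypothetical JSON-RPC request body for invoking get_usage_stats over MCP.
// Passing reset: true makes the tool clear its counters after reporting.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_usage_stats",
    arguments: { reset: true },
  },
};

console.log(JSON.stringify(request.params.arguments)); // {"reset":true}
```

Omitting `arguments` entirely is equivalent to `reset: false`, per the schema default.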

Implementation Reference

  • The 'get_usage_stats' case in the tool handler's main switch statement. It calculates and reports token usage statistics for the session.
    case 'get_usage_stats': {
      const resetAfter = (args as any)?.reset || false;
      const elapsed = Math.round((Date.now() - sessionStartTime) / 1000);
      const minutes = Math.floor(elapsed / 60);
      const seconds = elapsed % 60;
    
      let totalMcpTokens = 0;
      let totalFullReadTokens = 0;
      let totalCalls = 0;
      let totalFiles = 0;
    
      let report = `šŸ“Š MCP Token Usage Stats (session: ${minutes}m ${seconds}s)\n`;
      report += `${'─'.repeat(75)}\n`;
      report += `${'Tool'.padEnd(25)} ${'Calls'.padStart(6)} ${'Tokens'.padStart(8)} ${'Avg'.padStart(6)} ${'Files'.padStart(6)} ${'vs Read'.padStart(12)}\n`;
      report += `${'─'.repeat(75)}\n`;
    
      for (const [tool, stats] of Object.entries(usageStats)) {
        totalMcpTokens += stats.totalTokens;
        totalFullReadTokens += stats.estimatedFullReadTokens;
        totalCalls += stats.calls;
        const fileCount = stats.filesReferenced.size;
        totalFiles += fileCount;
    
        const savings = stats.estimatedFullReadTokens > 0
          ? `${Math.round((1 - stats.totalTokens / stats.estimatedFullReadTokens) * 100)}% saved`
          : 'n/a';
    
        report += `${tool.padEnd(25)} ${String(stats.calls).padStart(6)} ${String(stats.totalTokens).padStart(8)} ${String(stats.avgTokensPerCall).padStart(6)} ${String(fileCount).padStart(6)} ${savings.padStart(12)}\n`;
      }
    
      report += `${'─'.repeat(75)}\n`;
      report += `${'TOTAL'.padEnd(25)} ${String(totalCalls).padStart(6)} ${String(totalMcpTokens).padStart(8)} ${''.padStart(6)} ${String(allFilesReferenced.size).padStart(6)}\n`;
    
      report += `\nšŸ“ Unique files touched: ${allFilesReferenced.size}\n`;
    
      if (totalFullReadTokens > 0) {
        const overallSavings = Math.round((1 - totalMcpTokens / totalFullReadTokens) * 100);
        report += `\nšŸ“– If you Read those ${allFilesReferenced.size} files fully: ~${totalFullReadTokens.toLocaleString()} tokens\n`;
        report += `šŸ” MCP returned only relevant parts: ~${totalMcpTokens.toLocaleString()} tokens\n`;
        report += `šŸ’” Saved: ~${(totalFullReadTokens - totalMcpTokens).toLocaleString()} tokens (${overallSavings}%)\n`;
      }
    
      // List the actual files that were referenced
      if (allFilesReferenced.size > 0 && allFilesReferenced.size <= 30) {
        report += `\nšŸ“‚ Files referenced:\n`;
        for (const file of allFilesReferenced) {
          const tokens = getFileTokenCount(file);
          report += `   ${file} (${tokens > 0 ? tokens.toLocaleString() + ' tokens' : 'unknown size'})\n`;
        }
      } else if (allFilesReferenced.size > 30) {
        report += `\nšŸ“‚ Files referenced: ${allFilesReferenced.size} files (too many to list)\n`;
      }
    
      if (resetAfter) {
        for (const key of Object.keys(usageStats)) {
          delete usageStats[key];
        }
        allFilesReferenced.clear();
        sessionStartTime = Date.now();
        report += `\nšŸ”„ Stats reset.`;
      }
    
      return {
        content: [{ type: 'text', text: report }],
      };
    }
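The excerpt reads from session state (`usageStats`, `allFilesReferenced`, `sessionStartTime`) that is defined elsewhere in the server. A minimal sketch of what that state could look like, inferred from how the snippet accesses it — the field shapes and the `recordUsage` helper are assumptions, not the project's actual code:

```typescript
// Assumed per-tool counters, inferred from the fields the handler reads.
interface ToolStats {
  calls: number;
  totalTokens: number;
  avgTokensPerCall: number;
  estimatedFullReadTokens: number;
  filesReferenced: Set<string>;
}

const usageStats: Record<string, ToolStats> = {};
const allFilesReferenced = new Set<string>();
let sessionStartTime = Date.now();

// Hypothetical recorder each MCP tool would call after producing a response.
function recordUsage(
  tool: string,
  tokens: number,
  fullReadTokens: number,
  files: string[],
): void {
  const s = (usageStats[tool] ??= {
    calls: 0,
    totalTokens: 0,
    avgTokensPerCall: 0,
    estimatedFullReadTokens: 0,
    filesReferenced: new Set(),
  });
  s.calls += 1;
  s.totalTokens += tokens;
  s.avgTokensPerCall = Math.round(s.totalTokens / s.calls);
  s.estimatedFullReadTokens += fullReadTokens;
  for (const f of files) {
    s.filesReferenced.add(f);
    allFilesReferenced.add(f);
  }
}
```

Under this sketch, the reset branch in the handler (deleting every `usageStats` key, clearing `allFilesReferenced`, and resetting `sessionStartTime`) returns the session to its initial state.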
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes what the tool does (view usage stats with a comparison) and hints at a reset capability via the parameter, but doesn't cover other behavioral aspects like whether it requires specific permissions, how data is presented, or if it has rate limits. It adds some value but lacks comprehensive behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with zero waste: the first sentence states the purpose and scope, and the second provides usage guidance. It's front-loaded with the core functionality and efficiently conveys essential information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter, no output schema, no annotations), the description is mostly complete. It explains the purpose and usage context well. However, without an output schema, it doesn't describe the return value (e.g., the format of the statistics), a minor gap for a monitoring tool. It compensates somewhat by spelling out the comparison against full file reads.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single parameter ('reset') with its type and default. The description adds no parameter-specific detail beyond the schema, such as the implications of resetting or how a reset affects subsequent usage tracking. A baseline of 3 is appropriate when the schema handles parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific verb ('View') and resource ('token usage statistics for this session'), with precise scope ('how many tokens each MCP tool used vs what full file reads would have cost'). It distinguishes from siblings by focusing on usage metrics rather than code analysis, repository operations, or caching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('to verify token savings'), which implies it's for monitoring and validation purposes. However, it doesn't explicitly state when not to use it or name alternatives among siblings, leaving some ambiguity about its exclusive role.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
