
toggl_cache_stats

Retrieve cache statistics and performance metrics to monitor Toggl Track integration efficiency and optimize data retrieval.

Instructions

Get cache statistics and performance metrics

Input Schema


No arguments

Implementation Reference

  • The handler case for 'toggl_cache_stats', which retrieves cache statistics via cache.getStats(), computes the cache hit rate, adds the cache_warmed flag, and returns a formatted JSON text response.
    case 'toggl_cache_stats': {
      const stats = cache.getStats();
      const hitRate = stats.hits + stats.misses > 0
        ? Math.round((stats.hits / (stats.hits + stats.misses)) * 100)
        : 0;
      
      return {
        content: [{
          type: 'text',
          text: JSON.stringify({ 
            ...stats,
            hit_rate: `${hitRate}%`,
            cache_warmed: cacheWarmed
          }, null, 2)
        }]
      };
    }
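Viewed on its own, the hit-rate math in the handler can be sketched as a standalone helper. The sketch below (computeHitRate is a hypothetical name, not part of the source) reproduces the same rounding and zero-division guard:

```typescript
// Hypothetical helper mirroring the handler's hit-rate math:
// percentage of cache hits, rounded, with a guard for the
// case where no lookups have been recorded yet.
function computeHitRate(hits: number, misses: number): number {
  const total = hits + misses;
  return total > 0 ? Math.round((hits / total) * 100) : 0;
}

console.log(computeHitRate(75, 25)); // 75
console.log(computeHitRate(0, 0));   // 0
console.log(computeHitRate(1, 2));   // 33
```

Note the guard: when both counters are zero (for example, immediately after a cache reset), the tool reports a 0% hit rate rather than dividing by zero.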
  • src/index.ts:366-373 (registration)
    Tool registration entry in the tools array used for ListTools, defining name, description, and input schema (no required parameters).
      name: 'toggl_cache_stats',
      description: 'Get cache statistics and performance metrics',
      inputSchema: {
        type: 'object',
        properties: {},
        required: []
      },
    },
  • CacheManager.getStats() method providing the core statistics (sizes of cached maps and hit/miss counts) that power the toggl_cache_stats tool.
    getStats(): CacheStats {
      return {
        workspaces: this.workspaces.size,
        projects: this.projects.size,
        clients: this.clients.size,
        tasks: this.tasks.size,
        users: this.users.size,
        tags: this.tags.size,
        hits: this.stats.hits,
        misses: this.stats.misses,
        lastReset: this.stats.lastReset
      };
    }
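To see how getStats() aggregates map sizes and hit/miss counters, the minimal harness below can be run in isolation. The CacheStats shape and counter names follow the snippet above; the getWorkspace/setWorkspace accessors and counter updates are assumptions for illustration, not the real CacheManager internals:

```typescript
interface CacheStats {
  workspaces: number; projects: number; clients: number;
  tasks: number; users: number; tags: number;
  hits: number; misses: number; lastReset: string;
}

// Minimal stand-in for the real CacheManager, assumed for illustration.
class CacheManager {
  private workspaces = new Map<number, unknown>();
  private projects = new Map<number, unknown>();
  private clients = new Map<number, unknown>();
  private tasks = new Map<number, unknown>();
  private users = new Map<number, unknown>();
  private tags = new Map<number, unknown>();
  private stats = { hits: 0, misses: 0, lastReset: new Date().toISOString() };

  // Hypothetical accessor: counts a hit or a miss on each lookup.
  getWorkspace(id: number): unknown {
    const value = this.workspaces.get(id);
    if (value !== undefined) { this.stats.hits++; } else { this.stats.misses++; }
    return value;
  }

  setWorkspace(id: number, value: unknown): void {
    this.workspaces.set(id, value);
  }

  // Same shape as the getStats() shown above: map sizes plus counters.
  getStats(): CacheStats {
    return {
      workspaces: this.workspaces.size,
      projects: this.projects.size,
      clients: this.clients.size,
      tasks: this.tasks.size,
      users: this.users.size,
      tags: this.tags.size,
      hits: this.stats.hits,
      misses: this.stats.misses,
      lastReset: this.stats.lastReset,
    };
  }
}

const cache = new CacheManager();
cache.getWorkspace(1);                   // miss: nothing cached yet
cache.setWorkspace(1, { name: 'Acme' });
cache.getWorkspace(1);                   // hit
console.log(cache.getStats());           // workspaces: 1, hits: 1, misses: 1
```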
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states what the tool does but omits key behavioral traits: whether the operation is read-only, which specific metrics are returned, potential performance impacts, and any rate limits. This leaves significant gaps in an agent's understanding of how to interact with the tool effectively.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that efficiently conveys the core purpose without any wasted words. It is front-loaded and appropriately sized for a simple tool, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is minimally complete. It states what the tool does but lacks details on behavioral aspects and usage context, which are needed for a richer understanding. Without annotations or output schema, the description should do more to compensate, but it remains adequate for basic comprehension.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the lack of inputs. The description adds no parameter information, which is appropriate here, but it doesn't compensate for any gaps since there are none. A baseline of 4 is given as it adequately handles the zero-parameter case without misleading or redundant details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('cache statistics and performance metrics'), making it immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'toggl_clear_cache' or 'toggl_warm_cache' which also relate to cache operations, missing an opportunity for sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention context such as monitoring cache health, troubleshooting performance issues, or comparing with other cache-related tools in the sibling list, leaving the agent to infer usage scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
