# cache_stats
View HTTP-cache statistics to identify why price data appears stale or to debug rate-limit (429) errors. Returns hit rate, entries, and per-pattern breakdowns.
## Instructions
Return current HTTP-cache statistics.
Reach for this when:

- The user asks why a price/value looks stale or is identical to a previous query (responses may be served from cache up to the per-endpoint TTL).
- You're debugging rate-limit (HTTP 429) errors and want to confirm the cache is doing its job.
- The user explicitly asks about cache utilization or hit rate.
Returns a dict with keys:

- `entries`: number of live cached entries
- `max_entries`: LRU eviction threshold
- `hits`, `misses`, `sets`, `errors`: cumulative counters since process start
- `hit_rate`: hits / (hits + misses), 0.0 if no requests yet
- `by_pattern`: per-endpoint-pattern breakdown of the same counters
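As an illustration of the shape only, a result might look like the following. Every number is invented for the example, and the `/simple/price` pattern label is hypothetical:

```python
# Illustrative cache_stats result. All values are made up; the
# by_pattern label "/simple/price" is a hypothetical endpoint pattern.
example = {
    "entries": 12,
    "max_entries": 256,
    "hits": 40,
    "misses": 10,
    "sets": 10,
    "errors": 0,
    "hit_rate": 0.8,
    "by_pattern": {
        "/simple/price": {"hits": 30, "misses": 5, "sets": 5, "errors": 0},
    },
}

# hit_rate is derived from the cumulative counters:
assert example["hit_rate"] == example["hits"] / (example["hits"] + example["misses"])
```

A `hit_rate` near 0.0 after many requests suggests the cache is not absorbing repeat traffic, which is the usual lead when debugging 429 errors.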
## Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| *No arguments* | | | |
## Output Schema

| Name | Type | Description |
|---|---|---|
| `entries` | integer | Number of live cached entries |
| `max_entries` | integer | LRU eviction threshold |
| `hits` | integer | Cumulative cache hits since process start |
| `misses` | integer | Cumulative cache misses since process start |
| `sets` | integer | Cumulative cache writes since process start |
| `errors` | integer | Cumulative cache errors since process start |
| `hit_rate` | number | hits / (hits + misses), 0.0 if no requests yet |
| `by_pattern` | object | Per-endpoint-pattern breakdown of the same counters |
## Implementation Reference
- `coin_mcp/cache.py:268-287` (handler) — The `cache_stats` tool handler: an async function decorated with `@mcp.tool()` that returns HTTP-cache statistics by delegating to the `get_stats()` helper.
```python
@mcp.tool()
async def cache_stats() -> dict[str, Any]:
    """Return current HTTP-cache statistics.

    Reach for this when:
    - The user asks why a price/value looks stale or is identical to a
      previous query (responses may be served from cache up to the
      per-endpoint TTL).
    - You're debugging rate-limit (HTTP 429) errors and want to confirm
      the cache is doing its job.
    - The user explicitly asks about cache utilization or hit rate.

    Returns:
        Dict with keys:
            entries: number of live cached entries
            max_entries: LRU eviction threshold
            hits, misses, sets, errors: cumulative counters since process start
            hit_rate: hits / (hits + misses), 0.0 if no requests yet
            by_pattern: per-endpoint-pattern breakdown of the same counters
    """
    return get_stats()
```

- `coin_mcp/cache.py:243-259` (helper) — The `get_stats()` helper that `cache_stats` delegates to; it reads the global counters and per-pattern counters to assemble the stats dictionary.
```python
def get_stats() -> dict[str, Any]:
    hits = _global_counters["hits"]
    misses = _global_counters["misses"]
    total = hits + misses
    hit_rate = (hits / total) if total else 0.0
    return {
        "entries": len(_store),
        "max_entries": MAX_ENTRIES,
        "hits": hits,
        "misses": misses,
        "sets": _global_counters["sets"],
        "errors": _global_counters["errors"],
        "hit_rate": round(hit_rate, 4),
        "by_pattern": {
            label: dict(counts) for label, counts in _pattern_counters.items()
        },
    }
```

- `coin_mcp/cache.py:268-269` (registration) — The tool is registered via the `@mcp.tool()` decorator on the `cache_stats` async function; the `mcp` instance is imported from `coin_mcp.core`.
```python
@mcp.tool()
async def cache_stats() -> dict[str, Any]:
```

- `coin_mcp/core.py:333-336` (registration) — The cache module (containing `cache_stats`) is imported at the bottom of `core.py` so its `@mcp.tool()` decorators run at module load time.
```python
# Import last so the cache module's @mcp.tool() registrations run at module
# load. cache imports `mcp` and `DEFAULT_TIMEOUT` from this module, so this
# must come AFTER those names exist.
from . import cache  # noqa: E402,F401
```

- `coin_mcp/cache.py:280-287` (schema) — The output schema for `cache_stats` is defined in its docstring: a dict with `entries`, `max_entries`, `hits`, `misses`, `sets`, `errors`, `hit_rate`, and `by_pattern`.
```python
    Returns:
        Dict with keys:
            entries: number of live cached entries
            max_entries: LRU eviction threshold
            hits, misses, sets, errors: cumulative counters since process start
            hit_rate: hits / (hits + misses), 0.0 if no requests yet
            by_pattern: per-endpoint-pattern breakdown of the same counters
    """
    return get_stats()
```
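The `hit_rate` arithmetic in `get_stats()` can be reproduced standalone. This minimal sketch (the `hit_rate` helper name is ours, not part of the module) mirrors the zero-division guard and the 4-decimal-place rounding shown above:

```python
def hit_rate(hits: int, misses: int) -> float:
    # Mirror get_stats(): report 0.0 when no requests have been made yet,
    # otherwise hits / (hits + misses) rounded to 4 decimal places.
    total = hits + misses
    return round(hits / total, 4) if total else 0.0

print(hit_rate(0, 0))  # 0.0 before any traffic
print(hit_rate(7, 3))  # 0.7
print(hit_rate(2, 1))  # 0.6667
```

The zero-division guard is why a freshly started server reports `hit_rate: 0.0` rather than raising an error on the first `cache_stats` call.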