
ShallowCodeResearch_get_cache_status

Check cache status and statistics to monitor data retrieval efficiency and system performance in research workflows.

Instructions

Get cache status and statistics.

Input Schema

This tool takes no arguments.

Implementation Reference

  • app.py:457-473 (handler)
    Handler function that executes the get_cache_status tool logic, checks for advanced features, and delegates to cache_manager.get_cache_status()
```python
def get_cache_status() -> Dict[str, Any]:
    """Get cache status and statistics."""
    if not ADVANCED_FEATURES_AVAILABLE:
        return {
            "status": "basic_mode",
            "message": "Cache monitoring not available. Install 'pip install psutil aiohttp' to enable cache statistics.",
            "cache_info": {
                "caching_available": False,
                "recommendation": "Install advanced features for intelligent caching"
            }
        }
    try:
        from mcp_hub.cache_utils import cache_manager
        return cache_manager.get_cache_status()
    except Exception as e:
        return {"error": f"Cache status failed: {str(e)}"}
```
  • app.py:1078-1083 (registration)
    Gradio MCP tool registration for get_cache_status_service which calls the handler fn=get_cache_status
```python
cache_btn.click(
    fn=get_cache_status,
    inputs=[],
    outputs=cache_output,
    api_name="get_cache_status_service"
)
```
  • CacheManager.get_cache_status — the method providing the file-based cache statistics used by the main handler
```python
def get_cache_status(self) -> Dict[str, Any]:
    """Get detailed status information about the cache system."""
    try:
        # Count cache files
        cache_files = list(self.cache_dir.glob("*.cache"))
        cache_count = len(cache_files)

        # Calculate cache directory size
        total_size = sum(f.stat().st_size for f in cache_files)

        # Count expired files
        expired_count = 0
        current_time = datetime.now()
        for cache_file in cache_files:
            try:
                with open(cache_file, 'rb') as f:
                    cache_data = pickle.load(f)
                if current_time > cache_data['expires_at']:
                    expired_count += 1
            except Exception:
                expired_count += 1  # Count corrupted files as expired

        # Get cache stats
        return {
            "status": "healthy",
            "cache_dir": str(self.cache_dir),
            "total_files": cache_count,
            "expired_files": expired_count,
            "total_size_bytes": total_size,
            "total_size_mb": round(total_size / (1024 * 1024), 2),
            "default_ttl_seconds": self.default_ttl,
            "timestamp": datetime.now().isoformat()
        }
    except Exception as e:
        logger.error(f"Failed to get cache status: {str(e)}")
        return {
            "status": "error",
            "error": str(e),
            "timestamp": datetime.now().isoformat()
        }
```
  • RedisCacheBackend.get_cache_status — the method providing Redis-based cache statistics, used when a Redis backend is configured
```python
def get_cache_status(self) -> Dict[str, Any]:
    """Get detailed status information about the cache.

    Returns:
        Dictionary with cache status information
    """
    try:
        # Get Redis info
        info = self.client.info()

        # Count keys with our prefix
        pattern = f"{self.key_prefix}*"
        cursor = 0
        key_count = 0
        while True:
            cursor, keys = self.client.scan(cursor, match=pattern, count=100)
            key_count += len(keys)
            if cursor == 0:
                break

        # Get memory usage
        memory_used = info.get("used_memory", 0)
        memory_used_human = info.get("used_memory_human", "0B")

        return {
            "status": "healthy",
            "backend": "redis",
            "redis_version": info.get("redis_version", "unknown"),
            "connected_clients": info.get("connected_clients", 0),
            "total_keys": key_count,
            "memory_used_bytes": memory_used,
            "memory_used_human": memory_used_human,
            "default_ttl_seconds": self.default_ttl,
            "key_prefix": self.key_prefix,
            "timestamp": datetime.now().isoformat(),
        }
    except Exception as e:
        logger.error(f"Failed to get cache status: {e}")
        return {
            "status": "error",
            "backend": "redis",
            "error": str(e),
            "timestamp": datetime.now().isoformat(),
        }
```
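The defining behavior of the main handler is graceful degradation: every branch returns a dict, so callers never need their own error handling. The self-contained sketch below reproduces that shape in isolation; `advanced_features_available` is an illustrative stand-in for app.py's `ADVANCED_FEATURES_AVAILABLE` flag, and when `mcp_hub` (or psutil/aiohttp) is not installed the call simply falls through to the structured fallback instead of raising.

```python
from typing import Any, Dict

def advanced_features_available() -> bool:
    """Illustrative stand-in for app.py's ADVANCED_FEATURES_AVAILABLE flag."""
    try:
        import psutil   # noqa: F401
        import aiohttp  # noqa: F401
    except ImportError:
        return False
    return True

def get_cache_status() -> Dict[str, Any]:
    # Degrade to a structured "basic_mode" payload when optional deps are missing.
    if not advanced_features_available():
        return {
            "status": "basic_mode",
            "cache_info": {"caching_available": False},
        }
    try:
        # This import will fail outside the real project; the except branch
        # still yields a dict, so the contract holds either way.
        from mcp_hub.cache_utils import cache_manager
        return cache_manager.get_cache_status()
    except Exception as e:
        return {"error": f"Cache status failed: {e}"}

result = get_cache_status()
print(result)
```

Whatever the environment, `result` is a plain dict, which is what makes the tool safe to expose directly over MCP.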
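The file-based statistics logic can be exercised end-to-end without the real CacheManager. The sketch below writes two pickled entries carrying an `expires_at` field into a temporary directory and then classifies them the same way the method above does; `summarize_cache` is a hypothetical helper written for this example, not part of the codebase.

```python
import pickle
import tempfile
from datetime import datetime, timedelta
from pathlib import Path
from typing import Any, Dict

def summarize_cache(cache_dir: Path) -> Dict[str, Any]:
    """Mirror the glob/pickle/expires_at scan used by CacheManager.get_cache_status."""
    cache_files = list(cache_dir.glob("*.cache"))
    total_size = sum(f.stat().st_size for f in cache_files)
    expired = 0
    now = datetime.now()
    for cache_file in cache_files:
        try:
            with open(cache_file, "rb") as f:
                entry = pickle.load(f)
            if now > entry["expires_at"]:
                expired += 1
        except Exception:
            expired += 1  # treat unreadable files as expired, like the real method
    return {
        "total_files": len(cache_files),
        "expired_files": expired,
        "total_size_mb": round(total_size / (1024 * 1024), 2),
    }

with tempfile.TemporaryDirectory() as tmp:
    cache_dir = Path(tmp)
    # One entry expiring an hour from now, one that expired an hour ago.
    for name, delta in [("fresh", timedelta(hours=1)), ("stale", -timedelta(hours=1))]:
        with open(cache_dir / f"{name}.cache", "wb") as f:
            pickle.dump({"value": 42, "expires_at": datetime.now() + delta}, f)
    stats = summarize_cache(cache_dir)
    print(stats)
```

Counting corrupted files as expired is a deliberate choice: a cache entry that cannot be unpickled is as useless as a stale one, so reporting both under `expired_files` gives a single "reclaimable" figure.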
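A detail worth noting in the Redis branch is that it counts keys with cursor-based SCAN rather than KEYS, so it never blocks the server on a large keyspace. The cursor loop can be demonstrated against a tiny in-memory stand-in; `FakeRedis` below is illustrative only (the real code uses redis-py's `client.scan()`), with a small `count` so the loop visibly takes several iterations.

```python
import fnmatch
from typing import List, Tuple

class FakeRedis:
    """Minimal stand-in exposing redis-py's scan(cursor, match=..., count=...) shape."""

    def __init__(self, keys: List[str]):
        self._keys = keys

    def scan(self, cursor: int, match: str, count: int) -> Tuple[int, List[str]]:
        matched = [k for k in self._keys if fnmatch.fnmatch(k, match)]
        page = matched[cursor:cursor + count]
        next_cursor = cursor + count
        if next_cursor >= len(matched):
            next_cursor = 0  # Redis signals completion with cursor 0
        return next_cursor, page

def count_keys(client, prefix: str) -> int:
    """The same SCAN loop as RedisCacheBackend.get_cache_status."""
    pattern = f"{prefix}*"
    cursor, total = 0, 0
    while True:
        cursor, keys = client.scan(cursor, match=pattern, count=2)
        total += len(keys)
        if cursor == 0:
            break
    return total

client = FakeRedis(["mcp:a", "mcp:b", "mcp:c", "other:x"])
print(count_keys(client, "mcp:"))  # 3
```

Because SCAN returns results in pages, the loop must keep going until the server hands back cursor 0; stopping on an empty page would be wrong, since SCAN may legitimately return empty batches mid-iteration.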

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/CodeHalwell/gradio-mcp-agent-hack'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.