get_cached_response
Retrieve paginated slices of cached Meraki API responses using offset and limit to avoid context overflow.
Instructions
Retrieve a paginated slice of a cached response from a file.

IMPORTANT: This tool returns paginated data to avoid context overflow. For full data access, use command-line tools: `cat <filepath> | jq`

Args:

- `filepath`: Path to the cached response file (from the `_full_response_cached` field)
- `offset`: Starting index for pagination (default: 0)
- `limit`: Maximum number of items to return (default: 10, max: 100)

Examples:

```python
get_cached_response(filepath="...", offset=0, limit=10)    # First 10 items
get_cached_response(filepath="...", offset=10, limit=10)   # Next 10 items
get_cached_response(filepath="...", offset=0, limit=100)   # First 100 items
```
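The offset/limit contract can be exercised with a simple client-side loop that follows `_has_more` and `_next_offset` until the cache is exhausted. A minimal sketch, using a local stub in place of the real tool call (the stub only mirrors the pagination envelope the handler builds for list data; it is not the actual server):

```python
import json

# Stub standing in for the real get_cached_response tool call.
# It mirrors the pagination envelope the handler returns for list data.
CACHED = [{"id": i} for i in range(23)]  # pretend cached response

def get_cached_response_stub(offset: int = 0, limit: int = 10) -> str:
    limit = min(limit, 100)  # same cap as the handler
    page = CACHED[offset:offset + limit]
    has_more = (offset + limit) < len(CACHED)
    return json.dumps({
        "_paginated": True,
        "_total_items": len(CACHED),
        "_returned_items": len(page),
        "_has_more": has_more,
        "_next_offset": offset + limit if has_more else None,
        "data": page,
    })

# Walk every page by following _next_offset until _has_more is False.
items, offset = [], 0
while True:
    envelope = json.loads(get_cached_response_stub(offset=offset, limit=10))
    items.extend(envelope["data"])
    if not envelope["_has_more"]:
        break
    offset = envelope["_next_offset"]

print(len(items))  # 23 -- all items recovered across three pages
```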
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| filepath | Yes | Path to the cached response file (from the `_full_response_cached` field) | |
| offset | No | Starting index for pagination | 0 |
| limit | No | Maximum number of items to return (capped at 100) | 10 |
Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| result | Yes | JSON-encoded string: the paginated slice, the full data, or an error/warning object | |
Implementation Reference
- meraki-mcp-dynamic.py:672-758 (handler): The `@mcp.tool()`-decorated async function `get_cached_response` that implements the tool logic. It loads a cached response from a file, validates that the filepath is within the cache directory, and returns paginated data (with offset/limit). For list data it slices the results and provides pagination hints; for non-list data it checks the token size and returns either the data or a warning.
```python
@mcp.tool()
async def get_cached_response(filepath: str, offset: int = 0, limit: int = 10) -> str:
    """
    Retrieve a paginated slice of a cached response from a file

    IMPORTANT: This tool returns paginated data to avoid context overflow.
    For full data access, use command-line tools: cat <filepath> | jq

    Args:
        filepath: Path to the cached response file (from _full_response_cached field)
        offset: Starting index for pagination (default: 0)
        limit: Maximum number of items to return (default: 10, max: 100)

    Examples:
        get_cached_response(filepath="...", offset=0, limit=10)    # First 10 items
        get_cached_response(filepath="...", offset=10, limit=10)   # Next 10 items
        get_cached_response(filepath="...", offset=0, limit=100)   # First 100 items
    """
    try:
        # Enforce maximum limit
        if limit > 100:
            limit = 100

        # Validate path is inside cache directory before any file access
        try:
            _validate_cache_filepath(filepath)
        except ValueError as e:
            return json.dumps({
                "error": "Invalid filepath",
                "message": str(e)
            }, indent=2)

        data = load_response_from_file(filepath)
        if data is None:
            return json.dumps({
                "error": "Could not load cached response",
                "filepath": filepath
            }, indent=2)

        # Handle list pagination
        if isinstance(data, list):
            total_items = len(data)
            paginated_data = data[offset:offset + limit]

            return json.dumps({
                "_paginated": True,
                "_total_items": total_items,
                "_offset": offset,
                "_limit": limit,
                "_returned_items": len(paginated_data),
                "_has_more": (offset + limit) < total_items,
                "_next_offset": offset + limit if (offset + limit) < total_items else None,
                "_hints": {
                    "next_page": f"get_cached_response(filepath='{filepath}', offset={offset + limit}, limit={limit})" if (offset + limit) < total_items else "No more pages",
                    "full_data_cli": f"cat {filepath} | jq '.data'",
                    "search_cli": f"cat {filepath} | jq '.data[] | select(.field == \"value\")'",
                    "count_cli": f"cat {filepath} | jq '.data | length'"
                },
                "data": paginated_data
            }, indent=2)
        else:
            # Non-list data - check size and potentially truncate
            data_json = json.dumps(data)
            estimated_tokens = estimate_token_count(data_json)

            if estimated_tokens > MAX_RESPONSE_TOKENS:
                return json.dumps({
                    "_warning": "Response too large for MCP context",
                    "_estimated_tokens": estimated_tokens,
                    "_max_allowed_tokens": MAX_RESPONSE_TOKENS,
                    "_recommendation": "Use command-line tools to access this data",
                    "_hints": {
                        "view_all": f"cat {filepath} | jq '.data'",
                        "pretty_print": f"cat {filepath} | jq '.'",
                        "extract_field": f"cat {filepath} | jq '.data.fieldName'",
                        "search": f"grep 'search-term' {filepath}"
                    },
                    "_preview": str(data)[:500] + "..." if len(str(data)) > 500 else data
                }, indent=2)

            return json.dumps(data, indent=2)

    except Exception as e:
        return json.dumps({
            "error": str(e),
            "filepath": filepath
        }, indent=2)
```

- meraki-mcp-dynamic.py:673-689 (schema): Input parameters defined in the function signature: `filepath` (str, required), `offset` (int, default 0), `limit` (int, default 10). The docstring serves as the schema description for the MCP tool.
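The handler's size guard depends on `estimate_token_count` and `MAX_RESPONSE_TOKENS`, which are not shown in this excerpt. A plausible sketch of what such a helper does, using the common rough heuristic of four characters per token (both the ratio and the 20,000-token ceiling below are assumptions, not the file's actual values):

```python
import json

MAX_RESPONSE_TOKENS = 20_000  # assumed ceiling; the real constant lives in meraki-mcp-dynamic.py

def estimate_token_count(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English/JSON text (assumption)."""
    return len(text) // 4

# An oversized non-list payload trips the guard, so the handler would
# return the "_warning" envelope with CLI hints instead of the data.
data = {"config": "x" * 100_000}
data_json = json.dumps(data)
too_big = estimate_token_count(data_json) > MAX_RESPONSE_TOKENS
print(too_big)  # True
```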
```python
async def get_cached_response(filepath: str, offset: int = 0, limit: int = 10) -> str:
    """
    Retrieve a paginated slice of a cached response from a file

    IMPORTANT: This tool returns paginated data to avoid context overflow.
    For full data access, use command-line tools: cat <filepath> | jq

    Args:
        filepath: Path to the cached response file (from _full_response_cached field)
        offset: Starting index for pagination (default: 0)
        limit: Maximum number of items to return (default: 10, max: 100)

    Examples:
        get_cached_response(filepath="...", offset=0, limit=10)    # First 10 items
        get_cached_response(filepath="...", offset=10, limit=10)   # Next 10 items
        get_cached_response(filepath="...", offset=0, limit=100)   # First 100 items
    """
```

- meraki-mcp-dynamic.py:672-672 (registration): The tool is registered via the `@mcp.tool()` decorator on line 672, which registers it as an MCP tool with the FastMCP server instance.

```python
@mcp.tool()
```

- meraki-mcp-dynamic.py:143-154 (helper): The `_validate_cache_filepath` helper function used by `get_cached_response` to validate that the filepath is inside RESPONSE_CACHE_DIR (prevents path traversal).
```python
def _validate_cache_filepath(filepath: str) -> str:
    """Resolve filepath and confirm it is inside RESPONSE_CACHE_DIR.

    Returns the resolved absolute path string.
    Raises ValueError if the path escapes the cache directory.
    """
    cache_root = Path(RESPONSE_CACHE_DIR).resolve()
    resolved = Path(filepath).resolve()
    if not str(resolved).startswith(str(cache_root) + os.sep) and resolved != cache_root:
        raise ValueError(
            f"filepath must be inside the cache directory ({cache_root})"
        )
    return str(resolved)
```

- meraki-mcp-dynamic.py:156-166 (helper): The `load_response_from_file` helper function used by `get_cached_response` to load cached JSON data from a file.

```python
def load_response_from_file(filepath: str) -> Any:
    """Load cached response from file"""
    try:
        safe_filepath = _validate_cache_filepath(filepath)
        with open(safe_filepath, 'r') as f:
            cached = json.load(f)
        return cached.get('data')
    except ValueError:
        return None
    except Exception as e:
        return None
```
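The containment check above can be exercised standalone: resolve the candidate path, then reject anything that does not sit under the cache root. A self-contained sketch of the same rule (the temporary directory here stands in for the server's real `RESPONSE_CACHE_DIR`):

```python
import os
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as cache_dir:
    cache_root = Path(cache_dir).resolve()

    def validate(filepath: str) -> str:
        """Same containment rule as _validate_cache_filepath: resolve, then prefix-check."""
        resolved = Path(filepath).resolve()
        if not str(resolved).startswith(str(cache_root) + os.sep) and resolved != cache_root:
            raise ValueError(f"filepath must be inside the cache directory ({cache_root})")
        return str(resolved)

    # A path inside the cache directory is accepted as-is.
    inside = validate(str(cache_root / "response_123.json"))

    # A traversal attempt resolves outside the root and is rejected.
    try:
        validate(str(cache_root / ".." / "etc" / "passwd"))
        escaped = False
    except ValueError:
        escaped = True

print(escaped)  # True
```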