
get_cached_response

Retrieve paginated slices of cached Meraki API responses using offset and limit to avoid context overflow.

Instructions

Retrieve a paginated slice of a cached response from a file

IMPORTANT: This tool returns paginated data to avoid context overflow. For full data access, use command-line tools: cat <filepath> | jq

Args:
    filepath: Path to the cached response file (from _full_response_cached field)
    offset: Starting index for pagination (default: 0)
    limit: Maximum number of items to return (default: 10, max: 100)

Examples:
    get_cached_response(filepath="...", offset=0, limit=10)   # First 10 items
    get_cached_response(filepath="...", offset=10, limit=10)  # Next 10 items
    get_cached_response(filepath="...", offset=0, limit=100)  # First 100 items
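The slicing behavior these examples exercise can be sketched in a few lines of plain Python. This is a simplified model of the tool's pagination logic, not the server code itself:

```python
def paginate(items, offset=0, limit=10):
    # Mirror the tool's behavior: cap limit at 100, slice the cached
    # list, and report whether another page remains.
    limit = min(limit, 100)
    page = items[offset:offset + limit]
    has_more = (offset + limit) < len(items)
    next_offset = offset + limit if has_more else None
    return page, has_more, next_offset

# 25 cached items, second page of 10:
page, has_more, next_offset = paginate(list(range(25)), offset=10, limit=10)
# page is [10..19], has_more is True, next_offset is 20
```

An agent would keep calling with `next_offset` until `_has_more` is false.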

Input Schema

Name      Required  Description                                                          Default
filepath  Yes       Path to the cached response file (from _full_response_cached field)  -
offset    No        Starting index for pagination                                        0
limit     No        Maximum number of items to return (max: 100)                         10

Output Schema

Name    Required  Description                                    Default
result  Yes       JSON string containing the paginated response  -

Implementation Reference

  • The @mcp.tool() decorated async function 'get_cached_response' that implements the tool logic. It loads a cached response from a file, validates the filepath is within the cache directory, and returns paginated data (with offset/limit). For list data it slices results and provides pagination hints; for non-list data it checks token size and returns data or a warning.
    @mcp.tool()
    async def get_cached_response(filepath: str, offset: int = 0, limit: int = 10) -> str:
        """
        Retrieve a paginated slice of a cached response from a file
    
        IMPORTANT: This tool returns paginated data to avoid context overflow.
        For full data access, use command-line tools: cat <filepath> | jq
    
        Args:
            filepath: Path to the cached response file (from _full_response_cached field)
            offset: Starting index for pagination (default: 0)
            limit: Maximum number of items to return (default: 10, max: 100)
    
        Examples:
            get_cached_response(filepath="...", offset=0, limit=10)   # First 10 items
            get_cached_response(filepath="...", offset=10, limit=10)  # Next 10 items
            get_cached_response(filepath="...", offset=0, limit=100)  # First 100 items
        """
        try:
            # Enforce maximum limit
            if limit > 100:
                limit = 100
    
            # Validate path is inside cache directory before any file access
            try:
                _validate_cache_filepath(filepath)
            except ValueError as e:
                return json.dumps({
                    "error": "Invalid filepath",
                    "message": str(e)
                }, indent=2)
    
            data = load_response_from_file(filepath)
            if data is None:
                return json.dumps({
                    "error": "Could not load cached response",
                    "filepath": filepath
                }, indent=2)
    
            # Handle list pagination
            if isinstance(data, list):
                total_items = len(data)
                paginated_data = data[offset:offset + limit]
    
                return json.dumps({
                    "_paginated": True,
                    "_total_items": total_items,
                    "_offset": offset,
                    "_limit": limit,
                    "_returned_items": len(paginated_data),
                    "_has_more": (offset + limit) < total_items,
                    "_next_offset": offset + limit if (offset + limit) < total_items else None,
                    "_hints": {
                        "next_page": f"get_cached_response(filepath='{filepath}', offset={offset + limit}, limit={limit})" if (offset + limit) < total_items else "No more pages",
                        "full_data_cli": f"cat {filepath} | jq '.data'",
                        "search_cli": f"cat {filepath} | jq '.data[] | select(.field == \"value\")'",
                        "count_cli": f"cat {filepath} | jq '.data | length'"
                    },
                    "data": paginated_data
                }, indent=2)
            else:
                # Non-list data - check size and potentially truncate
                data_json = json.dumps(data)
                estimated_tokens = estimate_token_count(data_json)
    
                if estimated_tokens > MAX_RESPONSE_TOKENS:
                    return json.dumps({
                        "_warning": "Response too large for MCP context",
                        "_estimated_tokens": estimated_tokens,
                        "_max_allowed_tokens": MAX_RESPONSE_TOKENS,
                        "_recommendation": "Use command-line tools to access this data",
                        "_hints": {
                            "view_all": f"cat {filepath} | jq '.data'",
                            "pretty_print": f"cat {filepath} | jq '.'",
                            "extract_field": f"cat {filepath} | jq '.data.fieldName'",
                            "search": f"grep 'search-term' {filepath}"
                        },
                        "_preview": str(data)[:500] + "..." if len(str(data)) > 500 else data
                    }, indent=2)
    
                return json.dumps(data, indent=2)
    
        except Exception as e:
            return json.dumps({
                "error": str(e),
                "filepath": filepath
            }, indent=2)
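The non-list branch relies on an `estimate_token_count` helper and a `MAX_RESPONSE_TOKENS` constant that are not shown in this reference. A common heuristic for such estimators (an assumption here, not necessarily this server's implementation) is roughly four characters per token:

```python
MAX_RESPONSE_TOKENS = 5000  # hypothetical cap; the real value is server-defined

def estimate_token_count(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English/JSON text.
    return len(text) // 4

small = '{"status": "ok"}'
# A tiny payload estimates well under the cap, so it is returned as-is
# rather than replaced with the "_warning" envelope.
assert estimate_token_count(small) < MAX_RESPONSE_TOKENS
```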
  • Input parameters defined in the function signature: filepath (str, required), offset (int, default 0), limit (int, default 10). The docstring serves as the schema description for the MCP tool.
    async def get_cached_response(filepath: str, offset: int = 0, limit: int = 10) -> str:
        """
        Retrieve a paginated slice of a cached response from a file
    
        IMPORTANT: This tool returns paginated data to avoid context overflow.
        For full data access, use command-line tools: cat <filepath> | jq
    
        Args:
            filepath: Path to the cached response file (from _full_response_cached field)
            offset: Starting index for pagination (default: 0)
            limit: Maximum number of items to return (default: 10, max: 100)
    
        Examples:
            get_cached_response(filepath="...", offset=0, limit=10)   # First 10 items
            get_cached_response(filepath="...", offset=10, limit=10)  # Next 10 items
            get_cached_response(filepath="...", offset=0, limit=100)  # First 100 items
        """
  • The tool is registered via the @mcp.tool() decorator on line 672, which registers it as an MCP tool with the FastMCP server instance.
    @mcp.tool()
  • The _validate_cache_filepath helper function used by get_cached_response to validate that the filepath is inside RESPONSE_CACHE_DIR (prevents path traversal).
    def _validate_cache_filepath(filepath: str) -> str:
        """Resolve filepath and confirm it is inside RESPONSE_CACHE_DIR.
        Returns the resolved absolute path string.
        Raises ValueError if the path escapes the cache directory.
        """
        cache_root = Path(RESPONSE_CACHE_DIR).resolve()
        resolved = Path(filepath).resolve()
        if not str(resolved).startswith(str(cache_root) + os.sep) and resolved != cache_root:
            raise ValueError(
                f"filepath must be inside the cache directory ({cache_root})"
            )
        return str(resolved)
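The containment check can be exercised on its own. The sketch below re-implements the same logic with an illustrative cache root (the real `RESPONSE_CACHE_DIR` is server-configured):

```python
import os
from pathlib import Path

CACHE_ROOT = Path("/tmp/meraki_cache")  # illustrative cache directory

def validate_cache_filepath(filepath: str) -> str:
    # Resolve both sides, then require the candidate to live under the
    # cache root; this blocks "../" traversal and symlink escapes.
    cache_root = CACHE_ROOT.resolve()
    resolved = Path(filepath).resolve()
    if not str(resolved).startswith(str(cache_root) + os.sep) and resolved != cache_root:
        raise ValueError(f"filepath must be inside the cache directory ({cache_root})")
    return str(resolved)

validate_cache_filepath("/tmp/meraki_cache/resp_123.json")       # accepted
# validate_cache_filepath("/tmp/meraki_cache/../../etc/passwd")  # raises ValueError
```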
  • The load_response_from_file helper function used by get_cached_response to load cached JSON data from a file.
    def load_response_from_file(filepath: str) -> Any:
        """Load cached response from file"""
        try:
            safe_filepath = _validate_cache_filepath(filepath)
            with open(safe_filepath, 'r') as f:
                cached = json.load(f)
                return cached.get('data')
        except ValueError:
            return None
        except Exception:
            return None
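As `load_response_from_file` suggests, each cache file is expected to be a JSON object whose payload sits under a top-level "data" key. A quick round-trip through a temporary file (bypassing the cache-directory validation, purely for illustration) shows the envelope:

```python
import json
import tempfile

# Write a cache-style envelope: the payload lives under "data".
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"data": [{"serial": "Q2XX-0001"}, {"serial": "Q2XX-0002"}]}, f)
    path = f.name

# Read it back the way the loader does: parse, then take .get("data").
with open(path) as f:
    cached = json.load(f)
payload = cached.get("data")
# payload is the two-item list written above
```

A file without a "data" key would make the loader yield None, which the tool reports as "Could not load cached response".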
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses pagination behavior, offset/limit defaults and max, and context overflow warning. No annotations provided, but description covers key behavioral aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise with structured sections (IMPORTANT, Args, Examples). Every sentence provides value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a 3-parameter tool with output schema. Includes examples and warnings, leaving no critical gaps for agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, but description adds detailed meaning for all three parameters: filepath source, offset starting index, limit max items. Examples further clarify usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Retrieve a paginated slice of a cached response from a file', which is a specific verb+resource. It distinguishes from sibling tools like 'list_cached_responses' and 'cache_clear'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use (paginated access) and when not to (full data via CLI tools), providing clear usage guidance and alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
