
CentralMind/Gateway

README.md
---
title: LRU Cache Plugin
---

Implements LRU (Least Recently Used) caching for query responses.

## Type

- Wrapper

## Description

Caches query responses using an LRU strategy, with a configurable cache size and TTL.

## Configuration

```yaml
lru_cache:
  max_size: 1000  # Maximum number of entries in the cache
  ttl: "5m"       # Time-to-live for cached entries
```
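For context, the sketch below shows one common way to combine an LRU eviction policy with a TTL, mirroring the `max_size` and `ttl` settings above: a doubly linked list tracks recency while a map gives O(1) lookup, and entries older than the TTL are treated as misses and evicted. This is a minimal Go illustration; the type and method names (`lruCache`, `Get`, `Put`) are assumptions for the example, not the plugin's actual implementation.

```go
package main

import (
	"container/list"
	"fmt"
	"time"
)

// entry is one cached query response plus the time it was stored.
type entry struct {
	key      string
	value    string
	storedAt time.Time
}

// lruCache is an illustrative LRU cache with TTL-based expiry.
type lruCache struct {
	maxSize int
	ttl     time.Duration
	order   *list.List               // front = most recently used
	items   map[string]*list.Element // key -> list element holding *entry
}

func newLRUCache(maxSize int, ttl time.Duration) *lruCache {
	return &lruCache{
		maxSize: maxSize,
		ttl:     ttl,
		order:   list.New(),
		items:   make(map[string]*list.Element),
	}
}

// Get returns a cached value; expired entries count as misses and are evicted.
func (c *lruCache) Get(key string) (string, bool) {
	el, ok := c.items[key]
	if !ok {
		return "", false
	}
	e := el.Value.(*entry)
	if time.Since(e.storedAt) > c.ttl {
		c.order.Remove(el)
		delete(c.items, key)
		return "", false
	}
	c.order.MoveToFront(el) // mark as most recently used
	return e.value, true
}

// Put stores a value, evicting the least recently used entry when the cache is full.
func (c *lruCache) Put(key, value string) {
	if el, ok := c.items[key]; ok {
		e := el.Value.(*entry)
		e.value = value
		e.storedAt = time.Now()
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.maxSize {
		if oldest := c.order.Back(); oldest != nil {
			c.order.Remove(oldest)
			delete(c.items, oldest.Value.(*entry).key)
		}
	}
	c.items[key] = c.order.PushFront(&entry{key: key, value: value, storedAt: time.Now()})
}

func main() {
	// Mirrors the example configuration: max_size: 1000, ttl: "5m".
	cache := newLRUCache(1000, 5*time.Minute)
	cache.Put("SELECT * FROM users LIMIT 10", `[{"id":1}]`)
	if v, ok := cache.Get("SELECT * FROM users LIMIT 10"); ok {
		fmt.Println("cache hit:", v)
	}
}
```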

MCP directory API

We provide all the information about MCP servers via our MCP API.

```bash
curl -X GET 'https://glama.ai/api/mcp/v1/servers/centralmind/gateway'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.