fetch_llm_data

Retrieves LLM data from starwind.dev, rate limited to 3 requests per minute and backed by a TTL cache. Supports fetching either the full dataset or the standard summary for efficient integration with the Starwind UI MCP Server.

Instructions

Fetches LLM data from starwind.dev (rate limited to 3 requests per minute, with caching)

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| `full` | No | Whether to fetch the full LLM data | `false` |
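The `full` flag simply switches between two documents on starwind.dev. A minimal sketch of that selection (the URLs are the ones used by the tool's handler shown under Implementation Reference; the helper name is illustrative):

```typescript
// Maps the optional `full` flag to the document fetched from starwind.dev.
function urlFor(full: boolean = false): string {
  return full
    ? "https://starwind.dev/llms-full.txt"
    : "https://starwind.dev/llms.txt";
}
```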

Implementation Reference

  • Core execution logic for the fetch_llm_data tool: parses arguments, checks the DataCache, enforces rate limiting with RateLimiter, fetches data from the starwind.dev endpoints, and returns a structured response.
```typescript
handler: async (args: LlmDataFetcherArgs) => {
  // Determine which URL to use
  const isFull = args.full === true;
  const url = isFull
    ? "https://starwind.dev/llms-full.txt"
    : "https://starwind.dev/llms.txt";
  const cacheKey = `llm_data_${isFull ? "full" : "standard"}`;
  const cacheTtl = isFull ? CACHE_TTL.FULL_LLM_DATA : CACHE_TTL.STANDARD_LLM_DATA;

  // Check cache first
  const cachedData = dataCache.get(cacheKey);
  if (cachedData) {
    const cacheInfo = dataCache.getInfo(cacheKey);
    return {
      url,
      data: cachedData,
      timestamp: new Date().toISOString(),
      source: "cache",
      cacheInfo: {
        age: cacheInfo?.age + " seconds",
        remainingTtl: cacheInfo?.remainingTtl + " seconds",
      },
      rateLimitInfo: {
        requestsRemaining: rateLimiter.getRemainingCalls(),
        resetAfter: rateLimiter.getResetTimeSeconds() + " seconds",
      },
    };
  }

  // If not in cache, check rate limiting
  if (!rateLimiter.canMakeCall()) {
    throw new Error(
      "Rate limit exceeded. Please try again later (limit: 3 requests per minute).",
    );
  }

  // Record this call
  rateLimiter.recordCall();

  try {
    // Use native fetch
    const response = await fetch(url);
    if (!response.ok) {
      throw new Error(`Failed to fetch data: ${response.status} ${response.statusText}`);
    }
    const data = await response.text();

    // Store in cache
    dataCache.set(cacheKey, data, cacheTtl);

    return {
      url,
      data,
      timestamp: new Date().toISOString(),
      source: "network",
      cacheInfo: {
        ttl: cacheTtl + " seconds",
      },
      rateLimitInfo: {
        requestsRemaining: rateLimiter.getRemainingCalls(),
        resetAfter: rateLimiter.getResetTimeSeconds() + " seconds",
      },
    };
  } catch (error: any) {
    throw new Error(`Error fetching LLM data: ${error.message}`);
  }
},
```
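The handler calls `canMakeCall`, `recordCall`, `getRemainingCalls`, and `getResetTimeSeconds` on a `RateLimiter` that is not reproduced on this page. A plausible sliding-window sketch with those method names (the internals here are assumed, not taken from the source):

```typescript
// Hypothetical sliding-window rate limiter: allows `maxCalls` calls
// per `windowMs` milliseconds. Method names match the handler's usage.
class RateLimiter {
  private calls: number[] = [];

  constructor(
    private maxCalls: number = 3,
    private windowMs: number = 60_000,
  ) {}

  // Drop call timestamps that have fallen out of the window.
  private prune(): void {
    const cutoff = Date.now() - this.windowMs;
    this.calls = this.calls.filter((t) => t > cutoff);
  }

  canMakeCall(): boolean {
    this.prune();
    return this.calls.length < this.maxCalls;
  }

  recordCall(): void {
    this.calls.push(Date.now());
  }

  getRemainingCalls(): number {
    this.prune();
    return Math.max(0, this.maxCalls - this.calls.length);
  }

  // Seconds until the oldest recorded call leaves the window.
  getResetTimeSeconds(): number {
    this.prune();
    if (this.calls.length === 0) return 0;
    return Math.ceil((this.calls[0] + this.windowMs - Date.now()) / 1000);
  }
}
```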
  • Input schema defining the optional 'full' boolean parameter for fetching standard or full LLM data.
```typescript
inputSchema: {
  type: "object",
  properties: {
    full: {
      type: "boolean",
      description: "Whether to fetch the full LLM data (defaults to false)",
    },
  },
  required: [],
},
```
  • Registers the llmDataFetcherTool in the central tools Map used by the MCP server for tool discovery and execution.
```typescript
// Register LLM data fetcher tool
tools.set(llmDataFetcherTool.name, llmDataFetcherTool);
```
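A hedged sketch of how such a tools Map is typically consumed when a tool-call request arrives; the `Tool` shape and `dispatch` helper here are illustrative, not the server's actual code:

```typescript
// Illustrative tool shape and dispatcher over a name -> tool Map.
interface Tool {
  name: string;
  handler: (args: Record<string, unknown>) => Promise<unknown>;
}

const tools = new Map<string, Tool>();

// Look up a tool by name and invoke its handler with the request arguments.
async function dispatch(name: string, args: Record<string, unknown>): Promise<unknown> {
  const tool = tools.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.handler(args);
}
```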
  • TypeScript interface defining the input arguments for the tool handler.
```typescript
export interface LlmDataFetcherArgs {
  /** Whether to fetch the full LLM data (defaults to false) */
  full?: boolean;
}
```
  • src/server.ts:34 (registration)
    Calls setupTools to register tool request handlers on the MCP Server instance, enabling tool execution.
```typescript
setupTools(server);
```
  • DataCache class providing get/set/getInfo methods with TTL-based expiration for caching LLM data.
```typescript
class DataCache {
  private cache: Map<string, CacheEntry> = new Map();

  /**
   * Get data from cache if available and not expired
   * @param key Cache key
   * @returns Cached data or undefined if not found/expired
   */
  get(key: string): string | undefined {
    const entry = this.cache.get(key);
    if (!entry) {
      return undefined;
    }
    // Check if entry has expired
    if (Date.now() > entry.expiresAt) {
      this.cache.delete(key);
      return undefined;
    }
    return entry.data;
  }

  /**
   * Store data in cache with TTL
   * @param key Cache key
   * @param data Data to cache
   * @param ttlSeconds Time to live in seconds
   */
  set(key: string, data: string, ttlSeconds: number): void {
    const now = Date.now();
    this.cache.set(key, {
      data,
      timestamp: now,
      expiresAt: now + ttlSeconds * 1000,
    });
  }

  /**
   * Get information about cache entry
   * @param key Cache key
   * @returns Info about cache entry or undefined if not found
   */
  getInfo(key: string): { age: number; remainingTtl: number } | undefined {
    const entry = this.cache.get(key);
    if (!entry) {
      return undefined;
    }
    const now = Date.now();
    return {
      age: Math.floor((now - entry.timestamp) / 1000), // seconds
      remainingTtl: Math.floor((entry.expiresAt - now) / 1000), // seconds
    };
  }
}
```
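`DataCache` stores values of a `CacheEntry` type that is not reproduced on this page. From the fields the class reads and writes, it presumably looks like:

```typescript
// Assumed shape, inferred from DataCache's get/set/getInfo usage.
interface CacheEntry {
  data: string;       // cached response body
  timestamp: number;  // Date.now() when the entry was stored
  expiresAt: number;  // timestamp + ttlSeconds * 1000
}
```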
