
Starwind UI MCP Server

by starwind-ui

fetch_llm_data

Retrieve LLM data from starwind.dev with built-in rate limiting and caching to manage API usage efficiently.

Instructions

Fetches LLM data from starwind.dev (rate limited to 3 requests per minute, with caching)

Input Schema

Name | Required | Description                                        | Default
full | No       | Whether to fetch the full LLM data                 | false

Implementation Reference

  • The main handler function that executes the tool: checks cache, enforces rate limiting, fetches data from starwind.dev APIs if needed, and returns structured response with metadata.
    handler: async (args: LlmDataFetcherArgs) => {
      // Determine which URL to use
      const isFull = args.full === true;
      const url = isFull ? "https://starwind.dev/llms-full.txt" : "https://starwind.dev/llms.txt";
      const cacheKey = `llm_data_${isFull ? "full" : "standard"}`;
      const cacheTtl = isFull ? CACHE_TTL.FULL_LLM_DATA : CACHE_TTL.STANDARD_LLM_DATA;
    
      // Check cache first
      const cachedData = dataCache.get(cacheKey);
      if (cachedData) {
        const cacheInfo = dataCache.getInfo(cacheKey);
    
        return {
          url,
          data: cachedData,
          timestamp: new Date().toISOString(),
          source: "cache",
          cacheInfo: {
            age: cacheInfo ? `${cacheInfo.age} seconds` : undefined,
            remainingTtl: cacheInfo ? `${cacheInfo.remainingTtl} seconds` : undefined,
          },
          rateLimitInfo: {
            requestsRemaining: rateLimiter.getRemainingCalls(),
            resetAfter: rateLimiter.getResetTimeSeconds() + " seconds",
          },
        };
      }
    
      // If not in cache, check rate limiting
      if (!rateLimiter.canMakeCall()) {
        throw new Error(
          "Rate limit exceeded. Please try again later (limit: 3 requests per minute).",
        );
      }
    
      // Record this call
      rateLimiter.recordCall();
    
      try {
        // Use native fetch
        const response = await fetch(url);
    
        if (!response.ok) {
          throw new Error(`Failed to fetch data: ${response.status} ${response.statusText}`);
        }
    
        const data = await response.text();
    
        // Store in cache
        dataCache.set(cacheKey, data, cacheTtl);
    
        return {
          url,
          data,
          timestamp: new Date().toISOString(),
          source: "network",
          cacheInfo: {
            ttl: cacheTtl + " seconds",
          },
          rateLimitInfo: {
            requestsRemaining: rateLimiter.getRemainingCalls(),
            resetAfter: rateLimiter.getResetTimeSeconds() + " seconds",
          },
        };
      } catch (error: any) {
        throw new Error(`Error fetching LLM data: ${error.message}`);
      }
    },
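Based on the handler above, the returned object takes roughly the following shape. This is a sketch inferred from the code, not an official contract from the server; the optional `cacheInfo` fields differ between cache hits and network fetches, and the sample TTL value is hypothetical.

```typescript
// Return shape inferred from the handler above (not an official type).
interface LlmDataFetchResult {
  url: string;                      // which starwind.dev endpoint was used
  data: string;                     // raw text of llms.txt or llms-full.txt
  timestamp: string;                // ISO-8601 time of this response
  source: "cache" | "network";      // where the data came from
  cacheInfo: {
    age?: string;                   // cache hits only, e.g. "12 seconds"
    remainingTtl?: string;          // cache hits only
    ttl?: string;                   // network fetches only
  };
  rateLimitInfo: {
    requestsRemaining: number;
    resetAfter: string;             // e.g. "42 seconds"
  };
}

// Example value for a network fetch (TTL value is hypothetical):
const example: LlmDataFetchResult = {
  url: "https://starwind.dev/llms.txt",
  data: "...",
  timestamp: new Date().toISOString(),
  source: "network",
  cacheInfo: { ttl: "1800 seconds" },
  rateLimitInfo: { requestsRemaining: 2, resetAfter: "60 seconds" },
};
```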
  • JSON schema defining the tool's input parameters: optional 'full' boolean to choose between standard or full LLM data.
    inputSchema: {
      type: "object",
      properties: {
        full: {
          type: "boolean",
          description: "Whether to fetch the full LLM data (defaults to false)",
        },
      },
      required: [],
    },
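The schema maps to a small argument type. The sketch below mirrors the handler's `args.full === true` check; the `resolveFull` helper is illustrative, not part of the server, and shows that anything other than an explicit `true` falls back to the standard data set.

```typescript
// Argument type implied by the JSON schema above.
interface LlmDataFetcherArgs {
  full?: boolean;
}

// Illustrative helper (not in the server): mirrors the handler's strict
// `args.full === true` check, so omitted or false both mean standard data.
function resolveFull(args: LlmDataFetcherArgs): boolean {
  return args.full === true;
}
```

Calling `resolveFull({})` yields `false`, matching the documented default.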
  • Registers the fetch_llm_data tool into the central tools Map used for MCP server tool handling and capabilities.
    // Register LLM data fetcher tool
    tools.set(llmDataFetcherTool.name, llmDataFetcherTool);
  • DataCache class providing get/set/getInfo methods with TTL-based expiration, used to cache fetched LLM data.
    class DataCache {
      private cache: Map<string, CacheEntry> = new Map();
    
      /**
       * Get data from cache if available and not expired
       * @param key Cache key
       * @returns Cached data or undefined if not found/expired
       */
      get(key: string): string | undefined {
        const entry = this.cache.get(key);
        if (!entry) {
          return undefined;
        }
    
        // Check if entry has expired
        if (Date.now() > entry.expiresAt) {
          this.cache.delete(key);
          return undefined;
        }
    
        return entry.data;
      }
    
      /**
       * Store data in cache with TTL
       * @param key Cache key
       * @param data Data to cache
       * @param ttlSeconds Time to live in seconds
       */
      set(key: string, data: string, ttlSeconds: number): void {
        const now = Date.now();
        this.cache.set(key, {
          data,
          timestamp: now,
          expiresAt: now + ttlSeconds * 1000,
        });
      }
    
      /**
       * Get information about cache entry
       * @param key Cache key
       * @returns Info about cache entry or undefined if not found
       */
      getInfo(key: string): { age: number; remainingTtl: number } | undefined {
        const entry = this.cache.get(key);
        if (!entry) {
          return undefined;
        }
    
        const now = Date.now();
        return {
          age: Math.floor((now - entry.timestamp) / 1000), // seconds
          remainingTtl: Math.floor((entry.expiresAt - now) / 1000), // seconds
        };
      }
    }
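The lazy-expiry behavior of the cache can be seen in a condensed, runnable sketch. This is a trimmed copy of the class above for illustration only; the negative TTL is a demo trick to force immediate expiry without waiting.

```typescript
interface CacheEntry { data: string; timestamp: number; expiresAt: number; }

// Trimmed copy of DataCache for demonstration purposes.
class DemoCache {
  private cache = new Map<string, CacheEntry>();

  get(key: string): string | undefined {
    const entry = this.cache.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {   // expired: evicted lazily on read
      this.cache.delete(key);
      return undefined;
    }
    return entry.data;
  }

  set(key: string, data: string, ttlSeconds: number): void {
    const now = Date.now();
    this.cache.set(key, { data, timestamp: now, expiresAt: now + ttlSeconds * 1000 });
  }
}

const cache = new DemoCache();
cache.set("fresh", "hello", 60);     // expires in a minute
cache.set("stale", "bye", -1);       // already expired (demo trick)

console.log(cache.get("fresh"));     // "hello"
console.log(cache.get("stale"));     // undefined: dropped on read
```

Note that expired entries are never swept proactively; they linger until the next `get` touches them, which is fine for a cache with a handful of keys.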
  • RateLimiter class enforcing 3 calls per minute limit, with methods to check allowance, record calls, and provide status info, used to prevent API abuse.
    class RateLimiter {
      private lastCallTimes: number[] = [];
      private maxCallsPerMinute: number;
    
      constructor(maxCallsPerMinute: number = 3) {
        this.maxCallsPerMinute = maxCallsPerMinute;
      }
    
      /**
       * Check if a call can be made based on rate limits
       * @returns true if call is allowed, false if rate limited
       */
      canMakeCall(): boolean {
        const now = Date.now();
        const oneMinuteAgo = now - 60 * 1000;
    
        // Remove timestamps older than one minute
        this.lastCallTimes = this.lastCallTimes.filter((time) => time > oneMinuteAgo);
    
        // Check if we've reached the limit
        return this.lastCallTimes.length < this.maxCallsPerMinute;
      }
    
      /**
       * Record a new call
       */
      recordCall(): void {
        this.lastCallTimes.push(Date.now());
      }
    
      /**
       * Get the maximum number of calls allowed per minute
       */
      getMaxCallsPerMinute(): number {
        return this.maxCallsPerMinute;
      }
    
      /**
       * Get the number of remaining calls allowed in the current minute
       */
      getRemainingCalls(): number {
        return this.maxCallsPerMinute - this.lastCallTimes.length;
      }
    
      /**
       * Get the time in seconds until the rate limit resets
       */
      getResetTimeSeconds(): number {
        if (this.lastCallTimes.length === 0) {
          return 0;
        }
        return Math.max(0, Math.ceil(60 - (Date.now() - this.lastCallTimes[0]) / 1000));
      }
    }
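The sliding-window logic can be exercised with a short, runnable sketch. This is a trimmed copy of the class above for illustration; after three recorded calls, a fourth attempt within the same minute is refused.

```typescript
// Trimmed copy of RateLimiter for demonstration purposes.
class DemoRateLimiter {
  private lastCallTimes: number[] = [];

  constructor(private maxCallsPerMinute = 3) {}

  canMakeCall(): boolean {
    const oneMinuteAgo = Date.now() - 60_000;
    // Drop timestamps that have aged out of the one-minute window.
    this.lastCallTimes = this.lastCallTimes.filter((t) => t > oneMinuteAgo);
    return this.lastCallTimes.length < this.maxCallsPerMinute;
  }

  recordCall(): void {
    this.lastCallTimes.push(Date.now());
  }

  getRemainingCalls(): number {
    return this.maxCallsPerMinute - this.lastCallTimes.length;
  }
}

const limiter = new DemoRateLimiter(3);
for (let i = 0; i < 3; i++) {
  if (limiter.canMakeCall()) limiter.recordCall();
}

console.log(limiter.getRemainingCalls()); // 0
console.log(limiter.canMakeCall());       // false: fourth call in the window is refused
```

Because the window slides rather than resets on a fixed boundary, capacity returns one call at a time as each recorded timestamp ages past sixty seconds.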
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It helpfully adds context about rate limits ('3 requests per minute') and caching behavior, which are important operational constraints not evident from the schema. However, it doesn't describe what 'LLM data' actually contains or the response format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: a single sentence that efficiently communicates the core purpose plus two key behavioral constraints (rate limiting and caching). Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter tool with no output schema, the description provides adequate but incomplete context. It covers the basic purpose and some behavioral constraints, but doesn't explain what 'LLM data' actually is or what format it returns. The agent would need to infer or test to understand the output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single parameter 'full' with its type and default. The description doesn't add any meaning about parameters beyond what the schema provides, maintaining the baseline score of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('fetches') and resource ('LLM data from starwind.dev'), providing a specific verb+resource combination. However, it doesn't differentiate this tool from its siblings (like get_documentation or get_package_manager) which also appear to be retrieval operations, so it doesn't achieve the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any specific use cases, prerequisites, or comparisons with sibling tools like get_documentation. The agent must infer usage from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
