get_library_stats

Retrieve aggregate statistics about your Lutris game library, including total games, categories, and collection insights from the database.

Instructions

Get aggregate statistics about the Lutris game library

Input Schema


No arguments

Implementation Reference

  • The actual business logic that queries the database to calculate library statistics.
    export function getLibraryStats(): LibraryStats {
      const db = getDatabase();
    
      const totals = db
        .prepare(
          "SELECT COUNT(*) as total, SUM(CASE WHEN installed = 1 THEN 1 ELSE 0 END) as installed, COALESCE(SUM(playtime), 0) as playtime FROM games"
        )
        .get() as { total: number; installed: number; playtime: number };
    
      const topByPlaytime = db
        .prepare(
          "SELECT name, playtime FROM games WHERE playtime > 0 ORDER BY playtime DESC LIMIT 10"
        )
        .all() as { name: string; playtime: number }[];
    
      const byRunner = db
        .prepare(
          "SELECT COALESCE(runner, 'unknown') as runner, COUNT(*) as count FROM games GROUP BY runner ORDER BY count DESC"
        )
        .all() as { runner: string; count: number }[];
    
      const byPlatform = db
        .prepare(
          "SELECT COALESCE(platform, 'unknown') as platform, COUNT(*) as count FROM games GROUP BY platform ORDER BY count DESC"
        )
        .all() as { platform: string; count: number }[];
    
      const byService = db
        .prepare(
          "SELECT COALESCE(service, 'none') as service, COUNT(*) as count FROM games GROUP BY service ORDER BY count DESC"
        )
        .all() as { service: string; count: number }[];
    
      const recentlyPlayed = db
        .prepare(
          "SELECT name, lastplayed FROM games WHERE lastplayed IS NOT NULL AND lastplayed > 0 ORDER BY lastplayed DESC LIMIT 10"
        )
        .all() as { name: string; lastplayed: number }[];
    
      return {
        total_games: totals.total,
        installed_games: totals.installed,
        total_playtime_hours: Math.round((totals.playtime / 60) * 100) / 100,
        top_games_by_playtime: topByPlaytime,
        games_by_runner: byRunner,
        games_by_platform: byPlatform,
        games_by_service: byService,
        recently_played: recentlyPlayed,
      };
    }
  • MCP tool definition and handler that calls the database query.
    server.tool(
      "get_library_stats",
      "Get aggregate statistics about the Lutris game library",
      {},
      async () => {
        try {
          const stats = getLibraryStats();
          return {
            content: [{ type: "text", text: JSON.stringify(stats, null, 2) }],
          };
        } catch (error) {
          const msg = error instanceof Error ? error.message : String(error);
          return { content: [{ type: "text", text: `Error: ${msg}` }], isError: true };
        }
      }
    );
  • Function to register the stats tools with the MCP server.
    export function registerStatsTools(server: McpServer) {
      server.tool(
        "get_library_stats",
        "Get aggregate statistics about the Lutris game library",
        {},
        async () => {
          try {
            const stats = getLibraryStats();
            return {
              content: [{ type: "text", text: JSON.stringify(stats, null, 2) }],
            };
          } catch (error) {
            const msg = error instanceof Error ? error.message : String(error);
            return { content: [{ type: "text", text: `Error: ${msg}` }], isError: true };
          }
        }
      );
    }
  • Type definition for the library statistics object.
    export interface LibraryStats {
      total_games: number;
      installed_games: number;
      total_playtime_hours: number;
      top_games_by_playtime: { name: string; playtime: number }[];
      games_by_runner: { runner: string; count: number }[];
      games_by_platform: { platform: string; count: number }[];
      games_by_service: { service: string; count: number }[];
      recently_played: { name: string; lastplayed: number }[];
    }
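One non-obvious detail in the implementation above is the `total_playtime_hours` conversion, which divides the summed playtime by 60 before rounding to two decimals. A minimal sketch of that rounding, assuming (as the division by 60 suggests) that the database stores playtime in minutes:

```typescript
// Sketch of the rounding used for total_playtime_hours: minutes -> hours,
// rounded to two decimal places. Assumes the stored unit is minutes.
function minutesToHours(minutes: number): number {
  return Math.round((minutes / 60) * 100) / 100;
}

console.log(minutesToHours(90));  // 1.5
console.log(minutesToHours(125)); // 2.08
```

Multiplying by 100 before `Math.round` and dividing back is a common idiom for fixed two-decimal rounding without string formatting.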
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states what the tool does but omits behavioral details: it doesn't specify whether this is a read-only operation, what permissions are needed, how the output is formatted, or whether any rate limits apply. For a tool with no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
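The gap flagged here could be closed with the behavioral hints the MCP specification defines for tools. A hypothetical sketch of annotations this tool could declare (the field names come from the MCP tool-annotations spec; the values reflect what the implementation shown above actually does):

```typescript
// Hypothetical annotations for get_library_stats, using the hint fields
// defined by the MCP specification for tools.
const statsToolAnnotations = {
  title: "Get Library Stats",
  readOnlyHint: true,     // only reads from the local Lutris database
  destructiveHint: false, // never modifies or deletes data
  idempotentHint: true,   // repeated calls with no args return the same shape
  openWorldHint: false,   // purely local; no network access
};

console.log(statsToolAnnotations.readOnlyHint); // true
```

Since these are advisory hints, a clear description is still needed, but declaring them lets agents filter tools by safety before reading any prose.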

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without any fluff. It's appropriately sized and front-loaded, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters and no output schema, the description is minimal but covers the basic purpose. However, it lacks details on what 'statistics' includes (e.g., counts, averages) and behavioral context, which would help an agent use it effectively. It's adequate but has clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema defines no parameters, so no parameter documentation is needed. The description adds no parameter information, but that's acceptable here: it implies no inputs are required, which aligns with the schema. The baseline score for zero-parameter tools is 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('aggregate statistics about the Lutris game library'), making the purpose understandable. However, it doesn't explicitly differentiate this tool from siblings like 'list_games' or 'search_service_games'—it implies aggregate data vs. listing, but could be more specific about what 'statistics' includes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'list_games' and 'search_service_games', it's unclear if this tool is for summaries, counts, or other metrics, and there's no mention of prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
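As an illustration only, a description addressing this critique might fold in the output summary and the sibling-tool guidance (the sibling names `list_games` and `search_service_games` come from the review text above; the wording is hypothetical):

```typescript
// Hypothetical expanded description for get_library_stats that discloses
// read-only behavior, output contents, and when to prefer sibling tools.
const expandedDescription =
  "Get aggregate, read-only statistics about the local Lutris game library: " +
  "total and installed game counts, total playtime in hours, top games by " +
  "playtime, breakdowns by runner/platform/service, and recently played games. " +
  "Use list_games to enumerate individual games, or search_service_games to " +
  "search a specific store/service instead.";

console.log(expandedDescription.includes("list_games")); // true
```

A description like this keeps the front-loaded single-purpose opening that earned 5/5 on conciseness while adding the differentiation the rubric asks for.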
