
llmkit_local_projects

Read-only · Idempotent

Track cumulative AI coding costs across all projects and sessions. Monitor spending from 11 AI providers to manage budgets and analyze usage patterns.

Instructions

Cumulative cost across all projects and sessions from all detected AI coding tools, ranked by spend.

Input Schema


No arguments

Output Schema

Name          Required
projects      Yes
totalCostUsd  Yes
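
For orientation, the structured content of a successful call is shaped like the sketch below. All values are illustrative; only the field names come from the schema above.

    // Illustrative result matching the output schema; every value here is made up.
    const example = {
      projects: [
        {
          source: 'claude-code',
          project: '/home/user/my-app',
          sessionCount: 12,
          totalCost: 4.31,
          totalMessages: 240,
          topModel: 'example-model',
        },
      ],
      totalCostUsd: 4.31,
    };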

Implementation Reference

  • The handler function that retrieves and formats project cost data from AI coding tool adapters (a sketch of the helper types it relies on follows this list):
    export async function handleLocalProjects() {
      // Adapters for locally installed AI coding tools (e.g. Claude Code, Cline).
      const active = await detectAdapters();
      if (active.length === 0) return fail('No AI coding tool data found. Works with Claude Code and Cline.');

      // Gather project summaries from every adapter; Promise.allSettled keeps
      // one failing adapter from sinking the whole request.
      const allProjects: LocalProjectSummary[] = [];
      const results = await Promise.allSettled(active.map(a => a.getProjects()));
      for (const r of results) {
        if (r.status === 'fulfilled') allProjects.push(...r.value);
      }

      if (allProjects.length === 0) return fail('No project data found.');

      // Rank projects by spend (descending) and compute the grand total.
      allProjects.sort((a, b) => b.totalCost - a.totalCost);
      const totalCost = allProjects.reduce((s, p) => s + p.totalCost, 0);

      const lines = [
        'Project Costs (cumulative, all tools)',
        '\u2500'.repeat(25),
        `${allProjects.length} projects, $${totalCost.toFixed(2)} total`,
        '',
      ];

      // One line per project: cost, sessions, messages, token volume, and source tool.
      for (const p of allProjects) {
        const tokens = p.totalInputTokens + p.totalOutputTokens;
        lines.push(`${p.project}: $${p.totalCost.toFixed(2)} across ${p.sessionCount} sessions (${p.totalMessages} msgs, ${(tokens / 1000).toFixed(0)}k tokens) [${p.source}]`);
      }

      // ok() pairs the human-readable text with structured content matching the output schema.
      return ok(lines.join('\n'), { projects: allProjects, totalCostUsd: totalCost });
    }
  • The definition and schema registration of the llmkit_local_projects tool.
    {
      name: 'llmkit_local_projects',
      description: 'Cumulative cost across all projects and sessions from all detected AI coding tools, ranked by spend.',
      inputSchema: { type: 'object' as const, properties: {} },
      outputSchema: {
        type: 'object' as const,
        properties: {
          projects: {
            type: 'array',
            items: {
              type: 'object',
              properties: {
                source: { type: 'string' },
                project: { type: 'string' },
                sessionCount: { type: 'number' },
                totalCost: { type: 'number' },
                totalMessages: { type: 'number' },
                topModel: { type: 'string' },
              },
            },
          },
          totalCostUsd: { type: 'number' },
        },
        required: ['projects', 'totalCostUsd'],
      },
      annotations: { title: 'Project Costs', ...HINTS },
    },
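
The handler above leans on helpers (detectAdapters, ok, fail) and a LocalProjectSummary type that this excerpt doesn't show. Inferred from the fields the handler reads and the output schema, the data contract plausibly looks like the sketch below; the interface names and comments are assumptions.

    // Summary record: field names are grounded in the handler and output schema above.
    interface LocalProjectSummary {
      source: string;            // which tool produced the data, e.g. 'claude-code'
      project: string;           // project path or name
      sessionCount: number;
      totalCost: number;         // cumulative USD spend
      totalMessages: number;
      totalInputTokens: number;
      totalOutputTokens: number;
      topModel?: string;         // listed in the output schema
    }

    // Adapter contract implied by active.map(a => a.getProjects()).
    interface ToolAdapter {
      getProjects(): Promise<LocalProjectSummary[]>;
    }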
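
Given the registration above, the tool is invoked like any other MCP tool. A minimal client sketch using the official TypeScript SDK, assuming the server runs over stdio (the launch command and entry point are placeholders):

    import { Client } from '@modelcontextprotocol/sdk/client/index.js';
    import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

    // Placeholder launch command; adjust to however the server is actually started.
    const transport = new StdioClientTransport({ command: 'node', args: ['dist/index.js'] });
    const client = new Client({ name: 'example-client', version: '1.0.0' });
    await client.connect(transport);

    // The tool takes no arguments, so an empty object suffices.
    const result = await client.callTool({ name: 'llmkit_local_projects', arguments: {} });
    console.log(result);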
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a read-only, non-destructive, idempotent operation with a closed-world scope. The description adds context about what data is retrieved (cumulative costs, ranked by spend) and the source (all detected AI coding tools), which is useful but doesn't disclose additional behavioral traits like rate limits, authentication needs, or data freshness. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, dense sentence that efficiently conveys the core purpose without any fluff. It is front-loaded with key information (cumulative cost, scope, ranking) and avoids redundant or verbose phrasing. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool takes zero parameters, carries rich annotations (read-only, idempotent, etc.), and ships an output schema, the description provides adequate context for a simple query tool. It explains what data is returned (costs ranked by spend) and the scope (all projects/sessions from AI tools). However, it could better differentiate itself from sibling tools to enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, so there is nothing for the schema to describe. The description doesn't need to explain parameters, and it implicitly confirms no inputs are required by focusing solely on output semantics. This meets the baseline for zero-parameter tools, though it could explicitly state 'no parameters required' for clarity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to retrieve cumulative cost data across all projects and sessions from AI coding tools, ranked by spend. It specifies the resource (cost data) and scope (all projects/sessions, ranked). However, it doesn't explicitly differentiate from siblings like 'llmkit_cost_query' or 'llmkit_usage_stats', which likely handle similar cost/usage data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, timing considerations, or compare it to sibling tools like 'llmkit_cost_query' or 'llmkit_usage_stats' that might offer overlapping functionality. The user must infer usage from the purpose alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/smigolsmigol/llmkit-mcp-server'
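
The same request from TypeScript, as a minimal sketch (the response shape isn't documented here, so it is just logged):

    const res = await fetch('https://glama.ai/api/mcp/v1/servers/smigolsmigol/llmkit-mcp-server');
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    console.log(await res.json());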

If you have feedback or need assistance with the MCP directory API, please join our Discord server.