
trace_usage

Analyzes how client code interacts with MCP tools by identifying tool calls and tracking which result properties the caller accesses.

Instructions

Trace how client code uses MCP tools. Finds callTool() invocations and tracks which properties are accessed on results.

Input Schema

| Name    | Required | Description                             | Default |
| ------- | -------- | --------------------------------------- | ------- |
| rootDir | Yes      | Root directory of consumer source code  |         |
| include | No       | Glob patterns to include                |         |
| exclude | No       | Glob patterns to exclude                |         |
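A minimal sketch of the arguments an agent might pass to this tool, mirroring the schema above (the directory and glob values are purely illustrative):

```typescript
// Shape of the trace_usage arguments, matching the input schema above.
interface TraceUsageArgs {
  rootDir: string;    // required: root of the consumer/client codebase
  include?: string[]; // optional glob patterns to include
  exclude?: string[]; // optional glob patterns to exclude
}

// Illustrative arguments; the paths are hypothetical.
const args: TraceUsageArgs = {
  rootDir: "./consumer-app/src",
  include: ["**/*.ts"],
  exclude: ["**/*.test.ts", "node_modules/**"],
};
```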

Implementation Reference

  • Main handler for the 'trace_usage' MCP tool. Parses input using TraceUsageInput schema, calls traceConsumerUsage from src/trace/index.ts, and formats the result as JSON.
    case 'trace_usage': {
      const input = TraceUsageInput.parse(args);
      log(`Tracing usage in: ${input.rootDir}`);
      
      const usage = await traceConsumerUsage({
        rootDir: input.rootDir,
        include: input.include,
        exclude: input.exclude,
      });
      
      log(`Found ${usage.length} tool calls`);
      
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify({
              success: true,
              count: usage.length,
              usage,
            }, null, 2),
          },
        ],
      };
    }
  • Zod input schema validation for the trace_usage tool.
    const TraceUsageInput = z.object({
      rootDir: z.string().describe('Root directory of consumer/client source code'),
      include: z.array(z.string()).optional().describe('Glob patterns to include'),
      exclude: z.array(z.string()).optional().describe('Glob patterns to exclude'),
    });
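For readers without zod at hand, the same validation can be hand-rolled; this is a sketch of the equivalent checks only (the actual tool uses the zod schema shown above):

```typescript
// Hand-rolled equivalent of the TraceUsageInput checks (sketch only;
// the real tool validates with zod's TraceUsageInput.parse).
interface TraceUsageInputShape {
  rootDir: string;
  include?: string[];
  exclude?: string[];
}

function parseTraceUsageInput(raw: unknown): TraceUsageInputShape {
  const obj = (raw ?? {}) as Record<string, unknown>;
  const rootDir = obj.rootDir;
  if (typeof rootDir !== "string") {
    throw new Error("rootDir is required and must be a string");
  }
  // Validate an optional array-of-strings field, returning it typed.
  const globs = (value: unknown, name: string): string[] | undefined => {
    if (value === undefined) return undefined;
    if (!Array.isArray(value) || !value.every((s) => typeof s === "string")) {
      throw new Error(`${name} must be an array of strings`);
    }
    return value;
  };
  return {
    rootDir,
    include: globs(obj.include, "include"),
    exclude: globs(obj.exclude, "exclude"),
  };
}
```

Like the zod schema, this rejects a missing `rootDir` and passes the optional glob arrays through unchanged.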
  • src/index.ts:154-166 (registration)
    Registration of the 'trace_usage' tool in the ListTools response, including name, description, and inputSchema.
    {
      name: 'trace_usage',
      description: 'Trace how client code uses MCP tools. Finds callTool() invocations and tracks which properties are accessed on results.',
      inputSchema: {
        type: 'object',
        properties: {
          rootDir: { type: 'string', description: 'Root directory of consumer source code' },
          include: { type: 'array', items: { type: 'string' }, description: 'Glob patterns to include' },
          exclude: { type: 'array', items: { type: 'string' }, description: 'Glob patterns to exclude' },
        },
        required: ['rootDir'],
      },
    },
  • Core helper function traceConsumerUsage that delegates to the language-specific parser's traceUsage method.
    export async function traceConsumerUsage(
      options: TracerOptions
    ): Promise<ConsumerSchema[]> {
      // For backward compatibility, default to TypeScript
      const language = options.language || 'typescript';
    
      // Get parser from registry
      if (!hasParser(language)) {
        throw new Error(
          `No parser available for language: ${language}. Make sure to call bootstrapLanguageParsers() at startup.`
        );
      }
    
      const parser = getParser(language);
    
      return parser.traceUsage({
        rootDir: options.rootDir,
        callPatterns: options.callPatterns,
        include: options.include,
        exclude: options.exclude,
      });
    }
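The registry helpers (`hasParser`, `getParser`, `bootstrapLanguageParsers`) are not shown in the excerpt above. A minimal sketch of how such a registry might look, inferred from how `traceConsumerUsage` uses it (the interface and internals are assumptions, not the actual implementation):

```typescript
// Hypothetical parser-registry sketch. The LanguageParser interface is
// inferred from the traceUsage call in traceConsumerUsage above.
interface LanguageParser {
  traceUsage(opts: {
    rootDir: string;
    callPatterns?: string[];
    include?: string[];
    exclude?: string[];
  }): Promise<unknown[]>;
}

const registry = new Map<string, LanguageParser>();

function registerParser(language: string, parser: LanguageParser): void {
  registry.set(language, parser);
}

function hasParser(language: string): boolean {
  return registry.has(language);
}

function getParser(language: string): LanguageParser {
  const parser = registry.get(language);
  if (!parser) throw new Error(`No parser for language: ${language}`);
  return parser;
}

// Hypothetical bootstrap: register a stub TypeScript parser at startup.
function bootstrapLanguageParsers(): void {
  registerParser("typescript", {
    traceUsage: async () => [],
  });
}
```

This explains the error message in `traceConsumerUsage`: if `bootstrapLanguageParsers()` is never called, the registry is empty and `hasParser` returns false for every language.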
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. While it explains what the tool does (static analysis of code), it doesn't mention performance characteristics, output format, error conditions, or whether it modifies files. For a tool analyzing source code, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise: two clear sentences that directly explain the tool's function without wasted words. It is front-loaded with the core purpose and follows with the specific tracking details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters, no annotations, and no output schema, the description adequately explains what the tool does but leaves important gaps. The agent knows it traces MCP tool usage but doesn't know what the output looks like, how comprehensive the analysis is, or what behavioral constraints exist.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all three parameters. The description doesn't add any parameter-specific information beyond what's in the schema, but doesn't need to since the schema is comprehensive. This meets the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('trace', 'finds', 'tracks') and resources ('client code', 'MCP tools', 'callTool() invocations', 'properties accessed on results'). It distinguishes itself from siblings like trace_file by focusing on usage patterns rather than file-level analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like trace_file or other analysis tools. It doesn't mention prerequisites, typical use cases, or exclusions, leaving the agent to infer usage context from the purpose alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
