track_api_event

Record API usage events for analytics, tracking endpoints, status codes, and latency to monitor performance and usage patterns.

Instructions

Track an API usage event for analytics. Cost: $0.0001 USDC. Service: heatmap.

Input Schema

Name          Required   Description   Default
api_key       Yes
endpoint      Yes
status_code   No
latency_ms    No
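
The example below is a minimal sketch of invoking this tool from an MCP client via the official TypeScript SDK. The server launch command, the client metadata, and all argument values are placeholders for illustration, not taken from this server's documentation.

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Spawn the MCP server over stdio (hypothetical launch command).
    const transport = new StdioClientTransport({
      command: "node",
      args: ["dist/index.js"],
    });

    const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
    await client.connect(transport);

    // One call records one usage event; per the description it costs $0.0001 USDC.
    const result = await client.callTool({
      name: "track_api_event",
      arguments: {
        api_key: "YOUR_HEATMAP_API_KEY", // required (placeholder value)
        endpoint: "/v1/users",           // required: endpoint the event was observed on
        status_code: 200,                // optional
        latency_ms: 42,                  // optional
      },
    });

    console.log(result.content);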

Implementation Reference

  • The tool 'track_api_event' is resolved dynamically at runtime from a tool registry: the CallToolRequestSchema handler fetches the current registry, looks up the requested tool by name, and executes it via the callTool function.
    server.setRequestHandler(CallToolRequestSchema, async (request) => {
      const { name, arguments: args } = request.params;

      // Fetch the live tool registry; on failure, return a structured error payload.
      let registry: Registry;
      try {
        registry = await fetchRegistry();
      } catch (error) {
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify({ error: "Failed to fetch tool registry", detail: String(error) }),
            },
          ],
        };
      }

      // Look up the requested tool; if it is missing, list the tools that do exist.
      const tool = registry.tools.find((t) => t.name === name);
      if (!tool) {
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify({
                error: `Tool '${name}' not found`,
                available_tools: registry.tools.map((t) => t.name),
              }),
            },
          ],
        };
      }

      // Execute the resolved tool and return its result as pretty-printed JSON text.
      try {
        const result = await callTool(tool, args as Record<string, unknown>);
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify(result, null, 2),
            },
          ],
        };
      } catch (error) {
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify({
                error: "Tool call failed",
                tool: name,
                service: tool.service,
                detail: String(error),
              }),
            },
          ],
        };
      }
    });
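
The handler above depends on two helpers that the excerpt does not define: fetchRegistry and callTool. The sketch below shows one plausible shape for them, assuming the registry is plain JSON served over HTTP and each tool entry carries the URL of its backing service. The registry URL, the endpoint field, and the POST-JSON convention are all assumptions, not the server's confirmed implementation.

    interface Tool {
      name: string;
      service: string;
      endpoint: string; // hypothetical: URL of the tool's backing service
    }

    interface Registry {
      tools: Tool[];
    }

    // Hypothetical registry location; the real source is not shown in the excerpt.
    const REGISTRY_URL = "https://example.com/tools/registry.json";

    async function fetchRegistry(): Promise<Registry> {
      const res = await fetch(REGISTRY_URL);
      if (!res.ok) throw new Error(`Registry fetch failed: HTTP ${res.status}`);
      return (await res.json()) as Registry;
    }

    // Forward the arguments to the tool's backing service and return its JSON response.
    async function callTool(tool: Tool, args: Record<string, unknown>): Promise<unknown> {
      const res = await fetch(tool.endpoint, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(args),
      });
      if (!res.ok) throw new Error(`${tool.service} returned HTTP ${res.status}`);
      return res.json();
    }
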
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds value by specifying the cost ('$0.0001 USDC') and the service ('heatmap'), neither of which appears in the schema, giving the agent some context about financial implications and the analytics service involved. However, it says nothing about permissions, rate limits, or what happens after an event is recorded, so behavioral disclosure is only partial.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded, consisting of a single sentence that states the purpose, cost, and service. Every word earns its place with no waste, making it easy to scan and understand quickly. The structure is efficient and to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with four parameters, no annotations, and no output schema, the description is incomplete. It offers no parameter explanations, no usage guidelines, and no details on behavior such as error handling or response format. Mentioning cost and service is not enough for a tool whose parameters are otherwise undocumented and that carries no structured safety hints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the tool description must compensate for the missing parameter documentation. It does not: it mentions no parameters or their meanings, adding no semantics beyond the bare schema. With four parameters (api_key, endpoint, status_code, latency_ms) left unexplained, significant gaps remain in understanding what each one represents; a hypothetical annotated schema is sketched after this section.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
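
As a hypothetical illustration of the fix, the input schema could attach a description to each field so the bare names carry semantics. The wording below is invented for the example, not the server's actual schema.

    const inputSchema = {
      type: "object",
      properties: {
        api_key: {
          type: "string",
          description: "Heatmap API key used to authenticate the event (hypothetical wording)",
        },
        endpoint: {
          type: "string",
          description: "Endpoint path the event was observed on, e.g. /v1/users",
        },
        status_code: {
          type: "integer",
          description: "HTTP status code returned by the endpoint",
        },
        latency_ms: {
          type: "number",
          description: "Request latency in milliseconds",
        },
      },
      required: ["api_key", "endpoint"],
    };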

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Track an API usage event for analytics.' It pairs a specific verb ('track') with a resource ('API usage event') and names the service ('heatmap'). However, it does not differentiate itself from sibling analytics and verification tools such as 'analyze_call' or 'score_trend', leaving some ambiguity about its unique role.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. The cost ('$0.0001 USDC') and service ('heatmap') supply some financial and service context, but there are no explicit when/when-not instructions and no references to sibling tools. Without that usage context, it is unclear how the tool fits among its many analytics-related siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
