
MCP Work History Server

by nocoo

log_activity

Records AI tool usage with metrics like tokens, duration, and cost to a daily worklog file for tracking and analysis.

Instructions

Log AI tool activity to a daily worklog file with comprehensive metrics

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| tool_name | Yes | Name of the AI tool that performed the activity (e.g., 'Warp', 'Claude Code', 'GitHub Copilot') | |
| log_message | Yes | Detailed log message describing what was accomplished | |
| ai_model | No | AI model used (e.g., 'gemini-2.5-pro', 'claude-3-sonnet', 'gpt-4') | |
| tokens_used | No | Total tokens consumed in the request | |
| input_tokens | No | Input tokens used | |
| output_tokens | No | Output tokens generated | |
| context_length | No | Context window length used | |
| duration_ms | No | Duration of the operation in milliseconds | |
| cost_usd | No | Estimated cost in USD | |
| success | No | Whether the operation was successful | true |
| error_message | No | Error message if the operation failed | |
| tags | No | Tags to categorize the activity (e.g., ['coding', 'debugging', 'refactoring']) | |
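
A minimal invocation might pass arguments like the following (all values are illustrative, not taken from the server's own tests):

```javascript
// Illustrative log_activity arguments; field names match the schema above.
const exampleArgs = {
  tool_name: "Claude Code",                       // required
  log_message: "Refactored the auth middleware",  // required
  ai_model: "claude-3-sonnet",
  input_tokens: 1200,
  output_tokens: 450,
  duration_ms: 3400,
  cost_usd: 0.0123,
  tags: ["refactoring"]
};

// The handler rejects calls that are missing either required field.
const isValid = Boolean(exampleArgs.tool_name && exampleArgs.log_message);
console.log(isValid); // true
```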

Implementation Reference

  • The `handleLogActivity` function implements the core logic of the 'log_activity' tool. It destructures input arguments, validates required fields, generates timestamps, constructs a formatted log entry with optional metadata (tokens, duration, cost, etc.), appends it to a daily Markdown worklog file in the 'logs' directory, and returns a success or error message.
    // Assumes a CommonJS module with `const fs = require('fs/promises');`
    // and `const path = require('path');` in scope, plus `__dirname`.
    async handleLogActivity(args) {
      try {
        const { 
          tool_name, 
          log_message,
          ai_model,
          tokens_used,
          input_tokens,
          output_tokens,
          context_length,
          duration_ms,
          cost_usd,
          success = true,
          error_message,
          tags
        } = args;
        
        if (!tool_name || !log_message) {
          throw new Error("tool_name and log_message are required");
        }
    
        const now = new Date();
        const dateStr = now.toISOString().split('T')[0]; // YYYY-MM-DD format
        const timeStr = now.toLocaleTimeString('en-US', { 
          hour12: false, 
          hour: '2-digit', 
          minute: '2-digit' 
        });
    
        const logsDir = path.join(__dirname, '..', 'logs');
        const logFileName = `worklog-${dateStr}.md`;
        const logFilePath = path.join(logsDir, logFileName);
    
        // Ensure logs directory exists
        await fs.mkdir(logsDir, { recursive: true });
    
        let logContent = '';
        let fileExists = false;
    
        try {
          await fs.access(logFilePath);
          fileExists = true;
          logContent = await fs.readFile(logFilePath, 'utf-8');
        } catch (error) {
          // File doesn't exist, create new content
          logContent = `# 📝 Work Log - ${dateStr}\n\n`;
        }
    
        // Build the log entry with optional metadata
        let logEntry = `${timeStr} - ${tool_name}`;
        
        if (ai_model) {
          logEntry += ` (${ai_model})`;
        }
        
        logEntry += `: ${log_message}`;
        
        // Add optional metadata
        const metadata = [];
        
        if (tokens_used !== undefined) {
          metadata.push(`${tokens_used} tokens`);
        } else if (input_tokens !== undefined || output_tokens !== undefined) {
          const inTokens = input_tokens || 0;
          const outTokens = output_tokens || 0;
          metadata.push(`${inTokens + outTokens} tokens (${inTokens}→${outTokens})`);
        }
        
        if (context_length !== undefined) {
          metadata.push(`${context_length}k ctx`);
        }
        
        if (duration_ms !== undefined) {
          const seconds = duration_ms >= 1000 ? `${(duration_ms / 1000).toFixed(1)}s` : `${duration_ms}ms`;
          metadata.push(seconds);
        }
        
        if (cost_usd !== undefined) {
          metadata.push(`$${cost_usd.toFixed(4)}`);
        }
        
        if (!success && error_message) {
          metadata.push(`❌ ${error_message}`);
        }
        
        if (tags && tags.length > 0) {
          metadata.push(`[${tags.join(', ')}]`);
        }
        
        if (metadata.length > 0) {
          logEntry += ` (${metadata.join(' | ')})`;
        }
        
        const statusIcon = success ? '✅' : '❌';
        const newEntry = `- ${statusIcon} ${logEntry}\n`;
        
        // Append the new entry; the file header was already created above when needed
        logContent += newEntry;
    
        await fs.writeFile(logFilePath, logContent, 'utf-8');
    
        return {
          content: [
            {
              type: "text",
              text: `✅ Activity logged: ${logEntry}${fileExists ? '' : ' (new file created)'}`
            }
          ]
        };
    
      } catch (error) {
        return {
          content: [
            {
              type: "text",
              text: `❌ Error logging activity: ${error.message}`
            }
          ],
          isError: true
        };
      }
    }
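
The entry format built above can be sketched in isolation. This helper mirrors the handler's formatting logic for the common fields (it is a simplified extract for illustration, not the server's exported API, and pins the timestamp that `handleLogActivity` derives from the current local time):

```javascript
// Simplified re-creation of the worklog entry format built by handleLogActivity.
function formatEntry({ tool_name, log_message, ai_model, tokens_used, success = true }) {
  const timeStr = "14:05"; // the real handler uses the current local HH:MM
  let entry = `${timeStr} - ${tool_name}`;
  if (ai_model) entry += ` (${ai_model})`;
  entry += `: ${log_message}`;
  const metadata = [];
  if (tokens_used !== undefined) metadata.push(`${tokens_used} tokens`);
  if (metadata.length > 0) entry += ` (${metadata.join(' | ')})`;
  const statusIcon = success ? '✅' : '❌';
  return `- ${statusIcon} ${entry}`;
}

const line = formatEntry({
  tool_name: "Warp",
  log_message: "Fixed flaky test",
  ai_model: "gpt-4",
  tokens_used: 800
});
console.log(line); // "- ✅ 14:05 - Warp (gpt-4): Fixed flaky test (800 tokens)"
```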
  • The input schema defines the structure and validation for the 'log_activity' tool parameters, including required fields (tool_name, log_message) and optional metrics like tokens, duration, cost, and tags.
    inputSchema: {
      type: "object",
      properties: {
        tool_name: {
          type: "string",
          description: "Name of the AI tool that performed the activity (e.g., 'Warp', 'Claude Code', 'GitHub Copilot')"
        },
        log_message: {
          type: "string",
          description: "Detailed log message describing what was accomplished"
        },
        ai_model: {
          type: "string",
          description: "AI model used (e.g., 'gemini-2.5-pro', 'claude-3-sonnet', 'gpt-4')"
        },
        tokens_used: {
          type: "number",
          description: "Total tokens consumed in the request (optional)"
        },
        input_tokens: {
          type: "number",
          description: "Input tokens used (optional)"
        },
        output_tokens: {
          type: "number",
          description: "Output tokens generated (optional)"
        },
        context_length: {
          type: "number",
          description: "Context window length used (optional)"
        },
        duration_ms: {
          type: "number",
          description: "Duration of the operation in milliseconds (optional)"
        },
        cost_usd: {
          type: "number",
          description: "Estimated cost in USD (optional)"
        },
        success: {
          type: "boolean",
          description: "Whether the operation was successful (optional, defaults to true)"
        },
        error_message: {
          type: "string",  
          description: "Error message if operation failed (optional)"
        },
        tags: {
          type: "array",
          items: {
            type: "string"
          },
          description: "Tags to categorize the activity (e.g., ['coding', 'debugging', 'refactoring']) (optional)"
        }
      },
      required: ["tool_name", "log_message"]
    }
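
The schema's `required` list can be enforced without a full JSON Schema validator. A minimal hand-rolled check (an illustration of the contract, not the server's actual validation code) looks like:

```javascript
// Minimal check of a schema's `required` array against a candidate payload.
function checkRequired(schema, args) {
  const missing = (schema.required || []).filter(
    (key) => args[key] === undefined || args[key] === null
  );
  return { ok: missing.length === 0, missing };
}

const schema = { required: ["tool_name", "log_message"] };
console.log(checkRequired(schema, { tool_name: "Warp" }));
// → { ok: false, missing: [ 'log_message' ] }
```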
  • src/index.js:37-97 (registration)
    The 'log_activity' tool is registered in the ListToolsRequestSchema handler by including it in the tools array returned, defining its name, description, and input schema.
    {
      name: "log_activity",
      description: "Log AI tool activity to a daily worklog file with comprehensive metrics",
      inputSchema: {
        type: "object",
        properties: {
          tool_name: {
            type: "string",
            description: "Name of the AI tool that performed the activity (e.g., 'Warp', 'Claude Code', 'GitHub Copilot')"
          },
          log_message: {
            type: "string",
            description: "Detailed log message describing what was accomplished"
          },
          ai_model: {
            type: "string",
            description: "AI model used (e.g., 'gemini-2.5-pro', 'claude-3-sonnet', 'gpt-4')"
          },
          tokens_used: {
            type: "number",
            description: "Total tokens consumed in the request (optional)"
          },
          input_tokens: {
            type: "number",
            description: "Input tokens used (optional)"
          },
          output_tokens: {
            type: "number",
            description: "Output tokens generated (optional)"
          },
          context_length: {
            type: "number",
            description: "Context window length used (optional)"
          },
          duration_ms: {
            type: "number",
            description: "Duration of the operation in milliseconds (optional)"
          },
          cost_usd: {
            type: "number",
            description: "Estimated cost in USD (optional)"
          },
          success: {
            type: "boolean",
            description: "Whether the operation was successful (optional, defaults to true)"
          },
          error_message: {
            type: "string",  
            description: "Error message if operation failed (optional)"
          },
          tags: {
            type: "array",
            items: {
              type: "string"
            },
            description: "Tags to categorize the activity (e.g., ['coding', 'debugging', 'refactoring']) (optional)"
          }
        },
        required: ["tool_name", "log_message"]
      }
    }
  • src/index.js:103-105 (registration)
    In the CallToolRequestSchema handler, the tool is dispatched by checking the name and calling the handleLogActivity method.
    if (request.params.name === "log_activity") {
      return await this.handleLogActivity(request.params.arguments);
    }
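
A more defensive version of this dispatch pattern returns an explicit error for unknown tool names instead of falling through. The sketch below assumes a handler map keyed by tool name (the map and handler are hypothetical, not part of the server's code):

```javascript
// Name-based tool dispatch with an explicit unknown-tool error,
// mirroring the pattern above; handler names are illustrative.
async function dispatch(name, args, handlers) {
  const handler = handlers[name];
  if (!handler) {
    return {
      content: [{ type: "text", text: `❌ Unknown tool: ${name}` }],
      isError: true
    };
  }
  return handler(args);
}

dispatch("log_activity", { tool_name: "Warp", log_message: "demo" }, {
  log_activity: async (a) => ({ content: [{ type: "text", text: `logged: ${a.tool_name}` }] })
}).then((r) => console.log(r.content[0].text)); // "logged: Warp"
```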

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions 'comprehensive metrics' but doesn't specify file location, format, append vs overwrite behavior, permissions needed, rate limits, or error handling. For a logging tool with 12 parameters, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that clearly states the core purpose. It's appropriately sized for the tool's complexity, though it could front-load critical behavioral information more, given the lack of annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a logging tool with 12 parameters, no annotations, and no output schema, the description is insufficient. It doesn't explain what 'comprehensive metrics' means in practice, how the logging integrates with systems, what format the worklog uses, or what happens on failure. The agent would need to guess about important behavioral aspects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so all 12 parameters are already documented thoroughly. The description adds nothing beyond the generic 'comprehensive metrics' phrase, which carries no semantic value the schema doesn't already provide.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Log AI tool activity') and destination ('to a daily worklog file with comprehensive metrics'), providing a specific verb+resource combination. However, without sibling tools for comparison, we cannot assess differentiation from alternatives, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus other logging or tracking methods, nor does it mention prerequisites, frequency recommendations, or integration context. It simply states what the tool does without usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
