Glama

add_activity_log

Log agent tool calls to an append-only audit trail for tracking session activities, including inputs, outputs, duration, and status.

Instructions

Log an agent tool call to the activity audit trail (append-only).

Args:
- session_id: Session identifier.
- agent_id: Agent that made the call (e.g. oncoteam).
- tool_name: Name of the tool that was called.
- input_summary: Brief summary of the input parameters.
- output_summary: Brief summary of the output.
- duration_ms: How long the call took, in milliseconds.
- status: Result status (ok, error, timeout).
- error_message: Error details if status is not ok.
- tags: JSON array of tags (e.g. '["research","pubmed"]').
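As a sketch of how a caller might assemble these arguments, the snippet below builds a payload with the schema's defaults applied. The field names come from the Args list above; the helper function and the sample values (session id, tool name) are hypothetical.

```python
# Hypothetical helper for assembling an add_activity_log payload.
# Field names follow the tool's input schema; everything else is illustrative.

def build_activity_log_entry(session_id, agent_id, tool_name, **optional):
    """Assemble an add_activity_log payload, applying the schema defaults."""
    entry = {
        "session_id": session_id,  # required
        "agent_id": agent_id,      # required
        "tool_name": tool_name,    # required
        "status": "ok",            # default per the input schema
        "tags": [],                # default per the input schema
    }
    entry.update(optional)  # optional fields override the defaults
    return entry

entry = build_activity_log_entry(
    "sess-42", "oncoteam", "pubmed_search",
    input_summary="query: EGFR inhibitors",
    output_summary="12 results",
    duration_ms=840,
    tags=["research", "pubmed"],
)
```

Note that status is left at its default here: since the log is append-only, a failed call would be recorded as a new entry with status set to error or timeout rather than by editing this one.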

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| session_id | Yes | | |
| agent_id | Yes | | |
| tool_name | Yes | | |
| input_summary | No | | |
| output_summary | No | | |
| duration_ms | No | | |
| status | No | | ok |
| error_message | No | | |
| tags | No | | [] |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden, and it does well by specifying that the log is 'append-only' (an important behavioral constraint) and by describing what gets logged. It doesn't mention authentication requirements, rate limits, or error handling beyond the status parameter, but it provides solid operational context for a logging tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well structured: a clear purpose statement followed by organized parameter explanations. Every sentence earns its place, with no redundant information, and the 'Args:' formatting makes the description easy to scan while remaining comprehensive.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (9 parameters, 3 required), zero schema description coverage, and no annotations, the description provides complete context. It explains the tool's purpose, all parameters with semantics, and behavioral constraints ('append-only'). With an output schema present, it appropriately doesn't explain return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must fully compensate. It provides clear, meaningful explanations for all 9 parameters, including examples ('oncoteam' for agent_id, '["research","pubmed"]' for tags), default behaviors, and the purpose of each field, adding substantial value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Log an agent tool call') and resource ('to the activity audit trail'), with the parenthetical '(append-only)' providing important context about the operation's nature. It distinguishes this from sibling tools like 'search_activity_log' or 'get_activity_stats' by focusing on creation rather than retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through its parameter explanations (e.g. 'Agent that made the call'), suggesting the tool is meant for recording tool usage after execution. However, it doesn't explicitly state when to use this tool versus alternatives such as 'log_conversation', nor does it spell out prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/peter-fusek/oncofiles'
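The same request can be made from Python using only the standard library. This is a minimal sketch; the endpoint URL is taken verbatim from the curl command above, and the commented-out send is left out because it requires network access.

```python
import urllib.request

# Build a GET request against the Glama MCP directory API
# (same endpoint as the curl example above).
url = "https://glama.ai/api/mcp/v1/servers/peter-fusek/oncofiles"
req = urllib.request.Request(url, method="GET")

# Sending it would require network access, e.g.:
# with urllib.request.urlopen(req) as resp:
#     data = resp.read()
```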

If you have feedback or need assistance with the MCP directory API, please join our Discord server.