start_activity_log

Initiate activity tracking for AI coding tasks by creating timestamped logs with categorization for debugging, implementation, testing, and other development workflows.

Instructions

Start a new activity log with system timestamp and unique Time ID

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| activityType | Yes | Type of activity being performed (e.g., 'code_review', 'debugging', 'planning') | |
| description | No | Detailed description of the activity | |
| tags | No | Tags for categorizing the activity | |
| task_scope | Yes | Scope of the task | |

Implementation Reference

  • The core handler method on the TimeServer class: it creates a new ActivityLog entry with a unique ID and the current UTC timestamp, saves it to the database, and returns the log object.
    def start_activity_log(
        self, 
        activity_type: str, 
        task_scope: TaskScope, 
        description: Optional[str] = None,
        tags: Optional[List[str]] = None
    ) -> ActivityLog:
        """Start a new activity log"""
        time_id = self.id_generator.generate_id()
        start_time = datetime.now(ZoneInfo('UTC')).isoformat(timespec="seconds")
        
        log = ActivityLog(
            activityId=time_id,  # Changed from timeId to activityId
            activityType=activity_type,
            task_scope=task_scope,
            description=description,
            tags=tags,
            startTime=start_time,
            status="started"
        )
        
        self.db.add_activity_log(log)
        return log
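The handler's timestamp line can be reproduced in isolation to see exactly what ends up in `startTime`: an ISO 8601 string in UTC, truncated to whole seconds. A minimal sketch:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Same expression the handler uses for startTime: timezone-aware UTC,
# ISO 8601, truncated to whole seconds, e.g. '2024-05-01T12:34:56+00:00'.
start_time = datetime.now(ZoneInfo("UTC")).isoformat(timespec="seconds")
```

Because the datetime is timezone-aware, the string always carries the `+00:00` offset suffix rather than a bare naive timestamp.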
  • Pydantic BaseModel defining the structure and validation for ActivityLog objects used by the start_activity_log tool.
    class ActivityLog(BaseModel):
        activityId: str  # Changed from timeId for better naming
        activityType: str
        task_scope: TaskScope
        description: Optional[str] = None
        tags: Optional[List[str]] = None
        startTime: str
        endTime: Optional[str] = None
        duration: Optional[str] = None
        durationSeconds: Optional[int] = None
        result: Optional[str] = None
        notes: Optional[str] = None
        status: str  # "started", "completed"
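The required/optional split of the model can be illustrated with a plain dataclass stand-in (sketched here without Pydantic, so no runtime validation; the `task_scope` value and sample field values are hypothetical):

```python
from dataclasses import dataclass
from typing import List, Optional

# Dataclass stand-in for the Pydantic ActivityLog model (illustration only;
# the real model inherits validation from BaseModel).
@dataclass
class ActivityLog:
    activityId: str
    activityType: str
    task_scope: str              # a TaskScope enum value in the real code
    startTime: str
    status: str                  # "started" or "completed"
    description: Optional[str] = None
    tags: Optional[List[str]] = None
    endTime: Optional[str] = None
    duration: Optional[str] = None
    durationSeconds: Optional[int] = None
    result: Optional[str] = None
    notes: Optional[str] = None

# Only the five fields without defaults must be supplied at creation time.
log = ActivityLog(
    activityId="T-001",              # hypothetical Time ID
    activityType="debugging",
    task_scope="session",            # hypothetical scope value
    startTime="2024-05-01T12:00:00+00:00",
    status="started",
)
```

All end-of-activity fields (`endTime`, `duration`, `result`, and so on) start as `None`, matching a freshly "started" log.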
  • JSON schema for input validation of the start_activity_log tool, registered in the list_tools() handler.
    Tool(
        name=TimeTools.START_ACTIVITY_LOG.value,
        description="Start a new activity log with system timestamp and unique Time ID",
        inputSchema={
            "type": "object",
            "properties": {
                "activityType": {
                    "type": "string",
                    "description": "Type of activity being performed (e.g., 'code_review', 'debugging', 'planning')",
                },
                "task_scope": {
                    "type": "string",
                    "enum": [scope.value for scope in TaskScope],
                    "description": "Scope of the task",
                },
                "description": {
                    "type": "string",
                    "description": "Detailed description of the activity",
                },
                "tags": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Tags for categorizing the activity",
                },
            },
            "required": ["activityType", "task_scope"],
        },
    ),
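The `enum` list for `task_scope` is built dynamically from the TaskScope enum, so the schema stays in sync with the Python definition. A sketch with hypothetical member values (the real ones are not shown on this page):

```python
from enum import Enum

# Hypothetical TaskScope members; the actual values live in the server source.
class TaskScope(Enum):
    SESSION = "session"
    PROJECT = "project"

# Same comprehension the schema uses to populate the "enum" constraint.
enum_values = [scope.value for scope in TaskScope]
```

Adding a new scope to the enum automatically widens the accepted values in the JSON schema, with no second place to update.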
  • Dispatch logic in the internal _execute_tool function that validates arguments and calls the start_activity_log handler.
    case TimeTools.START_ACTIVITY_LOG.value:
        if not all(k in arguments for k in ["activityType", "task_scope"]):
            raise ValueError("Missing required arguments: activityType and task_scope")
    
        result = time_server.start_activity_log(
            arguments["activityType"],
            TaskScope(arguments["task_scope"]),
            arguments.get("description"),
            arguments.get("tags"),
        )
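The required-argument guard can be exercised standalone; a sketch of the same check extracted into a helper (the function name is hypothetical):

```python
# Standalone sketch of the dispatcher's required-argument check.
def validate_args(arguments: dict) -> None:
    if not all(k in arguments for k in ["activityType", "task_scope"]):
        raise ValueError("Missing required arguments: activityType and task_scope")

# A complete argument set passes silently.
validate_args({"activityType": "debugging", "task_scope": "session"})

# A missing task_scope is rejected before the handler is ever called.
error_message = ""
try:
    validate_args({"activityType": "debugging"})
except ValueError as exc:
    error_message = str(exc)
```

Note this duplicates the `required` list from the JSON schema; an invalid `task_scope` string would additionally raise when `TaskScope(...)` is constructed.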
  • Database helper method called by the handler to persist the new ActivityLog to JSON storage.
    def add_activity_log(self, log: ActivityLog):
        """Add a new activity log"""
        self.activity_logs.append(log.model_dump())
        self.save_data()
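The persistence pattern is append-then-rewrite: the serialized log is added to an in-memory list and the whole JSON file is saved. A minimal stand-in (function name, file layout, and field names are assumptions):

```python
import json
import os
import tempfile

# In-memory list mirroring self.activity_logs.
activity_logs = []

def add_activity_log(log: dict, path: str) -> None:
    """Append the serialized log and rewrite the JSON file, like save_data()."""
    activity_logs.append(log)
    with open(path, "w") as f:
        json.dump({"activity_logs": activity_logs}, f, indent=2)

path = os.path.join(tempfile.gettempdir(), "chronos_demo.json")
add_activity_log({"activityId": "T-001", "status": "started"}, path)

with open(path) as f:
    saved = json.load(f)
```

Rewriting the full file on every log keeps the storage simple at the cost of O(n) writes as the log history grows.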
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden but mentions only outcomes ('system timestamp and unique Time ID'). It fails to disclose critical behavioral traits: whether this is a write operation, whether it requires specific permissions, how errors are handled, or whether it triggers other side effects. That is inadequate for a tool that creates persistent data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action and outcomes without redundancy. Every word contributes directly to the tool's purpose, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete for a tool with 4 parameters that likely performs a write operation. It lacks details on behavioral context, error handling, and return values, leaving significant gaps in an agent's understanding of the tool's full implications.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are fully documented in the schema. The description adds no additional meaning beyond implying that inputs define the activity being logged, which aligns with schema details but doesn't enhance understanding. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Start a new activity log') and key outcomes ('system timestamp and unique Time ID'), which distinguishes it from sibling tools like 'end_activity_log' or 'update_activity_log'. However, it doesn't explicitly differentiate from 'create_time_reminder', which might share some conceptual overlap in time-related creation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'create_time_reminder' or 'update_activity_log'. The description implies initiation of logging but lacks context on prerequisites, typical scenarios, or exclusions, leaving usage ambiguous.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/n0zer0d4y/chronos-protocol'
