track_habit

Log activities for specific habits on chosen dates to monitor progress and maintain consistency in personal routines.

Instructions

Track an activity for a specific habit on a given date

Input Schema

Name    Required    Description    Default
id      Yes
date    Yes

Output Schema

No arguments

Implementation Reference

  • The handler method 'track_habit_tool' inside 'HabitTools' that performs the logic for tracking a habit.
    async def track_habit_tool(
        self,
        ctx: ServerContext,  # noqa: ARG002  # Required for MCP tool signature
        id: str,  # noqa: A002  # Required by MCP tool API - habit ID parameter
        date: str,  # Required by MCP tool API - date parameter
    ) -> dict[str, Any]:
        """Track an activity for a specific habit on a given date.
    
        Args:
            ctx: Server context for logging
            id: The ID of the habit to track
            date: The date when the habit was performed in ISO-8601 format (YYYY-MM-DD)
    
        Returns:
            Dict[str, Any]: Success response with confirmation message
    
        Raises:
            ValueError: If date format is invalid
            LunaTaskAuthenticationError: Authentication failed
            LunaTaskNotFoundError: Habit not found
            LunaTaskValidationError: Invalid parameters
            LunaTaskRateLimitError: Rate limit exceeded
            LunaTaskServerError: Server error
            LunaTaskTimeoutError: Request timeout
            LunaTaskNetworkError: Network connectivity error
        """
        # Parse the date string to validate format and convert to date object
        try:
            parsed_date = date_class.fromisoformat(date)
        except ValueError as e:
            logger.exception("Invalid date format provided: %s", date)
            msg = f"Invalid date format: {date}. Expected YYYY-MM-DD format"
            raise ValueError(msg) from e
    
        # Assign to local variable to avoid builtin shadowing in the rest of the method
        habit_id = id
    
        # Call the client method to track the habit
        await self.lunatask_client.track_habit(habit_id, parsed_date)
    
        # Log successful tracking
        logger.info("Successfully tracked habit %s on %s", habit_id, date)
    
        # Return success response
        return {"ok": True, "message": f"Successfully tracked habit {habit_id} on {date}"}
  • The '_register_tools' method where the 'track_habit' tool is registered with the FastMCP instance.
    def _register_tools(self) -> None:
        """Register all habit-related MCP tools with the FastMCP instance."""
    
        # Wrapper function to inject dependencies and satisfy FastMCP signature
        async def _track_habit(ctx: ServerContext, id: str, date: str) -> dict[str, Any]:  # noqa: A002
            """MCP tool wrapper for track_habit_tool."""
            return await self.track_habit_tool(ctx, id, date)
    
        # Register the track_habit tool with FastMCP
        self.mcp.tool(
            name="track_habit", description="Track an activity for a specific habit on a given date"
        )(_track_habit)
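The handler's flow can be exercised end to end with a stub client. A minimal sketch, assuming only the behavior shown above (the stub class, the standalone function, and the IDs are illustrative, not part of the project):

```python
import asyncio
from datetime import date as date_class
from typing import Any


class StubLunaTaskClient:
    """Records track_habit calls instead of hitting the real LunaTask API."""

    def __init__(self) -> None:
        self.calls: list[tuple[str, date_class]] = []

    async def track_habit(self, habit_id: str, when: date_class) -> None:
        self.calls.append((habit_id, when))


async def track_habit_tool(
    client: StubLunaTaskClient, habit_id: str, date: str
) -> dict[str, Any]:
    """Standalone mirror of the handler: validate the date, delegate, confirm."""
    try:
        parsed_date = date_class.fromisoformat(date)
    except ValueError as e:
        msg = f"Invalid date format: {date}. Expected YYYY-MM-DD format"
        raise ValueError(msg) from e
    await client.track_habit(habit_id, parsed_date)
    return {"ok": True, "message": f"Successfully tracked habit {habit_id} on {date}"}


client = StubLunaTaskClient()
result = asyncio.run(track_habit_tool(client, "habit-123", "2024-06-01"))
# result["ok"] is True; client.calls holds ("habit-123", date(2024, 6, 1))
```

An invalid date such as "01/06/2024" raises ValueError before any client call is made, matching the handler's fail-fast validation.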
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'track' suggests a write operation (likely creating or updating a habit record), the description doesn't clarify whether this creates new entries, updates existing ones, requires specific permissions, has side effects, or provides confirmation of success. For a mutation tool with zero annotation coverage, this leaves significant behavioral questions unanswered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
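One way to close these gaps is to publish the MCP specification's tool annotation hints alongside the description. A sketch of plausible values for this tool (hint names come from the MCP spec; the idempotency value is an assumption about LunaTask's behavior, not something documented here):

```python
# Candidate annotations for track_habit; hint names follow the MCP spec.
track_habit_annotations = {
    "readOnlyHint": False,     # mutation: writes a habit activity record
    "destructiveHint": False,  # additive write; does not delete existing data
    "idempotentHint": False,   # assumption: re-tracking the same date may log a duplicate
    "openWorldHint": True,     # calls out to the external LunaTask API
}
```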

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at eleven words, front-loading the core purpose without any wasted words. Every element ('track an activity,' 'for a specific habit,' 'on a given date') contributes essential information, making it efficiently structured despite its brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 required parameters, likely a mutation operation), the description is minimally complete. The existence of an output schema means the description doesn't need to explain return values, but with no annotations and poor parameter documentation, the description should provide more behavioral context and parameter guidance to be fully helpful to an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate but provides minimal parameter semantics. It mentions 'id' and 'date' parameters implicitly but doesn't explain what 'id' refers to (habit ID? user ID?), what format 'date' should be in, or what values are acceptable. The description adds some meaning by indicating these parameters exist but doesn't adequately compensate for the schema's lack of documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
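The missing parameter semantics could be supplied directly in the input schema. A hypothetical enriched version (the description strings are illustrative; the published schema has none):

```python
# Hypothetical enriched input schema; the real tool publishes no descriptions.
input_schema = {
    "type": "object",
    "required": ["id", "date"],
    "properties": {
        "id": {
            "type": "string",
            "description": "ID of the LunaTask habit to track (not a user or task ID).",
        },
        "date": {
            "type": "string",
            "format": "date",
            "description": "Day the habit was performed, ISO-8601 (YYYY-MM-DD).",
        },
    },
}
```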

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('track an activity') and the target resource ('for a specific habit'), providing a specific verb+resource combination. However, it doesn't distinguish this tool from sibling tools like 'create_journal_entry' or 'create_task' that might also track activities, leaving some ambiguity about when to use this specific habit-tracking tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. There's no mention of prerequisites, when-not-to-use scenarios, or how this differs from sibling tools like 'create_journal_entry' or 'create_task' that might serve similar tracking purposes. The agent receives minimal contextual guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
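A description that folds in such guidance could look like the following sketch (the sibling-tool contrast and the auth prerequisite are inferred from the review above, not taken from the server's documentation):

```python
# Illustrative rewrite of the tool description with usage guidance baked in.
improved_description = (
    "Track an activity for a specific habit on a given date "
    "(writes a habit activity record to LunaTask; requires a valid API token). "
    "Use this for recurring habits only: prefer create_task for one-off to-dos "
    "and create_journal_entry for free-form notes. Dates must be ISO-8601 "
    "(YYYY-MM-DD)."
)
```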
