
Serena MCP Server

by lin2000wl

think_about_task_adherence

Check if you're still aligned with the original task before making code changes, especially after lengthy conversations with multiple exchanges.

Instructions

Think about the task at hand and whether you are still on track. Especially important if the conversation has been going on for a while and there has been a lot of back and forth.

This tool should ALWAYS be called before you insert, replace, or delete code.
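In an agent harness, this guideline amounts to a pre-edit checkpoint. A minimal sketch of how a client could enforce it (the edit-tool names and the `client.call` interface are assumptions, not part of Serena's API):

```python
# Hypothetical client-side guard: run the adherence check before any tool
# call that modifies code. The tool names below are illustrative only.
EDIT_TOOLS = {"insert_code", "replace_code", "delete_code"}

def call_tool(client, name: str, args: dict):
    # Enforce the documented rule: think_about_task_adherence is always
    # invoked first whenever the requested tool edits code.
    if name in EDIT_TOOLS:
        client.call("think_about_task_adherence", {})
    return client.call(name, args)
```

Read-only tools pass through unchanged; only the code-modifying calls pay the extra round trip.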

Input Schema

Name | Required | Description | Default

No arguments
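A parameterless tool like this one carries the empty object schema. A minimal sketch of what that looks like and why any empty argument object validates against it (the schema shape is the standard JSON Schema form for a no-argument tool, assumed rather than copied from Serena):

```python
# Assumed input schema for a parameterless tool: an object with no
# properties and nothing required.
input_schema = {
    "type": "object",
    "properties": {},
    "required": [],
}

def validates(args: dict, schema: dict) -> bool:
    # Checks only the constraints this schema actually expresses:
    # the arguments form an object, and all required keys are present.
    if schema["type"] == "object" and not isinstance(args, dict):
        return False
    return all(key in args for key in schema.get("required", []))

print(validates({}, input_schema))  # True: {} trivially satisfies the schema
```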

Implementation Reference

  • Method that renders the 'think_about_task_adherence' prompt template; the rendered prompt text is what the tool returns.
    def create_think_about_task_adherence(self) -> str:
        return self._render_prompt("think_about_task_adherence", locals())
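The method above delegates to a prompt-template renderer. A minimal sketch of how such a renderer could be wired, assuming string templates keyed by name (the template text and the `_render_prompt` internals here are illustrative, not Serena's actual implementation):

```python
from string import Template

# Hypothetical stand-in for Serena's prompt factory. The template text is
# illustrative only; the real templates live in the Serena repository.
class PromptFactory:
    _templates = {
        "think_about_task_adherence": Template(
            "Think about the task at hand and whether you are still on track."
        ),
    }

    def _render_prompt(self, name: str, params: dict) -> str:
        # The caller passes locals(), which includes `self`; drop it before
        # substituting the remaining values into the template.
        params = {k: v for k, v in params.items() if k != "self"}
        return self._templates[name].safe_substitute(params)

    def create_think_about_task_adherence(self) -> str:
        return self._render_prompt("think_about_task_adherence", locals())

prompt = PromptFactory().create_think_about_task_adherence()
```

Passing `locals()` keeps the method body to one line: any future template variables become method arguments with no change to the rendering call.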
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the tool is for 'thinking' and should be called before certain actions, but does not disclose behavioral traits such as what the tool actually does (e.g., internal reflection vs. output), whether it has side effects, or how it affects the agent's state. This leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the main purpose in the first sentence, followed by specific usage guidelines. It is appropriately sized with three sentences that each add value, though the second sentence could be slightly more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's abstract nature (a 'thinking' tool with no parameters and no output schema), the description provides basic purpose and usage guidelines. However, it lacks details on what the tool outputs or how it influences the agent's behavior, which is important for such a meta-cognitive tool. With no annotations or output schema, the description is adequate but has clear gaps in explaining the tool's effect.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has no parameters, so there is nothing for the description to document. It appropriately avoids discussing parameters, meeting the baseline for a parameterless tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool is for 'thinking about the task at hand and whether you are still on track', which provides a vague purpose. It does not specify a concrete action or resource, and while it distinguishes from siblings by being a 'thinking' tool, the purpose remains abstract rather than specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Especially important if the conversation has been going on for a while' and 'ALWAYS be called before you insert, replace, or delete code.' It provides clear usage context, distinguishing the tool from its siblings by making it a prerequisite for code modification actions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/lin2000wl/Serena-cursor-mcp'
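The same endpoint can be queried from Python with only the standard library. A minimal sketch (nothing is assumed about the response shape beyond it being a JSON body):

```python
import json
import urllib.request

URL = "https://glama.ai/api/mcp/v1/servers/lin2000wl/Serena-cursor-mcp"

def fetch_server_info(url: str = URL) -> dict:
    # GET the server record and decode the JSON response body.
    request = urllib.request.Request(url, method="GET")
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    print(json.dumps(fetch_server_info(), indent=2))
```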

If you have feedback or need assistance with the MCP directory API, please join our Discord server.