
Check Onboarding Performed

check_onboarding_performed
Read-only

Verify project onboarding completion before starting work to ensure proper setup and avoid redundant configuration steps.

Instructions

Checks whether project onboarding was already performed. You should always call this tool before beginning to actually work on the project/after activating a project.

Input Schema


No arguments

Output Schema

result (required; no description or default provided)
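The schema exposes only a single opaque `result` string. As a rough sketch of how to interpret it (simplified from the handler excerpted under Implementation Reference; `check_onboarding` is a hypothetical stand-alone helper, not the server's API), the result takes one of two shapes depending on whether any memories exist:

```python
import json

# Hypothetical helper mirroring the two possible shapes of the "result"
# string: no memories -> onboarding prompt, otherwise a memory listing.
def check_onboarding(memories_json: str) -> str:
    memories = json.loads(memories_json)
    if len(memories) == 0:
        return (
            "Onboarding not performed yet (no memories available). "
            "You should perform onboarding by calling the `onboarding` tool."
        )
    return f"The onboarding was already performed. Available memories: {memories}"

print(check_onboarding("[]"))
print(check_onboarding('["project_overview", "suggested_commands"]'))
```

An agent therefore only needs to branch on the message text: a "not performed yet" result means it should call `onboarding` next; anything else means it can proceed.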

Implementation Reference

  • The CheckOnboardingPerformedTool class provides the core handler logic for the 'check_onboarding_performed' tool. It checks if onboarding has been performed by querying available memories via ListMemoriesTool.
    import json

    class CheckOnboardingPerformedTool(Tool):
        """
        Checks whether project onboarding was already performed.
        """
    
        def apply(self) -> str:
            """
            Checks whether project onboarding was already performed.
            You should always call this tool before beginning to actually work on the project/after activating a project.
            """
            from .memory_tools import ListMemoriesTool
    
            list_memories_tool = self.agent.get_tool(ListMemoriesTool)
            memories = json.loads(list_memories_tool.apply())
            if len(memories) == 0:
                return (
                    "Onboarding not performed yet (no memories available). "
                    + "You should perform onboarding by calling the `onboarding` tool before proceeding with the task."
                )
            else:
                return f"""The onboarding was already performed, below is the list of available memories.
                Do not read them immediately, just remember that they exist and that you can read them later, if it is necessary
                for the current task.
                Some memories may be based on previous conversations, others may be general for the current project.
                You should be able to tell which one you need based on the name of the memory.
                
                {memories}"""
  • The get_name_from_cls method in the Tool base class derives the MCP tool name 'check_onboarding_performed' from the class name CheckOnboardingPerformedTool.
    @classmethod
    def get_name_from_cls(cls) -> str:
        name = cls.__name__
        if name.endswith("Tool"):
            name = name[:-4]
        # convert to snake_case
        name = "".join(["_" + c.lower() if c.isupper() else c for c in name]).lstrip("_")
        return name
    
    def get_name(self) -> str:
        return self.get_name_from_cls()
  • ToolRegistry automatically discovers and registers all Tool subclasses in serena.tools packages, including CheckOnboardingPerformedTool as 'check_onboarding_performed'.
    for cls in iter_subclasses(Tool):
        if not any(cls.__module__.startswith(pkg) for pkg in tool_packages):
            continue
        is_optional = issubclass(cls, ToolMarkerOptional)
        name = cls.get_name_from_cls()
        if name in self._tool_dict:
            raise ValueError(f"Duplicate tool name found: {name}. Tool classes must have unique names.")
        self._tool_dict[name] = RegisteredTool(tool_class=cls, is_optional=is_optional, tool_name=name)
  • Import of workflow_tools makes CheckOnboardingPerformedTool available for auto-discovery by ToolRegistry.
    from .workflow_tools import *
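Applying that derivation to this tool's class name confirms the registered name. The sketch below reimplements the string transformation stand-alone (operating on a plain class-name string rather than a class object, purely for illustration):

```python
# Stand-alone reimplementation of the snake_case name derivation shown
# above, taking the class name as a string for illustration.
def derive_tool_name(class_name: str) -> str:
    if class_name.endswith("Tool"):
        class_name = class_name[:-4]  # strip the "Tool" suffix
    # CamelCase -> snake_case: prefix each uppercase letter with "_",
    # lowercase it, then drop the leading underscore
    return "".join("_" + c.lower() if c.isupper() else c for c in class_name).lstrip("_")

print(derive_tool_name("CheckOnboardingPerformedTool"))  # check_onboarding_performed
print(derive_tool_name("ListMemoriesTool"))              # list_memories
```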
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true and destructiveHint=false, which the description aligns with by implying a non-destructive check. The description adds value by specifying the tool's role in workflow sequencing (before work/after activation), but doesn't provide additional behavioral details like error handling or output interpretation beyond what annotations cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core purpose followed by usage guidelines. Every word serves a clear function, with no redundancy or unnecessary elaboration, making it highly efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 0 parameters, annotations covering safety (read-only, non-destructive), and an output schema (implied by context signals), the description is reasonably complete. It explains what the tool does and when to use it, though it could benefit from hinting at the output's meaning (e.g., boolean result or status details) to fully compensate for lack of output schema explanation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so no parameter documentation is needed. The description doesn't mention parameters, which is appropriate. A baseline of 4 is applied since no parameters exist, and the description focuses correctly on the tool's purpose and usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Checks whether project onboarding was already performed.' It specifies the verb ('checks') and resource ('project onboarding'), making the intent unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'onboarding' or 'activate_project', which might have overlapping contexts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: 'You should always call this tool before beginning to actually work on the project/after activating a project.' This gives clear timing and context for when to use it, including a reference to the sibling tool 'activate_project' as a related action.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
