
Onboarding

Tool: onboarding (read-only)

Initialize Serena's coding assistant by setting up required configuration and preferences to enable symbolic operations in large codebases.

Instructions

Call this tool if onboarding was not performed yet. You will call this tool at most once per conversation. Returns instructions on how to create the onboarding information.

Input Schema

No arguments.

Output Schema

Name      Required   Description   Default
result    Yes        —             —

Implementation Reference

  • The OnboardingTool class implements the core logic of the 'onboarding' tool by generating an onboarding prompt using the system's platform information.
    import platform

    class OnboardingTool(Tool):
        """
        Performs onboarding (identifying the project structure and essential tasks, e.g. for testing or building).
        """
    
        def apply(self) -> str:
            """
            Call this tool if onboarding was not performed yet.
            You will call this tool at most once per conversation.
    
            :return: instructions on how to create the onboarding information
            """
            system = platform.system()
            return self.prompt_factory.create_onboarding_prompt(system=system)
  • Helper method in the prompt factory that renders the specific 'onboarding_prompt' template, used by the onboarding tool handler.
    def create_onboarding_prompt(self, *, system: Any) -> str:
        return self._render_prompt("onboarding_prompt", locals())
  • Class method that derives the MCP tool name 'onboarding' from the class name 'OnboardingTool' by stripping 'Tool' and converting to snake_case.
    @classmethod
    def get_name_from_cls(cls) -> str:
        name = cls.__name__
        if name.endswith("Tool"):
            name = name[:-4]
        # convert to snake_case
        name = "".join(["_" + c.lower() if c.isupper() else c for c in name]).lstrip("_")
        return name
    
    def get_name(self) -> str:
        return self.get_name_from_cls()
  • ToolRegistry automatically discovers all subclasses of Tool in the serena.tools package and registers them by their derived name, including 'onboarding' from OnboardingTool.
    def __init__(self) -> None:
        self._tool_dict: dict[str, RegisteredTool] = {}
        for cls in iter_subclasses(Tool):
            if not any(cls.__module__.startswith(pkg) for pkg in tool_packages):
                continue
            is_optional = issubclass(cls, ToolMarkerOptional)
            name = cls.get_name_from_cls()
            if name in self._tool_dict:
                raise ValueError(f"Duplicate tool name found: {name}. Tool classes must have unique names.")
            self._tool_dict[name] = RegisteredTool(tool_class=cls, is_optional=is_optional, tool_name=name)
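The prompt factory shown above forwards its keyword-only arguments to a named template via locals(). A minimal standalone sketch of that pattern, using string.Template as a stand-in for the real rendering backend (the TEMPLATES dict and template text are hypothetical, not Serena's actual prompt):

```python
from string import Template

# Hypothetical template store standing in for the real prompt factory's templates.
TEMPLATES = {"onboarding_prompt": Template("You are onboarding a project on $system.")}

def create_onboarding_prompt(*, system: str) -> str:
    # locals() here is {'system': ...}: keyword-only args become template variables.
    params = locals()
    return TEMPLATES["onboarding_prompt"].substitute(params)

print(create_onboarding_prompt(system="Linux"))
# You are onboarding a project on Linux.
```

This is why the factory method declares keyword-only parameters: every argument name lines up with a template variable of the same name.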
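The name-derivation rule from get_name_from_cls can be exercised in isolation. The sketch below reproduces the same strip-suffix-then-snake-case logic as a plain function (derive_tool_name is an illustrative name, not part of Serena's API):

```python
def derive_tool_name(class_name: str) -> str:
    # Strip a trailing "Tool" suffix, mirroring name[:-4] in the class method.
    if class_name.endswith("Tool"):
        class_name = class_name[:-4]
    # Convert CamelCase to snake_case: prefix each uppercase letter with "_",
    # lowercase it, then drop the leading "_".
    return "".join("_" + c.lower() if c.isupper() else c for c in class_name).lstrip("_")

print(derive_tool_name("OnboardingTool"))  # onboarding
print(derive_tool_name("FindSymbolTool"))  # find_symbol
```

So OnboardingTool registers under the MCP tool name 'onboarding' without any explicit name declaration.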
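The registry's discovery step hinges on iter_subclasses walking the Tool class hierarchy. A plausible reconstruction of that helper, using Python's built-in __subclasses__ hook (the real implementation may differ; the sample classes are illustrative):

```python
def iter_subclasses(cls):
    # Yield direct subclasses, then recurse so transitive subclasses are found too.
    for sub in cls.__subclasses__():
        yield sub
        yield from iter_subclasses(sub)

class Tool: ...
class OnboardingTool(Tool): ...
class SymbolTool(Tool): ...
class FindSymbolTool(SymbolTool): ...  # found via recursion through SymbolTool

print(sorted(c.__name__ for c in iter_subclasses(Tool)))
# ['FindSymbolTool', 'OnboardingTool', 'SymbolTool']
```

Because discovery is recursive, a tool class is registered no matter how deep it sits in the inheritance tree, as long as its module lives under one of the configured tool packages.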
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, indicating a safe read operation. The description adds context about the one-time-per-conversation constraint and that it returns instructions, which are useful behavioral details beyond the annotations. However, it doesn't describe error handling or what happens if called multiple times.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured with two sentences: the first states when to call it, and the second specifies the call frequency and return value. Every sentence adds essential information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has 0 parameters, annotations covering safety, and an output schema (implied by context signals), the description is mostly complete. It covers purpose, usage guidelines, and behavioral constraints. It could, however, briefly mention what the returned instructions entail or point to sibling tools for context; return values are likely covered by the output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, focusing instead on usage context. A baseline of 4 is applied since there are no parameters to document.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to be called when onboarding hasn't been performed yet, and it returns instructions for creating onboarding information. It specifies the verb 'call' and the resource 'onboarding', but doesn't explicitly differentiate from sibling tools like 'check_onboarding_performed' or 'initial_instructions' beyond the conditional trigger.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: 'Call this tool if onboarding was not performed yet' and 'You will call this tool at most once per conversation.' This clearly defines when to use it (onboarding not done) and includes a usage constraint (once per conversation), though it doesn't name specific alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
