Server Details
Manage your dedicated AI assistant instances on [OpenClaw Direct](https://openclaw.com) through natural language. Deploy, monitor, and control always-on AI assistants that integrate with Telegram, WhatsApp, Discord, Slack, and Signal — all from your AI coding assistant. Learn more about the [MCP integration](https://openclaw.com/openclaw-mcp-integration).
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
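The transport listed above is MCP's Streamable HTTP, which carries JSON-RPC 2.0 messages over HTTP POST. As a minimal sketch of what a client sends to enumerate this server's tools (the envelope follows the JSON-RPC 2.0 shape; no endpoint URL is assumed here):

```python
import json

# JSON-RPC 2.0 envelope for MCP's tools/list method.
# This is only the request body; the endpoint URL is not shown on this page,
# so transport details are left out of the sketch.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

body = json.dumps(request)
print(body)
```

The server's response would contain the 14 tool definitions evaluated below, including their input schemas and annotations.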
Available Tools (14)

create_employee_tool (Grade A)
Create a new employee instance (requires an active subscription or will provide a checkout link)
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Name for the new employee instance | |
| tier | No | Hosting plan tier (default: advanced) | |
| billing_interval | No | Billing interval for new subscriptions (default: annual) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations by disclosing the subscription requirement and checkout link provision flow. This external dependency aligns with openWorldHint: true and explains what happens when preconditions aren't met, though it doesn't address idempotency concerns.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence that front-loads the action ('Create...') and appends the critical subscription constraint. Every word earns its place with no redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% schema coverage and absence of output schema, the description adequately covers the tool's complexity by explaining the checkout link return behavior. It could improve by clarifying idempotency behavior (annotations indicate false) or what constitutes a duplicate creation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all three parameters (name, tier, billing_interval) including enums and defaults. The description adds no parameter-specific details, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Create a new employee instance' with a specific verb and resource. However, it doesn't explicitly distinguish from sibling tools like 'rehire_employee_tool' (which also creates employee records) or clarify this is for new hires only.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a usage constraint ('requires an active subscription or will provide a checkout link') which hints at preconditions. However, it lacks explicit guidance on when to use this versus 'rehire_employee_tool' or 'update_employee_tool' for existing employees.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
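Putting the parameter table above together, a `tools/call` invocation for this tool might look like the following sketch. The argument value is illustrative: only `name` is required, and `tier` and `billing_interval` fall back to the documented defaults (`advanced`, `annual`) when omitted.

```python
import json

# Hypothetical tools/call payload for create_employee_tool.
# Only "name" is required; tier and billing_interval are omitted here,
# so the server's documented defaults apply.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_employee_tool",
        "arguments": {
            "name": "research-assistant",  # required; value is illustrative
        },
    },
}

print(json.dumps(call, indent=2))
```

Per the description, if no subscription is active the result would carry a checkout link rather than a provisioned instance, which is why agents benefit from the disclosed precondition.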
employee_timesheet_tool (Grade B, Read-only)
Show usage statistics (timesheet) for an employee over a given number of days
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | Number of days to look back (default: 30) | |
| identifier | Yes | Employee slug or ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnlyHint=true and openWorldHint=true, the description only adds domain context by specifying 'timesheet' as the type of usage statistics. It fails to explain behavioral implications of idempotentHint=false (why repeated calls might differ) or openWorldHint=true (external data access), which would help an agent understand caching or consistency expectations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is front-loaded with the action verb 'Show', uses a parenthetical clarification '(timesheet)' efficiently, and contains no redundant or filler text. Every word contributes to understanding the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (2 simple parameters) and rich annotations covering safety profiles, the description is nearly complete. However, the absence of an output schema means the description should ideally hint at the return structure (e.g., whether it returns daily entries or aggregates), which it does not provide.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is met. The description mentions 'over a given number of days' and 'for an employee', which map to the parameters, but adds no semantic value beyond the schema's own descriptions (e.g., no clarification on identifier format or business rules for the lookback window).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Show[s] usage statistics (timesheet) for an employee' with a specific time scope, using a concrete verb and resource. However, it does not explicitly differentiate from the sibling 'show_employee_tool', which could cause confusion about whether general employee data or specific timesheet data is needed.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'show_employee_tool' or 'employee_wellness_check_tool', nor does it mention prerequisites such as required permissions or valid date ranges beyond the schema defaults.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
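Because `days` has a server-side default, a client only needs to send it when overriding the 30-day lookback. A small sketch of building the arguments object (the helper name is hypothetical; it just models the schema's required/optional split):

```python
def timesheet_arguments(identifier, days=None):
    """Build the arguments dict for employee_timesheet_tool.

    identifier is required (slug or ID); days is optional, and the
    server applies its documented default of 30 when it is omitted.
    """
    args = {"identifier": identifier}
    if days is not None:
        args["days"] = days
    return args

# Omitting days leaves the server's 30-day default in effect.
print(timesheet_arguments("research-assistant"))
print(timesheet_arguments("research-assistant", days=7))
```

Sending only what the schema requires keeps the call minimal and lets the server own the default, which matches the behavior the parameter table documents.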
employee_wellness_check_tool (Grade A, Read-only)
Run a wellness (health) check on a specific employee or all your active employees
| Name | Required | Description | Default |
|---|---|---|---|
| identifier | No | Employee slug or ID — omit to check all active employees | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish read-only and non-destructive safety. The description adds the 'active employees' constraint (filtering out inactive), but fails to explain the idempotentHint=false (possible audit logging) or openWorldHint=true (external health data sources), leaving behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the verb 'Run', zero redundancy. Every clause serves a purpose: defining the operation type (wellness check), target (employee), and scope modifier (specific vs all active).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter tool with rich annotations, but lacks definition of what constitutes a 'wellness check' (medical records, survey status, etc.) and provides no indication of return values despite the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameter is fully documented in the schema ('Employee slug or ID — omit to check all active employees'). The description text maps to this behavior but does not add syntax details, format examples, or semantic meaning beyond the schema definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Run a wellness/health check') and target resource ('employee'), distinguishing it from general employee retrieval tools like show_employee_tool by specifying the wellness domain. It also clarifies the scope covers both individual and batch ('all your active employees') operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the usage pattern for the parameter (specific employee vs. all active), but does not explicitly state when to use this tool versus sibling alternatives like show_employee_tool or provide exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fire_employee_tool (Grade A, Destructive)
Fire (terminate) an active or on-leave employee
| Name | Required | Description | Default |
|---|---|---|---|
| identifier | Yes | Employee slug or ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare destructiveHint=true and idempotentHint=false. The description adds valuable precondition context ('active or on-leave') not captured in annotations. However, it fails to explain the practical implications of idempotentHint=false (whether retries error or succeed silently) or what 'openWorldHint' entails for external system effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence delivery with zero waste. The parenthetical '(terminate)' efficiently disambiguates 'Fire' without verbosity. Information is front-loaded with the action verb immediately followed by scope constraints.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive HR operation with no output schema, the description meets minimum standards by specifying valid employee states. However, it omits critical context given the severity: whether the action is reversible (and which sibling tool reverses it), immediate effects on access/permissions, or what to expect upon invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description does not explicitly reference the 'identifier' parameter or add syntax guidance beyond the schema's 'Employee slug or ID' definition, but the high schema coverage means no additional compensation is required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Fire (terminate)') and identifies the target resource (employee). It adds scope clarification ('active or on-leave') which implicitly distinguishes valid preconditions from siblings like rehire_employee_tool (for already-terminated) and put_employee_on_leave_tool (which creates the on-leave state). However, it lacks explicit contrast with related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'active or on-leave employee' implies valid input states, providing implicit usage context. However, it fails to explicitly state when NOT to use this tool (e.g., already-terminated employees) or name alternatives like rehire_employee_tool for reversal or put_employee_on_leave_tool for temporary suspension.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
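The combination flagged above, destructiveHint=true with idempotentHint=false, is exactly the profile a host should gate behind explicit approval: the effect cannot be undone by the tool itself, and a blind retry after a timeout is not guaranteed safe. A sketch of that client-side check, assuming the annotation values reported for this tool (`confirm` logic is left to whatever approval flow the host provides):

```python
def should_require_confirmation(annotations):
    """Destructive, non-idempotent tools warrant explicit user approval:
    the call is irreversible from the tool's side, and repeating it
    after an ambiguous failure may not be safe."""
    return bool(
        annotations.get("destructiveHint", False)
        and not annotations.get("idempotentHint", False)
    )

# Annotation values as reported for fire_employee_tool above.
fire_annotations = {
    "readOnlyHint": False,
    "destructiveHint": True,
    "idempotentHint": False,
}
print(should_require_confirmation(fire_annotations))  # True
```

This is one reason the evaluation asks descriptions to spell out reversibility: the annotations tell the host to be careful, but only prose can point at `rehire_employee_tool` as the recovery path.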
list_contracts_tool (Grade B, Read-only)
List your employment contracts (subscriptions)
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | Filter by contract status (e.g. active, canceled, past_due) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover readOnlyHint and destructiveHint. The description adds the 'subscriptions' context which helps clarify the contract type, but fails to explain the unusual idempotentHint=false (list operations are typically idempotent) or the implications of openWorldHint=true.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (6 words) and front-loaded with the verb. While efficient, it borders on underspecified—it could acknowledge the filtering capability or scope without sacrificing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (1 optional parameter, no nested objects, no output schema), the description meets minimum viability by identifying the resource. However, it lacks scope boundaries and does not address pagination or result set size expectations typical for list operations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description does not add any additional context about the 'status' parameter or its expected values, relying entirely on the schema's 'active, canceled, past_due' examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a clear verb ('List') and resource ('employment contracts'), with the parenthetical '(subscriptions)' adding helpful context about the contract type. However, it lacks scope clarification ('your' is ambiguous in an HR system with employer/employee contexts) and does not explicitly differentiate from sibling tools like list_plans_tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives (e.g., when to use list_employees_tool vs list_contracts_tool), nor any prerequisites or filtering recommendations beyond the schema. The agent must infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_employees_tool (Grade B, Read-only)
List all employees (tenant instances) with status, tier, and health information
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | Filter by employee status: active, on_leave, fired, firing, onboarding, pending | |
| include_fired | No | Include fired (terminated/deleted) employees | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish the operation as read-only and non-destructive. The description adds valuable context about what data is returned (status, tier, health information), compensating for the lack of an output schema, but does not disclose pagination behavior, rate limits, or multi-tenancy isolation details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is front-loaded with the action verb and efficiently packs the resource name, scoping clarification, and return field summary without redundancy. 'Tenant instances' is technical but necessary context for this domain.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately covers the gap by specifying the returned fields (status, tier, health). It appropriately leaves detailed parameter syntax to the 100%-covered schema, though it could mention pagination or typical result set sizes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description does not add parameter-specific guidance beyond the schema (e.g., it doesn't explain the interaction between 'status' filter and 'include_fired' flag), but the schema already fully documents both parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (List) and resource (employees), and adds parenthetical context '(tenant instances)' clarifying scope. However, it does not explicitly distinguish from sibling 'show_employee_tool' (singular retrieval) or when to use filtering vs. full listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'show_employee_tool' for single-employee lookups, nor does it mention prerequisites or suggest using the status filter for large datasets.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_personalities_tool (Grade C, Read-only)
List and search the personality library — browse available AI personality archetypes
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Filter by type: system, community. Omit to return both. | |
| query | No | Search personalities by name (case-insensitive) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds no behavioral context beyond annotations; doesn't explain non-idempotent reads, pagination, or open-world search behavior
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Brief and front-loaded with action verbs, though slightly too minimal for the complexity gap
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Critical gap: no output schema exists, yet the description fails to explain what is returned (names, IDs, full objects?)
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with clear descriptions; tool description adds no parameter info but meets baseline
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verbs (list, search, browse) and specific resource (AI personality archetypes), effectively distinguishes from employee-focused sibling tools
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this versus list_employees or other list tools, nor when to filter by type
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
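The two filters the schema documents compose independently: `type` narrows to system or community (omit for both), and `query` is a case-insensitive name search. A sketch modeling that documented behavior on the client side, with an illustrative library; this mirrors the schema's semantics, not the server's actual implementation:

```python
def filter_personalities(items, type=None, query=None):
    """Model the documented semantics of list_personalities_tool:
    optional type filter (system/community), optional case-insensitive
    substring match on the personality name."""
    results = items
    if type is not None:
        results = [p for p in results if p["type"] == type]
    if query is not None:
        q = query.lower()
        results = [p for p in results if q in p["name"].lower()]
    return results

# Illustrative entries; the real library contents are not shown above.
library = [
    {"name": "Socratic Mentor", "type": "system"},
    {"name": "Pirate Captain", "type": "community"},
]
print(filter_personalities(library, query="mentor"))
```

Note the sketch has to guess whether results are names, IDs, or full objects, which is precisely the output-schema gap the evaluation flags.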
list_plans_tool (Grade A, Read-only)
List all available hosting plans with pricing and specifications
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true. The description adds valuable context about what data is returned ('pricing and specifications'), which partially compensates for the missing output schema. However, it doesn't elaborate on the implications of openWorldHint or idempotentHint=false.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence of nine words. Every word earns its place: 'List' (verb), 'all available' (scope), 'hosting plans' (resource), 'pricing and specifications' (returned fields). Zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameter-less read operation, the description is reasonably complete. It specifies the return payload ('pricing and specifications') which is crucial given the lack of output schema. Minor gaps include pagination behavior or filtering capabilities, though 'all' implies no filtering.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters, the baseline score is 4 per the rubric. The description appropriately makes no mention of parameters since none exist, and the 100% schema coverage indicates no additional parameter documentation is needed from the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List'), resource ('hosting plans'), and scope ('all available...with pricing and specifications'). It effectively distinguishes this tool from the HR-focused sibling tools (create_employee_tool, fire_employee_tool, etc.) by specifying a completely different domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. While the domain differs from employee-related siblings like list_employees_tool or list_contracts_tool, there is no explicit 'when to use' guidance or explanation of why one might query hosting plans in an employee management context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
put_employee_on_leave_tool (Grade A)
Put an active employee on leave (suspend without firing)
| Name | Required | Description | Default |
|---|---|---|---|
| identifier | Yes | Employee slug or ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable behavioral context beyond annotations by clarifying this is a 'suspend' action rather than termination, which helps interpret destructiveHint=false. Does not contradict annotations (mutation matches readOnlyHint=false). Could enhance by noting non-idempotent behavior implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient: seven words deliver the action, target resource, and key sibling differentiation. The parenthetical '(suspend without firing)' is high-density information with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter mutation tool with clear annotations. Captures the core business logic (temporary suspension). Minor gap: could mention reversibility via return_employee_from_leave_tool given the HR workflow context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('Employee slug or ID'), the schema fully documents the single parameter. The description adds no parameter-specific guidance, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Put on leave') + resource ('employee') + scope clarification ('without firing'). The parenthetical explicitly distinguishes this from the sibling fire_employee_tool, making the specific use case unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies preconditions by specifying 'active employee' and distinguishes from firing via 'without firing', but lacks explicit guidance on when to prefer this over return_employee_from_leave_tool or what happens if the employee is already on leave (idempotent=false suggests failure).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rehire_employee_tool (Grade A)
Re-hire (re-provision) a pending or previously fired employee
| Name | Required | Description | Default |
|---|---|---|---|
| identifier | Yes | Employee slug or ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=false and destructiveHint=false. The description adds value by specifying 're-provision' (suggesting resource allocation) and limiting scope to specific employee states. However, it doesn't address idempotency concerns (idempotentHint=false) or explain side effects of the openWorldHint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. The parenthetical '(re-provision)' adds technical nuance without verbosity, and the qualifying phrase 'pending or previously fired' precisely scopes the operation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, no output schema) and the presence of annotations, the description is adequately complete. It explains the operation's purpose and target state sufficiently, though it could mention the identifier requirement or non-idempotent behavior explicitly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('Employee slug or ID'), the baseline is 3. The description does not mention the identifier parameter or provide additional semantic context (e.g., examples of slugs vs IDs), but the schema adequately documents the input.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Re-hire'/'re-provision') and resource ('employee'), and explicitly qualifies the target as 'pending or previously fired.' This clearly distinguishes the tool from sibling create_employee_tool (for new hires) and fire_employee_tool (for termination).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use the tool ('pending or previously fired' employees), which implies the target state. However, it lacks explicit exclusions (e.g., 'do not use for currently active employees') or named alternatives (e.g., 'use create_employee_tool for new hires').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
return_employee_from_leave_tool (Grade B)
Return an on-leave employee to active duty (resume)
| Name | Required | Description | Default |
|---|---|---|---|
| identifier | Yes | Employee slug or ID | |
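The single required argument maps directly onto an MCP tools/call request. A minimal sketch of the JSON-RPC payload, where the slug "support-bot" is a hypothetical example:

```python
import json

# Sketch of an MCP tools/call request for return_employee_from_leave_tool.
# The slug "support-bot" is a hypothetical example identifier.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "return_employee_from_leave_tool",
        "arguments": {"identifier": "support-bot"},
    },
}

print(json.dumps(request, indent=2))
```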
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description implies the employee must be in 'on-leave' status (a precondition), adding context beyond the annotations. However, it does not explain what happens if invoked twice (idempotentHint=false), what errors occur if the employee isn't on leave, or how openWorldHint=true affects behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-sentence description is appropriately brief and front-loaded with the verb. The parenthetical '(resume)' is slightly redundant with 'Return... to active duty' but serves as useful clarification without adding bulk.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter state transition tool with safety annotations provided, the description covers the core intent adequately. However, it lacks edge case documentation (e.g., handling of already-active employees) that would be necessary for a higher score given the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('Employee slug or ID'), the schema fully documents the single parameter. The description adds no additional semantic information about identifier formats, validation rules, or lookup behavior, meriting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Return... to active duty'), the target resource ('on-leave employee'), and distinguishes itself from siblings like put_employee_on_leave_tool and rehire_employee_tool by specifying this is a resume operation for existing employees currently on leave.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives such as rehire_employee_tool (for previously terminated staff) or update_employee_tool. It does not specify prerequisites (e.g., employee must be on leave status) or failure modes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
show_employee_tool (Grade B, read-only)
Show full details for a single employee (tenant instance)
| Name | Required | Description | Default |
|---|---|---|---|
| identifier | Yes | Employee slug or ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds '(tenant instance)', which contextualizes the openWorldHint annotation's multi-tenancy scope, and 'full details' hints at comprehensive data retrieval. However, it fails to explain why idempotentHint=false is set on a read operation (an unusual combination) or which specific data domains 'full details' includes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single sentence with action-fronted structure. No redundant words. The '(tenant instance)' qualifier is efficiently placed. However, the brevity borders on under-specification given the lack of output schema to define 'full details'.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a low-complexity single-parameter read operation. Annotations cover safety (readOnlyHint, destructiveHint), reducing the description's burden. However, without an output schema, 'full details' remains vague regarding whether it includes nested relationships, permissions, or historical data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema adequately documents the identifier parameter as 'Employee slug or ID'. The description adds no parameter-specific guidance (e.g., accepted formats, examples), meeting the baseline expectation when the schema carries full descriptive burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb ('Show') and specific resource ('full details for a single employee'). It distinguishes from list_employees_tool (single vs. list) and show_employer_tool (employee vs. employer). However, it doesn't clarify how it differs from specialized siblings like employee_timesheet_tool or employee_wellness_check_tool that might also return employee data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. It doesn't mention that list_employees_tool should be used for discovery/search, or that specialized tools exist for timesheet/wellness data. No prerequisites or filtering guidance is provided despite the 'full details' scope being potentially expensive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
show_employer_tool (Grade A, read-only)
Show your employer (account) details and employee roster
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
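Because this tool takes no parameters, the arguments object in a tools/call request is present but empty; a minimal sketch of that payload:

```python
import json

# Sketch of a tools/call request for a zero-parameter tool:
# the arguments object is supplied but empty.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "show_employer_tool", "arguments": {}},
}

print(json.dumps(request))
```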
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and openWorldHint. The description adds valuable context about the returned content (account details plus employee roster), but does not explain behavioral nuances like the idempotentHint: false status or the structure/format of the roster data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no redundant words. It front-loads the action ('Show') and immediately clarifies the scope with the parenthetical '(account)', making every word earn its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description partially compensates by mentioning 'employee roster,' but lacks detail on data structure, field types, or whether the employer details include sensitive administrative information. Adequate but incomplete for full comprehension.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters, the baseline score applies. The description appropriately makes no parameter claims since the input schema is empty, meeting expectations without adding or omitting critical information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Show') and clearly identifies the resource ('employer (account) details and employee roster'). It effectively distinguishes from siblings like 'show_employee_tool' (individual employee) and 'list_employees_tool' (simple list) by specifying employer-level account information plus roster.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not clarify when to prefer this over 'list_employees_tool' for roster retrieval, or mention that this returns account-level metadata rather than individual employee records.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_employee_tool (Grade B)
Update an employee's name
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | New name for the employee | |
| identifier | Yes | Employee slug or ID | |
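Since both parameters are required and the description offers no extra guidance, an agent can fail fast by checking its argument set against the table above before calling. A minimal sketch (the helper name is hypothetical):

```python
# Required parameter names, mirroring the table above.
REQUIRED = {"name", "identifier"}

def missing_args(arguments: dict) -> set:
    """Return the required parameter names absent from the argument set."""
    return REQUIRED - arguments.keys()

# A call missing "name" is caught before it reaches the server.
assert missing_args({"identifier": "support-bot"}) == {"name"}
assert missing_args({"name": "Ada", "identifier": "support-bot"}) == set()
```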
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations indicate openWorldHint=true and idempotentHint=false, the description adds no context about what external systems might be affected or why repeated calls with the same name might behave differently (e.g., audit logs, notifications).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient at four words with zero redundancy, though arguably too brief for a mutation tool with external side effects indicated by annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the mutation nature (readOnlyHint=false), openWorldHint=true suggesting external integrations, and rich sibling ecosystem, the description lacks necessary behavioral context about side effects, return values, or differentiation from other employee modification tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage ('New name for the employee', 'Employee slug or ID'), the schema fully documents parameters. The description mentions 'name' but adds no syntax constraints, validation rules, or format examples beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb (Update), resource (employee), and exact field (name), clearly distinguishing it from sibling tools like create_employee_tool, fire_employee_tool, or show_employee_tool which handle different lifecycle operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus alternatives like fire_employee_tool for termination or put_employee_on_leave_tool for status changes. No prerequisites or exclusion criteria mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
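Before publishing, the file's shape can be sanity-checked locally; a minimal sketch using the field names from the snippet above:

```python
import json

# Sanity-check a glama.json payload before publishing it at
# /.well-known/glama.json. Field names follow the snippet above.
doc = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")

assert doc["$schema"].startswith("https://glama.ai/")
assert all("email" in m for m in doc["maintainers"])
print("glama.json shape looks valid")
```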
- Control your server's listing on Glama, including description and metadata
- Receive usage reports showing how your server is being used
- Get monitoring and health status updates for your server
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.