ContextLayer
Server Details
Intelligent context infrastructure for AI teams: knowledge graph, sessions, tasks, documents.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 62 of 62 tools scored. Lowest: 2.9/5.
Tools are organized into clear domains (org, team, project, task, session, document, knowledge graph, notifications, positions, stats). Most have distinct purposes, though 'whoami' and 'get_my_stats' overlap slightly, and some session/task creation tools could be confused if not read carefully.
Almost all tools follow a consistent verb_noun snake_case pattern (e.g., list_teams, create_task, update_project). The only deviation is 'whoami', which is a single word but is a common idiom. Overall, the naming is highly predictable and systematic.
With 62 tools, the server is extremely large and covers an unusually broad range of operations for a single MCP server. While each tool may be justified, the sheer number makes it difficult for agents to navigate and select correctly, exceeding the typical well-scoped range of 3-15 tools.
The tool set provides full CRUD and lifecycle coverage for organizations, teams, projects, tasks, sprints, sessions, documents, positions, and notifications. It includes advanced features like task dependencies, context routing, and batch operations. No obvious gaps are apparent for the stated purposes.
Available Tools
62 tools

accept_org_invite (B)
Accept a pending organization invite by organization name.
| Name | Required | Description | Default |
|---|---|---|---|
| org_name | Yes | Organization name to accept the invite for | |
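For illustration, here is a minimal sketch of the `tools/call` payload an MCP client might send for this tool; the JSON-RPC envelope follows the MCP shape, and the organization name is a placeholder:

```python
import json

# Hypothetical MCP tools/call payload for accept_org_invite.
# "acme" is a placeholder organization name.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "accept_org_invite",
        "arguments": {"org_name": "acme"},
    },
}
print(json.dumps(request, indent=2))
```

Since `org_name` is the only parameter, the call surface is small; the open questions the review raises (return value, error on no pending invite) are exactly what the payload cannot tell you.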
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description is minimal and does not disclose side effects, permissions needed, reversibility, or what happens upon acceptance. Without annotations, the description should provide more behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no extraneous words. The purpose is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-parameter tool with no output schema or annotations, the description lacks completeness. It does not mention return values, error conditions, or state changes, which are needed for full understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description adds no extra meaning beyond the schema's description of 'org_name'. Baseline 3 is appropriate as the schema already documents the parameter adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (accept) and the object (organization invite), with the condition of using the organization name. It distinguishes itself from sibling tools like 'list_pending_invites' which list invites, and 'invite_user' which sends invites.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use versus alternatives, such as checking for pending invites first with 'list_pending_invites', or prerequisites. The description implies usage but does not specify context or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
add_task_dependency (A)
Add a dependency between tasks. Use this to express that a task is blocked by another task (must complete first) or relates to it. This prevents context contamination: when Task B depends on Task A, agents working on Task B will be warned if Task A is still in progress.
| Name | Required | Description | Default |
|---|---|---|---|
| task_id | Yes | The task that is blocked (depends on the other task) | |
| dependency_type | No | Dependency type: 'blocks' (must complete before) or 'relates_to' (informational link) | blocks |
| depends_on_task_id | Yes | The task that blocks (the dependency / prerequisite) | |
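A sketch of client-side argument construction for this tool, assuming placeholder task IDs; the fallback to the documented `blocks` default and the two allowed values are taken from the schema:

```python
# Hypothetical argument builder for add_task_dependency.
# Omitting dependency_type lets the server apply its documented
# default of "blocks".
def build_dependency_args(task_id, depends_on_task_id, dependency_type=None):
    args = {"task_id": task_id, "depends_on_task_id": depends_on_task_id}
    if dependency_type is not None:
        if dependency_type not in ("blocks", "relates_to"):
            raise ValueError("dependency_type must be 'blocks' or 'relates_to'")
        args["dependency_type"] = dependency_type
    return args

# Task B is blocked by Task A (placeholder IDs).
print(build_dependency_args("task-B", "task-A"))
```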
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears the full burden. It describes a behavioral effect (warn if prerequisite in progress, prevent context contamination) but does not disclose authorization needs, potential errors (e.g., circular dependencies), or side effects. The description adds some value but is not fully transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, concise, and front-loaded with the main action. Every sentence adds value—purpose, usage context, and an example of the benefit.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description does not mention return values or error handling. It explains the core functionality and context contamination prevention, but lacks details on success/failure responses or error cases, leaving some gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%; each parameter already has clear descriptions. The tool description does not add meaningful information beyond the schema, achieving the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it adds a dependency between tasks, specifying the relationship (blocked or relates). It uses a specific verb and resource, distinguishing it from sibling tools like remove_task_dependency.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use the tool: to indicate that a task is blocked by another or relates to it. It also mentions context contamination, providing context. However, it does not explicitly state when not to use it or compare alternatives, though the presence of remove_task_dependency implies removal is a separate action.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
add_team_member (A)
Add a member to a team. Find the team by name or ID, and the user by name or email.
| Name | Required | Description | Default |
|---|---|---|---|
| role | No | Role in team: member or lead | member |
| user | Yes | User name or email to add | |
| team_id | No | Team ID (optional if team_name is provided) | |
| team_name | No | Team name (optional if team_id is provided) | |
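The either/or team identifier is the one non-obvious constraint here. A sketch of how a client might enforce it before calling, assuming placeholder names (the schema marks both `team_id` and `team_name` optional, but one must be supplied for the server to resolve the team):

```python
# Hypothetical validation for add_team_member's either/or identifiers.
def build_add_member_args(user, team_id=None, team_name=None, role="member"):
    if team_id is None and team_name is None:
        raise ValueError("provide team_id or team_name")
    args = {"user": user, "role": role}
    if team_id is not None:
        args["team_id"] = team_id
    else:
        args["team_name"] = team_name
    return args

# Placeholder user and team name.
print(build_add_member_args("dana@example.com", team_name="Platform"))
```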
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It only states the action without detailing side effects, required permissions, error conditions (e.g., user already in team), or mutation implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two concise sentences with no unnecessary fluff. Every word adds value, making it highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description is minimal. It explains the core action and parameter identification but omits return value, error scenarios, or prerequisites. This is adequate for a simple tool but not fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, and the schema already explains each parameter. The description adds value by clarifying the alternative usage of team_id/team_name and that user can be name or email, which goes beyond the schema's individual descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Add a member to a team') and the resource ('team'), with explicit details on how to identify the team (by name or ID) and the user (by name or email). This distinguishes it from sibling tools like 'remove_team_member'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on how to use the tool (identify team via name/ID, user via name/email). However, it lacks explicit 'when to use' vs 'when not to use' guidance or references to alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
assign_task (A)
Assign a user to a task by name or email. The user is added to the task's assignees and linked in the knowledge graph.
| Name | Required | Description | Default |
|---|---|---|---|
| user | Yes | User name or email to assign | |
| task_id | Yes | Task ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden of transparency. It discloses the effect (user added to assignees and linked in knowledge graph) but does not specify whether it appends or overwrites, nor does it mention authorization or side effects. This is partial disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two clear, front-loaded sentences with no filler. Every sentence contributes value: the first states the action, the second explains the consequence.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 params, no output schema), the description explains the key side effect and the input format. It lacks only the return value and any error conditions, but is otherwise complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, with both parameters described. The description adds no new meaning beyond the schema, simply restating that assignment is by name or email. Baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: assigning a user to a task by name or email. It specifies the resource (task), the action (assign), and the method (by name or email). This distinguishes it from sibling tools like unassign_task and add_team_member.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide explicit guidance on when to use this tool versus alternatives. It lacks preconditions (e.g., user must exist, task must exist) and does not mention the complementary unassign_task tool. Usage is only implied by the purpose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_position (A)
Create a new job position/title in your organization (admin only).
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | Position title (e.g. 'Frontend Developer') | |
| description | No | Position description (optional) | |
| permissions | No | JSON object mapping permission keys to booleans, e.g. {"can_view_all_projects": true, "can_manage_documents": false}. Use list_positions to see all available permission keys. | |
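A sketch of the arguments for this tool; the permission keys are the ones from the schema's own example, and per the schema the real key list should be read via list_positions first:

```python
import json

# Hypothetical create_position arguments. The permission keys come
# from the schema's example; real keys come from list_positions.
args = {
    "title": "Frontend Developer",
    "description": "Owns the web client",  # placeholder description
    "permissions": {
        "can_view_all_projects": True,
        "can_manage_documents": False,
    },
}
# Python booleans serialize to JSON true/false as the schema expects.
print(json.dumps(args))
```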
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries full burden. It mentions 'admin only' (authorization trait) but does not disclose side effects, idempotency, or behavior on duplicates. Basic but incomplete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It is front-loaded with the action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description covers the basic purpose, it omits return value details and error scenarios. Given no output schema and no annotations, the description could be more informative for a creation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents all parameters. The description adds no extra parameter information beyond the context of admin-only. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the verb 'Create' and the resource 'job position/title', distinguishing it from sibling tools like delete_position, update_position, and list_positions. The addition of 'admin only' further clarifies scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates an admin-only constraint but provides no explicit guidance on when to use this tool versus alternatives like update_position. It lacks 'when not to use' or sibling comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_project (B)
Create a new project in your organization. You will be added as the project owner.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Project name | |
| description | No | Project description (optional) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses user becomes owner, but lacks details on defaults, visibility, or side effects. With no annotations, more behavioral transparency is expected.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with core action, no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple create tool with two parameters; covers purpose and key outcome but misses defaults or post-creation context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and description adds no extra meaning beyond the schema fields. Baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Create a new project in your organization' with specific verb and resource, and adds that user becomes owner, distinguishing it from update/delete siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives like update_project or delete_project, nor prerequisites such as needing an organization.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_sprint (A)
Create a new sprint for a project. Automatically migrates non-completed tasks (todo/in_progress) from the previous sprint. On first use, creates Sprint 0 (Backlog) for existing tasks and Sprint 1 as the active sprint.
| Name | Required | Description | Default |
|---|---|---|---|
| goal | No | Sprint goal (optional) | |
| name | No | Sprint name | Sprint N |
| project_id | No | Project ID (alternative to project_name) | |
| project_name | No | Project name (alternative to project_id) | |
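Because the server handles migration and naming automatically, a minimal call needs only a project identifier. A sketch with placeholder values, showing the minimal and fully explicit forms:

```python
# Hypothetical create_sprint argument sets. Per the description, the
# server migrates unfinished tasks from the previous sprint itself,
# so only a project identifier (name or ID) is strictly needed.
minimal = {"project_name": "Website Redesign"}  # name defaults to 'Sprint N'
explicit = {
    "project_id": "proj-123",  # placeholder ID, alternative to project_name
    "name": "Sprint 4",
    "goal": "Ship the checkout flow",
}
print(minimal)
print(explicit)
```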
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries full burden. It clearly explains creation and migration behavior, including first-use details. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences efficiently convey purpose and behavior. No extraneous information. Front-loaded with core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and moderate complexity, the description sufficiently covers creation, migration, and first-use behavior. Schema covers params. Complete enough for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all 4 parameters. The description adds no additional parameter meaning beyond the schema, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a new sprint for a project and describes automatic task migration. It distinguishes from sibling tools like list_sprints and move_task_to_sprint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use (to create a sprint) and describes automatic migration behavior, including first-use scenario. It does not explicitly state when not to use or list alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_task (B)
Create a new task in a project. Tasks track work items and can be assigned to team members and linked to sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| tags | No | Tags for categorization (e.g. ['bug', 'frontend']) | |
| title | Yes | Task title | |
| due_at | No | Due date in ISO 8601 format (e.g. '2026-03-15T00:00:00Z') | |
| team_id | No | Team ID (alternative to team_name) | |
| priority | No | Priority: low, medium, high, urgent | medium |
| assignees | No | Assignee names or emails (optional — resolves to user IDs) | |
| team_name | No | Team name to assign to (optional — resolves to team ID) | |
| depends_on | No | Task IDs that this task depends on / is blocked by (optional). Creates 'blocks' dependencies. | |
| project_id | No | Project ID (alternative to project_name) | |
| description | No | Task description (optional) | |
| project_name | No | Project name to create the task in | |
| sprint_number | No | Sprint number to assign the task to (optional — auto-assigns to current sprint if omitted) | |
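With twelve parameters, this is the widest tool in the set, but only `title` is required. A sketch of a fuller call with placeholder values, leaning on the server-side resolution the schema documents (names to IDs, omitted sprint to the current one):

```python
# Hypothetical create_task arguments exercising the optional fields.
# Only title is required; project_name and assignees are resolved to
# IDs server-side, and sprint_number is omitted so the task lands in
# the current sprint.
args = {
    "title": "Fix login redirect",
    "project_name": "Website Redesign",      # placeholder project
    "priority": "high",
    "tags": ["bug", "frontend"],
    "assignees": ["dana@example.com"],        # placeholder user
    "due_at": "2026-03-15T00:00:00Z",
    "depends_on": ["task-42"],                # placeholder ID; creates a 'blocks' dependency
}
print(args)
```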
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It only states the creation action and mentions linking to sessions, but lacks details on permissions, idempotency, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the action, and contains no fluff. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high parameter count (12) and no output schema, the description is somewhat minimal. It explains the purpose well but does not describe return values or provide context on how to handle the many optional parameters. Schema descriptions help, but overall completeness is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are well-described in the schema. The description adds minimal extra meaning (e.g., 'can be assigned to team members and linked to sessions' hints at assignees and session linking), but not enough to raise the score above baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Create a new task) and the resource (in a project). It also explains what tasks are for, but does not differentiate from sibling tools like update_task or assign_task.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, such as assign_task or update_task. It does not mention prerequisites or conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_team (A)
Create a new team in your organization. You will be added as team lead.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Team name | |
| objectives | No | Team objectives as a list of strings (optional) | |
| description | No | Team description (optional) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds the key behavioral detail that the caller will be added as team lead, which goes beyond the input schema. With no annotations provided, this is a positive disclosure for a creation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, consisting of two short sentences that efficiently convey the purpose and a notable side effect. No unnecessary words are present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity of the tool (3 parameters, no output schema), the description adequately covers the essential purpose and side effect, and is complete enough for an agent to understand the tool's function.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description does not add any additional parameter context beyond what the schema already provides, so it meets the baseline without enhancement.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Create' and resource 'team', and explicitly notes that the caller will be added as team lead, which distinguishes it from other creation tools on the server such as create_project or create_task.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for creating a new team, but does not provide explicit guidance on when to use this versus alternative tools like update_team or delete_team. No context or exclusions are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_document (A)
Delete a document from the knowledge base. This permanently removes the document.
| Name | Required | Description | Default |
|---|---|---|---|
| document_id | Yes | Document ID to delete | |
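Since the description flags removal as permanent and no annotations exist, a cautious client wrapper is a reasonable pattern. A sketch with a hypothetical confirmation guard (the guard is not part of the tool; it illustrates how an agent harness might treat a destructive call):

```python
# Hypothetical client-side guard for the destructive delete_document
# tool: the description says removal is permanent, so require an
# explicit confirmation before building the arguments.
def build_delete_args(document_id, confirmed=False):
    if not confirmed:
        raise RuntimeError("deletion is permanent; set confirmed=True")
    return {"document_id": document_id}

# "doc-7" is a placeholder document ID.
print(build_delete_args("doc-7", confirmed=True))
```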
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Given no annotations, the description carries full burden and states 'permanently removes' which is critical behavioral info for a destructive operation. However, it lacks details on side effects, error handling, or authentication requirements that would make it fully transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences, front-loading the action 'Delete a document' and following with the critical permanence note. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-parameter tool, the description covers the action and permanence. However, it omits context like whether the document must exist, what happens to associated data, or any confirmation steps. With no output schema, additional detail on return behavior would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a single parameter 'document_id' which is self-explanatory. The description adds no extra semantics beyond what the schema provides, so baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Delete' and the resource 'document', specifying the scope 'from the knowledge base'. It distinguishes from sibling tools like 'delete_task' or 'delete_user' by focusing on document deletion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as archiving or updating a document. It does not mention prerequisites or conditions (e.g., required permissions) that might affect usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_position (Grade A)
Delete a position permanently (admin only). Find by title or ID.
| Name | Required | Description | Default |
|---|---|---|---|
| position_id | No | Position ID (optional if position_title is provided) | |
| position_title | No | Position title (optional if position_id is provided) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the permanent and destructive nature of the operation, but lacks details on side effects (e.g., impact on associated data), error conditions, or response behavior. With no annotations provided, the description carries the full burden, and while it addresses the core behavioral trait, it is incomplete.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with only two short sentences. Every word adds value, and there is no redundancy or unnecessary information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive operation with no output schema, the description does not specify what the tool returns (e.g., success confirmation, error handling) or provide details about prerequisites or post-conditions. This leaves the agent with insufficient information to fully understand the tool's behavior.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for both parameters, so the description adds little beyond what the schema already states. The description mentions 'Find by title or ID' which aligns with the schema's mutual exclusivity, but does not provide additional semantic value.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (delete), the resource (position), and key constraints (permanent, admin only). It also explains how to identify the position (by title or ID), effectively distinguishing it from sibling tools like update_position or list_positions.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates who can use the tool (admin only) and implies that deletion is irreversible, but does not explicitly state when to use this tool versus alternatives or provide exclusion criteria. The usage context is implied rather than explicit.
delete_project (Grade A)
Delete a project permanently (admin only). Find by name or ID.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | No | Project ID (optional if project_name is provided) | |
| project_name | No | Project name (optional if project_id is provided) | |
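The "find by name or ID" pattern above is shared by delete_project, delete_position, and delete_team: both parameters are optional, but at least one must be supplied. A client-side sketch of that validation follows; the at-least-one rule is inferred from the schema descriptions, not confirmed server behavior, and "Apollo" is an invented example name.

```python
from typing import Optional

def build_delete_project_args(project_id: Optional[str] = None,
                              project_name: Optional[str] = None) -> dict:
    """Return an arguments dict, requiring at least one identifier."""
    if project_id is None and project_name is None:
        raise ValueError("provide project_id or project_name")
    args = {}
    if project_id is not None:
        args["project_id"] = project_id
    if project_name is not None:
        args["project_name"] = project_name
    return args
```

The same helper shape works for position_id/position_title and team_id/team_name.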
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses permanence and admin restriction, but does not detail consequences like cascading deletion of tasks, documents, or team associations. This is adequate but not thorough for a destructive operation.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no superfluous words. Every word adds value: verb, resource, permanence, permission, and identification method.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple deletion tool with two parameters and no output schema, the description is minimal but adequate. However, it lacks information about side effects (e.g., impact on related entities) which would be useful for an irreversible action. It meets the minimum viable standard.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%: both parameters (project_id, project_name) are already described clearly with mutual exclusivity. The description adds 'Find by name or ID', which reinforces the schema but does not introduce new semantic meaning beyond what is already present.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('delete'), the resource ('project'), and key qualifiers ('permanently', 'admin only'). It also specifies how to identify the project ('by name or ID'), and the tool is distinct from sibling delete tools like delete_task or delete_document.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use ('admin only') but does not explicitly state when not to use or suggest alternatives (e.g., deactivating a project via update_project). It conveys the prerequisite but lacks comparative guidance.
delete_task (Grade A)
Delete a task permanently. The task and all its assignee links are removed.
| Name | Required | Description | Default |
|---|---|---|---|
| task_id | Yes | Task ID to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, but the description discloses permanent deletion and the removal of assignee links. It could elaborate on prerequisites or further side effects, but it covers the core behavior.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two succinct sentences, front-loaded with key information. No filler or repetition.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema, no annotations), the description covers purpose, permanence, and side effects. Minor gaps remain, such as authorization and error conditions, but it is adequate.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%. Description adds no extra meaning beyond the schema's parameter description. Baseline score applies.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states verb 'Delete', resource 'task', and clarifies permanence and removal of assignee links. Clearly differentiates from update and create siblings.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use vs alternatives. Usage is implied but adequate for a simple deletion tool.
delete_team (Grade A)
Delete a team permanently (admin only). Find by name or ID.
| Name | Required | Description | Default |
|---|---|---|---|
| team_id | No | Team ID (optional if team_name is provided) | |
| team_name | No | Team name (optional if team_id is provided) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that deletion is permanent, which is crucial for a destructive operation. With no annotations, it partially meets the burden but lacks details on consequences (e.g., cascading effects) or recovery options.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that conveys all essential information without unnecessary words. It is front-loaded and efficient.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple delete operation with no output schema, the description covers purpose, privilege, and identification. It could mention irreversibility more explicitly or expected outcome, but it is largely sufficient.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter descriptions. The description adds marginal value by confirming the two parameters are alternatives, but it doesn't provide new information beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (delete a team permanently) and specifies who can use it (admin only). It also explains how to identify the team (by name or ID), distinguishing it from other team-related tools like create_team or update_team.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear prerequisite (admin only) and hints at parameter usage (by name or ID). However, it does not explicitly state when to avoid this tool or mention alternatives for similar operations.
delete_user (Grade B)
Delete (soft-delete) a user from your organization (admin/owner only).
| Name | Required | Description | Default |
|---|---|---|---|
| user | Yes | User name or email to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It states 'soft-delete' but does not explain whether the operation is reversible, what happens to associated data, or any side effects. This is insufficient for a deletion tool.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence with no unnecessary words, front-loading the key action and resource.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (1 param, no output schema), the description is incomplete. It lacks details about reversibility, confirmation, response format, and potential side effects, which are important for a user deletion tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage for the single parameter 'user', so the schema already provides its meaning. The description adds no additional semantics beyond what is in the schema's description.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Delete' (with soft-delete clarification) and the resource 'a user from your organization'. It includes an important usage qualifier (admin/owner only), but does not distinguish from the sibling tool 'remove_user_from_org', which could be interpreted as a similar operation.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a prerequisite (admin/owner only), but lacks guidance on when to use this tool versus alternatives like 'remove_user_from_org' or 'invite_user'. No explicit when-not or alternative tools are mentioned.
end_session (Grade A)
End a work session. This triggers context block generation — your activities are summarized and stored in the knowledge graph for future reference.
| Name | Required | Description | Default |
|---|---|---|---|
| outcome | Yes | Outcome: success, failure, partial, abandoned | |
| session_id | Yes | Session ID to end | |
| token_usage | No | Token usage for this session. JSON object with fields: input_tokens (int), output_tokens (int), total_tokens (int), model (string), estimated_cost_usd (float). Optional — set when the agent can report consumption. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the behavioral side effect of context generation and storage, which is useful given the lack of annotations, but it does not clarify whether the session is irreversibly terminated or give any other behavioral details beyond the stated side effect.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences that front-load the primary action and then add a valuable behavioral note; every word earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description covers the tool's purpose and side effect, it lacks guidance on usage context and does not describe the return value (no output schema), leaving some gaps for an agent to confidently invoke the tool without additional clues.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already has 100% description coverage for all parameters, and the tool description adds no additional meaning beyond what is in the schema, so baseline score of 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's action 'End a work session' and its primary side effect of triggering context block generation and knowledge graph storage, distinguishing it from related sibling tools like start_session.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No usage guidelines are provided; the description does not specify when to use this tool (e.g., only active sessions), nor does it mention alternatives or prerequisites, leaving the agent to infer context.
find_documents (Grade A)
Search documents in the organization's knowledge base. Filter by category, tags, project, or free-text query. Returns document metadata — use get_document to read the full content.
| Name | Required | Description | Default |
|---|---|---|---|
| tags | No | Filter by tags | |
| limit | No | Maximum results | 10 |
| query | No | Search query (matches title and content) | |
| category | No | Filter by category: template, policy, procedure, reference, contract, guide, checklist | |
| team_name | No | Filter by team name | |
| project_name | No | Filter by project name | |
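Since every find_documents parameter is optional, a call is just whichever filters apply plus a result limit. A sketch of assembling those arguments follows; the category values and the default limit of 10 come from the table above, but applying the default client-side (and the "onboarding" query) are illustrative choices only.

```python
# Allowed category values, quoted from the parameter table above.
CATEGORIES = {"template", "policy", "procedure", "reference",
              "contract", "guide", "checklist"}

def build_find_documents_args(query=None, category=None, tags=None,
                              team_name=None, project_name=None, limit=10):
    """Assemble find_documents arguments, skipping unset filters."""
    if category is not None and category not in CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    args = {"limit": limit}
    for key, value in (("query", query), ("category", category),
                       ("tags", tags), ("team_name", team_name),
                       ("project_name", project_name)):
        if value is not None:
            args[key] = value
    return args

args = build_find_documents_args(query="onboarding", category="guide")
```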
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, and the description does not mention behavioral traits like read-only, authorization needs, or rate limits. For a search tool, it's acceptable but lacking depth.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose and filters, no redundant information. Every sentence adds value.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks details about returned metadata fields or pagination. Adequate but not complete given no output schema.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description summarizes filter options but adds little beyond schema descriptions.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb 'Search', resource 'documents', and scope 'organization's knowledge base'. Lists filters and distinguishes from get_document sibling.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes when to use (searching documents) and points to get_document for full content. No explicit when-not or additional alternatives, but the guidance is clear.
get_context (Grade A)
Get intelligent context from the knowledge graph. Pass a query describing what you need — the system will automatically route to the right data sources (tasks, sessions, documents, teams, blocks) based on intent analysis + graph traversal + semantic similarity. Optionally narrow scope with project name or session ID.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | The user's question or intent — used for intelligent context routing. Describe what context is needed. | |
| max_tokens | No | Maximum tokens for the response | 500 |
| project_id | No | Project ID (optional — if omitted, returns across all projects) | |
| session_id | No | Session ID (optional — if omitted, returns user-level context) | |
| project_name | No | Project name (optional, alternative to project_id) | |
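The table above makes `query` the one required field, with optional scoping via project (by ID or name) and session. A sketch of assembling a call follows; the max_tokens default of 500 and the project_name-as-alternative rule come from the table, while the query text and "Auth Revamp" name are invented.

```python
def build_get_context_args(query: str, max_tokens: int = 500,
                           project_id=None, project_name=None,
                           session_id=None) -> dict:
    """Assemble get_context arguments; query drives the routing."""
    if not query:
        raise ValueError("query is required for context routing")
    args = {"query": query, "max_tokens": max_tokens}
    if project_id is not None:
        args["project_id"] = project_id
    elif project_name is not None:
        args["project_name"] = project_name  # alternative to project_id
    if session_id is not None:
        args["session_id"] = session_id
    return args

args = build_get_context_args("what changed in the auth module this sprint?",
                              project_name="Auth Revamp")
```

Omitting both project fields would, per the schema, return context across all projects.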
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. Discloses use of intent analysis, graph traversal, and semantic similarity for routing. However, does not mention any side effects, auth requirements, or limitations (e.g., read-only behavior).
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences: purpose, mechanism, optional scope. No redundant words, front-loaded with key action. Every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complex multi-source routing and many sibling tools, the description adequately explains scope and mechanism. Lacks output schema but per rules that's not required. Could be more explicit about read-only nature, but overall sufficient.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds value by explaining the query's purpose ('describe what context is needed'), stating default for max_tokens, and clarifying project_name as an alternative to project_id.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool gets intelligent context from the knowledge graph, using a query for automatic routing. Differentiates from sibling tools like get_document or get_session_details by emphasizing multi-source, intent-driven retrieval.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit instructions: pass a query and optionally narrow scope with project name or session ID. Does not explicitly state when not to use or mention alternatives, but the intelligent routing hints at its unique role among siblings.
get_context_routing (Grade A)
Get the current context routing mode for the organization. Returns 'keyword_llm', 'keyword_only', or 'llm_only'.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
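Because the tool publishes no output schema, the three modes quoted in the description are the only contract a client can check against. A small validation sketch follows; treating the result as a bare string is an assumption, not documented behavior.

```python
# Routing modes quoted verbatim from the tool description above.
ROUTING_MODES = {"keyword_llm", "keyword_only", "llm_only"}

def is_valid_routing_mode(mode: str) -> bool:
    """Check a get_context_routing result against the documented modes."""
    return mode in ROUTING_MODES
```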
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully covers the expected behavior: it returns one of three possible values. It does not note any side effects or permissions, but as a simple getter, this is acceptable.
Is the description appropriately sized, front-loaded, and free of redundancy?
One sentence, no filler. The most critical information is front-loaded: the action and resource.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with no parameters and no output schema, the description is complete: it states the purpose and the exact return values.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so the baseline is 4. The description adds value by specifying the possible return values, which are not in the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get'), the resource ('context routing mode for the organization'), and the exact return values ('keyword_llm', 'keyword_only', 'llm_only'). It distinguishes from the sibling tool 'set_context_routing' by its read-only nature.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for reading the current context routing mode, contrasting with 'set_context_routing'. However, it does not provide explicit guidance on when to use (e.g., before setting a new mode) or mention any prerequisites.
get_document (Grade A)
Get the full content of a document by its ID. Use find_documents first to search, then get_document to read the content. For binary documents (PDFs, images), content is returned base64-encoded with encoding='base64' and the mime_type field.
| Name | Required | Description | Default |
|---|---|---|---|
| document_id | Yes | Document ID | |
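The base64 convention in the description (binary content flagged with encoding='base64' plus a mime_type field) suggests a simple client-side decode step. A sketch follows; the field names come from the description above, but the overall result shape and the sample PDF payload are fabricated for illustration.

```python
import base64

def extract_content(result: dict) -> bytes:
    """Return document bytes, decoding base64 when the result flags it."""
    if result.get("encoding") == "base64":
        return base64.b64decode(result["content"])
    return result["content"].encode("utf-8")  # plain-text document

# Fabricated sample mimicking a binary get_document result.
sample = {"content": base64.b64encode(b"%PDF-1.7 ...").decode("ascii"),
          "encoding": "base64", "mime_type": "application/pdf"}
data = extract_content(sample)
```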
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so description carries full burden. It adds value by explaining base64 encoding for binary docs and mime_type field, but does not mention potential effects, authentication, or limits.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two succinct sentences that front-load the primary purpose and follow with essential workflow and encoding details. No wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description adequately covers the tool's behavior for a simple read operation. The base64 encoding detail adds necessary completeness for binary files.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers 100% of parameters (single document_id) with description. The tool description does not add new semantic information beyond 'by its ID'.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get the full content') and the resource ('document by its ID'). It differentiates from sibling tools by suggesting 'find_documents first, then get_document to read the content'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly provides a workflow: use find_documents first, then get_document. Also gives specific guidance for binary documents, detailing encoding and field names.
get_live_activity (Grade B)
See what's happening right now: active sessions, recent events, traces, and tasks being worked on.
| Name | Required | Description | Default |
|---|---|---|---|
| team_name | No | Filter by team name (optional) | |
| user_name | No | Filter by user name (optional, partial match) | |
| project_name | No | Filter by project name (optional) | |
| recent_minutes | No | Time window for recent events in minutes | 30 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description is the sole source of behavioral info. It indicates the tool is a read-only operation showing aggregated live data. However, it lacks details on data freshness, pagination, or limits, making it only moderately transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise at one sentence, front-loading the key idea. It is appropriately sized, but could benefit from slightly more structure to list the data types it aggregates (e.g., sessions, events, traces, tasks). Still, it earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description should explain the return format and behavior more fully. It lacks details on how filters combine, default time windows, or what 'recent' means. These gaps make it harder for an agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already describes all parameters with 100% coverage (team_name, user_name, project_name, recent_minutes). The description does not add any additional meaning or context beyond what the schema provides, so the baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to see what's happening right now, covering active sessions, recent events, traces, and tasks. This distinguishes it from sibling tools like list_sessions or list_tasks, which are more specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide any guidance on when to use this tool vs. alternatives like list_sessions or list_tasks. No exclusions, prerequisites, or context for when it is appropriate to use the aggregated view.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_my_stats (Grade: A)
Get your personal activity statistics: total sessions, events, active days, projects worked on, and recent activity summary.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description clearly states what statistics are returned. Since no annotations are provided, the description carries the burden and adequately describes the output. No mention of mutability or side effects, but as a read-only tool, that is acceptable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that lists the key return values. It is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description sufficiently explains what the tool returns. Given no parameters and no output schema, it provides enough context for the agent to determine if this tool is appropriate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so the description does not need to add parameter details. The input schema is empty, and the description covers the purpose adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Get' and resource 'your personal activity statistics', listing key components (sessions, events, etc.). It distinguishes from sibling tools like get_org_stats by focusing on personal data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for retrieving the current user's activity statistics. However, it does not explicitly state when to use it versus other 'get' tools, or provide any exclusions or prerequisites. Context is clear but guidance is minimal.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_org_stats (Grade: B)
Get comprehensive organization statistics: counts, engagement metrics, task velocity.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must carry full burden. It does not disclose whether the tool is read-only, requires authentication, has rate limits, or returns cached vs live data. The description only lists output categories without behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence is concise but lacks structure (e.g., no bullet points or separators). It front-loads the purpose but omits important details like output format or usage context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and simple parameters, the description should explain the return value more thoroughly. It lists three metric types but no specifics on format, pagination, or scope. Incomplete for a comprehensive stats tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so coverage is trivially complete. The baseline is 4 since no parameter information is needed. The description adds value by indicating the tool returns organization-level statistics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get comprehensive organization statistics' with specific metrics (counts, engagement, task velocity). Distinguishes from siblings like get_my_stats (personal) and get_team_details (team-level).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives such as get_my_stats or get_project_status. The description does not mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_project_status (Grade: A)
Get the current status of a project including recent activity. You can use either the project name or ID.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | No | Project ID (optional if project_name is provided) | |
| project_name | No | Project name (optional, alternative to project_id) | |
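The either/or relationship between the two identifiers can be made concrete with a small argument builder. This is a sketch under the assumption (implied by the parameter descriptions) that at least one identifier must be supplied; the function name is hypothetical.

```python
def build_get_project_status_args(project_id=None, project_name=None):
    """Build a get_project_status payload (hypothetical helper).

    The schema marks both parameters optional, but the descriptions
    imply they are alternatives: at least one must be given."""
    if project_id is None and project_name is None:
        raise ValueError("Provide project_id or project_name")
    args = {}
    if project_id is not None:
        args["project_id"] = project_id
    if project_name is not None:
        args["project_name"] = project_name
    return args
```

What the server does when both are supplied and they disagree is not documented, which is part of the incompleteness noted below.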
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries the full burden. It does not disclose behavioral traits such as permissions, rate limits, or side effects. The mention of 'recent activity' hints at return content but lacks detail on behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the purpose and includes necessary parameter guidance without excess. Every part is meaningful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should provide more detail on the return format or structure. It mentions 'current status including recent activity' but does not specify what that entails. The parameter handling (both optional) is not fully explained.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already describes parameters with 100% coverage. The tool description adds value by clarifying that project_id and project_name are alternatives, which is not explicit in the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves the current status of a project including recent activity, which is distinct from sibling tools like 'list_projects' or 'update_project'. It also specifies the use of project name or ID as identifiers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving status of a specific project but provides no explicit guidance on when to use this tool versus alternatives, nor when not to use it. No exclusions or context are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_session_details (Grade: A)
Get full details of a session including all events/activities logged during it. Shows who did what, when, and the session block summary if available. Admins can view any session in the org; members can only view their own.
| Name | Required | Description | Default |
|---|---|---|---|
| session_id | Yes | Session ID to get details for | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description covers basic behavior (gathering details, events, activities, block summary) and access control. However, without annotations, it lacks deeper behavioral details like read-only nature, response format, error conditions, or rate limits, which are not fully disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise: two sentences. The first sentence clearly states the purpose and scope, and the second adds essential access constraints. No extraneous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 param, no output schema), the description covers purpose, scope, and access control adequately. It could mention what happens for invalid session IDs or direct users to list_sessions to obtain session IDs, but overall it's nearly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers the single parameter 'session_id' with 100% description coverage. The description does not add additional meaning beyond the schema's description, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets full details of a session including events/activities, who did what, when, and session block summary. This is a specific verb+resource combination that distinguishes it from siblings like list_sessions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear access control guidance: admins can view any session, members only their own. This helps agents decide when to use the tool. However, it does not explicitly name alternative tools for similar tasks or state when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_team_details (Grade: A)
Get detailed information about a team: members (with roles), linked projects, and recent activity. Use team name or ID.
| Name | Required | Description | Default |
|---|---|---|---|
| team_id | No | Team ID (optional if team_name is provided) | |
| team_name | No | Team name (optional if team_id is provided) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavior. It states it retrieves details and implies a read operation, but lacks details on permissions, rate limits, pagination, or whether the data is live/cached. The description is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no wasted words. It front-loads the purpose and immediately lists key content, then clarifies parameter usage. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the essential aspects: what the tool does and what data it returns (members, roles, projects, activity). Without an output schema, it gives a clear picture of the return content. However, it lacks details on the response structure and does not mention the tool's read-only nature; this is acceptable for a simple get tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%; both parameters have descriptions. The description adds 'Use team name or ID' but this is already implied by the schema's optionality. No new semantic meaning beyond the schema is provided, so baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Get), the resource (team details), and lists specific data included: members with roles, linked projects, and recent activity. This distinguishes it from sibling tools like list_teams which likely provide only basic info.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'Use team name or ID' but does not provide explicit guidance on when to use this tool versus alternatives (e.g., list_teams for overview, get_project_status for project details). No when-not-to-use or exclusions are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
invite_user (Grade: A)
Invite a new member to your organization (admin/owner only).
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Full name | |
| role | No | Role: member or admin | member |
| email | Yes | User email address | |
| skills | No | Skills list (optional) | |
| positions | No | Position titles to assign (optional — resolves by name) | |
| team_name | No | Team name to add user to (optional) | |
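A sketch of building the invitation payload, enforcing the `member`/`admin` role enum and the `member` default from the parameter table. The helper and the client-side validation are illustrative assumptions, not part of the server's API.

```python
VALID_ROLES = {"member", "admin"}  # from the role parameter description

def build_invite_args(name, email, role="member", skills=None,
                      positions=None, team_name=None):
    """Build an invite_user payload (hypothetical helper).

    name and email are required; role defaults to 'member'."""
    if role not in VALID_ROLES:
        raise ValueError(f"role must be one of {sorted(VALID_ROLES)}")
    args = {"name": name, "email": email, "role": role}
    # Attach the optional fields only when provided.
    for key, value in (("skills", skills), ("positions", positions),
                       ("team_name", team_name)):
        if value is not None:
            args[key] = value
    return args
```

Whether the server sends an invitation email or creates a pending membership is not disclosed, as the review notes below.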
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It only states the action and access level, omitting details like side effects (e.g., email sending, pending state) or error conditions (e.g., duplicate email).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no filler. It front-loads the core purpose and constraint.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given six parameters and no output schema, the description is too minimal. It lacks details on return values, validation, or process flow, which are important for a multi-parameter invitation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the input schema fully describes each parameter. The description adds no extra meaning, meeting the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action—inviting a new member to the organization—and adds an access constraint (admin/owner only). It distinguishes from sibling tools like accept_org_invite and remove_user_from_org.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (to invite new members) and specifies the required role, but does not explicitly discuss when not to use or compare to alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
link_task_session (Grade: B)
Link a work session to a task. This records that the session was used to work on the task, connecting them in the knowledge graph.
| Name | Required | Description | Default |
|---|---|---|---|
| task_id | Yes | Task ID to link | |
| session_id | Yes | Session ID to link to the task | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It states the action is recording in the knowledge graph, but does not mention idempotency, error handling (e.g., duplicate link), required permissions, or if the task/session IDs must be valid. For a mutation tool, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two clear sentences with no redundancy. It is front-loaded with the primary action and provides essential context without unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (2 required string parameters, no output schema), the description covers the core behavior. It could mention idempotency or validation, but is largely complete for a straightforward linking operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with clear descriptions for both parameters ('Task ID to link' and 'Session ID to link to the task'). The description adds minimal semantic value beyond the schema by noting the knowledge graph connection. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('link') and the two resources ('work session' and 'task'), with additional context about recording in the knowledge graph. It distinguishes itself from siblings like 'assign_task' or 'add_task_dependency' by focusing on session-task linking, though the differentiation is implicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives (e.g., assign_task, log_activity). It does not specify prerequisites, such as whether the task or session must already exist, or when this linking is appropriate compared to other actions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
link_team_project (Grade: B)
Link a project to a team. Find both by name or ID.
| Name | Required | Description | Default |
|---|---|---|---|
| team_id | No | Team ID (optional if team_name is provided) | |
| team_name | No | Team name (optional if team_id is provided) | |
| project_id | No | Project ID (optional if project_name is provided) | |
| project_name | No | Project name (optional if project_id is provided) | |
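This tool has two independent either/or identifier pairs. A sketch of the implied validation, assuming (from the "optional if ... is provided" descriptions) that each entity needs at least one identifier; the helper name is hypothetical.

```python
def build_link_team_project_args(team_id=None, team_name=None,
                                 project_id=None, project_name=None):
    """Build a link_team_project payload (hypothetical helper).

    Each of the two entities must be identified by ID or by name."""
    if team_id is None and team_name is None:
        raise ValueError("supply team_id or team_name")
    if project_id is None and project_name is None:
        raise ValueError("supply project_id or project_name")
    # Keep only the identifiers that were actually supplied.
    return {k: v for k, v in {
        "team_id": team_id, "team_name": team_name,
        "project_id": project_id, "project_name": project_name,
    }.items() if v is not None}
```

Whether linking an already-linked pair errors or is a no-op is undocumented, which is the disclosure gap the review raises below.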
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It merely states the action without disclosing side effects, permissions required, error conditions (e.g., duplicate link, missing entities), or whether the operation is reversible. This minimal disclosure is insufficient for safe invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise at two sentences, front-loading the purpose and a key detail about the lookup method. The extreme brevity sacrifices substantive guidance, but that small trade-off for conciseness is acceptable, yielding a high but not perfect score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of annotations and output schema, the description is incomplete. It fails to clarify the result of linking (e.g., confirmation, returned data), preconditions, or relationship to other entities. The agent may need to infer too much for reliable use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides high-coverage descriptions for all parameters, including optionality and mutual exclusivity. The tool description adds no additional semantic value beyond 'Find both by name or ID', which weakly reinforces the schema's information. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The verb 'Link' clearly indicates the action of associating a project with a team, and the resource is explicitly stated as 'project to a team'. The description distinguishes it from sibling tools like 'unlink_team_project' and 'create_project', making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs. alternatives. It does not mention prerequisites (e.g., team and project must exist), nor does it advise against using when the link already exists or when creation is needed. The agent is left to infer usage context from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_colleagues (Grade: A)
List colleagues in your organization. Shows name, email, role, and optionally filters by team. Admins see all users; members see users in their own teams.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum results | 20 |
| team_name | No | Filter by team name (optional — shows only members of that team) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that admins see all users while members see only their own teams, which is a key behavioral trait. It does not explicitly state it is a read-only operation, but the context suggests no side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that are front-loaded and concise. Every sentence provides unique value: the first states the basic function, the second adds filtering and role-based visibility. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a list tool with no output schema, the description covers the return fields (name, email, role), optional filter, and access control. It is complete enough for an agent to understand what to expect and when to use it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description mentions filtering by team_name, which aligns with the schema. It does not add extra meaning beyond the schema's descriptions for 'limit' or 'team_name'. The description is consistent but not additive.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists colleagues in the organization, specifies the information shown (name, email, role), and mentions optional filtering by team. It also distinguishes from siblings by noting admin vs member visibility, which is unique to this tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (to list organization members) and provides context on visibility based on role. It mentions optional filtering by team but does not explicitly compare with sibling tools like 'list_teams' or 'get_team_details'. However, the guidance is clear enough for typical usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_notifications (Grade: A)
List your notifications — task assignments, status changes, sprint updates, invites, and more. Returns unread notifications by default.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum results | 20 |
| unread_only | No | Only return unread notifications | true |
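A minimal sketch of the argument defaults, mirroring the schema (limit 20, unread_only true). The helper and the positive-limit check are illustrative assumptions; the server's actual limit bounds are not documented.

```python
def build_list_notifications_args(limit=20, unread_only=True):
    """Build a list_notifications payload (hypothetical helper).

    Defaults mirror the schema: limit 20, unread-only filtering on."""
    if limit < 1:
        raise ValueError("limit must be positive")
    return {"limit": limit, "unread_only": unread_only}

# To include notifications already read, override the default:
# build_list_notifications_args(unread_only=False)
```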
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears full responsibility for behavioral disclosure. It mentions the default unread filter but omits details like whether the tool is read-only, if it marks notifications as read, pagination behavior, or ordering. This leaves significant gaps for the agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that conveys the tool's purpose and default behavior without any superfluous words. Every piece of information is valuable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with two optional parameters and no output schema, the description covers the key aspects: what it returns and default filtering. The limit parameter is documented in the schema. Minor gaps (e.g., ordering, whether only user's own notifications) prevent a perfect score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Both parameters have descriptions in the input schema (100% coverage). The description's mention of 'unread by default' adds confirming context for the unread_only parameter but does not meaningfully augment the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'list' and resource 'notifications', and enumerates specific types (task assignments, status changes, sprint updates, invites). It also specifies the default behavior of returning unread notifications, making the purpose precise and distinct from sibling tools like mark_notification_read.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (to list notifications) but does not explicitly state when to avoid or compare with alternatives such as list_pending_invites or mark_notification_read. No usage context or exclusions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_pending_invites (A)
List pending organization invites for the current user.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description implies a read-only operation ('list'), but with no annotations to lean on, it never explicitly states authentication requirements, side effects, or empty-response handling. Adequate for a simple list tool, but it could be more explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, 8 words, immediately conveys purpose. No redundant information, excellent front-loading.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has no output schema, but description does not mention what the returned list contains (e.g., invite details structure). For a list command, missing return value description reduces completeness for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist; schema coverage is 100%. Baseline score of 4 applies because description adds context (resource and scope) beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action (list), resource (pending organization invites), and scope (for the current user). It immediately distinguishes from sibling invite-related tools like accept_org_invite and invite_user.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs alternatives. The scope of 'for the current user' implies it's for personal pending invites, but does not mention that it's read-only or that it should be used before accepting invitations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_positions (A)
List all job positions/titles in your organization with member counts.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It explicitly states 'list all' and 'with member counts', making the read-only behavior clear, though no additional behavioral details are given.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that efficiently conveys the tool's purpose and output, with no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with no parameters and no output schema, the description completely covers what the tool does and what it returns (member counts).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters (100% coverage), so baseline is 4. The description adds no parameter info, but none are needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists all job positions with member counts, distinguishing it from sibling tools like create_position, delete_position, and update_position which perform different actions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly suggests using it to get an overview of positions, but lacks explicit guidance on when not to use it or alternatives, though the simplicity of the tool reduces the need.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_project_members (A)
List all members involved in a project. Includes users who have sessions in the project and users from teams linked to the project.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | No | Project ID | |
| project_name | No | Project name (alternative to project_id) | |
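The project_id/project_name pair is an either-or reference. A hedged sketch of how a client might normalize it — rejecting the both/neither cases is an assumed convention, since the schema does not specify that behavior:

```python
def resolve_project_ref(project_id=None, project_name=None):
    # project_name is documented as an alternative to project_id; rejecting
    # the ambiguous cases here is an assumed client-side convention, not
    # documented server behavior.
    if project_id is not None and project_name is not None:
        raise ValueError("pass project_id or project_name, not both")
    if project_id is not None:
        return {"project_id": project_id}
    if project_name is not None:
        return {"project_name": project_name}
    raise ValueError("pass project_id or project_name")
```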
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description adds behavioral context by explaining that members come from both sessions and linked teams. However, it does not disclose read-only nature, authentication requirements, or potential side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, using exactly two sentences with no redundant or unclear language. It front-loads the purpose and efficiently adds scope details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a list tool with no output schema, the description could mention the return format or typical structure. It covers the scope but lacks details on what information each member entry contains.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters (project_id and project_name). The description adds no additional information about parameter usage, format, or constraints beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists all members involved in a project, specifying it includes users from sessions and linked teams. This distinguishes it from sibling tools like list_projects (projects), list_project_teams (only teams), and list_teams (all teams).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance is provided. The description implies it should be used to get project members, but does not mention alternatives or contexts where other tools are more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_projects (A)
List projects in your organization. No parameters needed — just call it to see all your projects.
| Name | Required | Description | Default |
|---|---|---|---|
| org_id | No | Optional organization ID (uses default org if omitted) | |
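The org_id fallback can be sketched client-side; the helper name is illustrative, not part of the ContextLayer API:

```python
def build_list_projects_args(org_id=None):
    # Per the schema, org_id may be omitted; the server then uses the
    # caller's default organization, so an empty payload is valid.
    return {} if org_id is None else {"org_id": org_id}
```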
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It fails to disclose behavioral traits such as pagination, result limits, or the scope when org_id is omitted (though schema says it uses default org). The description is too terse for a complete behavioral picture.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences, clearly front-loaded with the purpose, and contains no unnecessary words. It is concise while still being informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and a simple list tool, the description should ideally mention return format or limitations (e.g., pagination). It covers the basic purpose but lacks details about what is returned, making it adequate but not complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter (org_id) already documented as optional. The description adds minimal extra meaning beyond the schema, just stating no parameters are needed, which matches the optional nature. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'List projects in your organization', specifying the verb (list) and resource (projects). It effectively distinguishes from sibling tools like create_project, update_project, and delete_project.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says 'No parameters needed — just call it to see all your projects', which indicates it requires no mandatory input. However, it does have an optional org_id parameter, and the description doesn't explicitly contrast with sibling list tools or clarify when an org_id might be needed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_project_teams (B)
List teams linked to a project.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | No | Project ID | |
| project_name | No | Project name (alternative to project_id) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It only states 'List teams linked to a project' but omits details like whether it is read-only, what happens if the project doesn't exist, or any pagination/limits. This is insufficient for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no superfluous words. It is front-loaded with the action and resource, making it highly concise and scannable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the tool is simple (list with two optional parameters), the description does not mention the output format or any implicit constraints (e.g., requires project existence). Given no output schema, the description could provide more context to ensure correct usage, but it remains minimally adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both parameters ('Project ID' and 'Project name (alternative to project_id)'). The description adds no additional meaning beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List') and the resource ('teams linked to a project'), distinguishing it from sibling tools like 'list_teams' (all teams) and 'link_team_project' (linking action). It is specific and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, such as when to provide project_id vs project_name, or under what conditions the tool is appropriate. The description lacks usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_sessions (A)
List your recent work sessions. Optionally filter by project name or ID. Shows task type, status, and event count.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum results (default 10) | |
| project_id | No | Filter by project ID (optional) | |
| project_name | No | Filter by project name (optional, alternative to project_id) | |
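A minimal sketch of assembling list_sessions arguments. Treating the two project filters as mutually exclusive is an assumption; the schema only calls them alternatives:

```python
def build_list_sessions_args(limit=10, project_id=None, project_name=None):
    # limit defaults to 10 per the schema. Rejecting both filters at once
    # is an assumed client-side convention, not documented server behavior.
    if project_id is not None and project_name is not None:
        raise ValueError("use project_id or project_name, not both")
    args = {"limit": limit}
    if project_id is not None:
        args["project_id"] = project_id
    if project_name is not None:
        args["project_name"] = project_name
    return args
```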
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must cover behavioral traits fully. Only states output fields (task type, status, event count) and filter options. No mention of recency definition, pagination, destructive vs read-only, or auth requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with primary action. Every sentence adds value: main action, optional filters, output fields. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with no output schema, description covers purpose, filtering, and output fields. Lacks clarification on sorting order, limit default, or boundary behavior, but sufficient for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all 3 parameters. Description only rephrases filtering by project name/ID, adding no new semantic nuance beyond what schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'List your recent work sessions' with specific verb and resource. It distinguishes from siblings like 'get_session_details' by indicating it lists multiple sessions with optional filters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Says optionally filter by project name or ID, which gives context for when to use filters. Does not explicitly exclude scenarios or name alternatives like 'get_session_details' for single-session detail.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_sprints (A)
List sprints for a project. Shows sprint number, name, goal, status (current/completed), and task count.
| Name | Required | Description | Default |
|---|---|---|---|
| project_id | No | Project ID (alternative to project_name) | |
| project_name | No | Project name | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. The description implies a read-only operation (list), but does not explicitly state that it does not modify data. A more explicit statement would improve trust.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, consisting of two sentences with no redundant information. Every word serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple listing tool with two optional parameters and no output schema, the description covers the key information: what it returns and for what scope. Minor gaps exist (e.g., behavior when both params provided, ordering, pagination), but overall it is reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with both parameters described. The description does not add additional meaning beyond the schema, but provides a useful summary of output fields. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists sprints for a project and specifies the fields returned (number, name, goal, status, task count). It distinguishes from sibling tools like create_sprint or move_task_to_sprint, which have different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for viewing sprints but does not explicitly state when to use this tool versus alternatives, nor does it provide exclusions or prerequisites. The context of listing versus creating is implicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_tasks (A)
List tasks. Filter by project, status (todo/in_progress/done/cancelled), sprint number, or show only your tasks. Shows title, status, priority, assignees, sprint, and linked sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum results (default 20) | |
| status | No | Filter by status: todo, in_progress, done, cancelled (optional) | |
| mine_only | No | Filter tasks assigned to me only (default: false) | |
| project_name | No | Filter by project name (optional) | |
| sprint_number | No | Filter by sprint number (optional) | |
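The status enum from the schema lends itself to a pre-flight check before the call is sent. This validation is a client-side convention, not server behavior, and the helper name is illustrative:

```python
VALID_TASK_STATUSES = {"todo", "in_progress", "done", "cancelled"}

def build_list_tasks_args(status=None, mine_only=False, limit=20,
                          project_name=None, sprint_number=None):
    # Validate the status enum documented in the schema before sending.
    if status is not None and status not in VALID_TASK_STATUSES:
        raise ValueError(f"status must be one of {sorted(VALID_TASK_STATUSES)}")
    args = {"limit": limit, "mine_only": mine_only}
    if status is not None:
        args["status"] = status
    if project_name is not None:
        args["project_name"] = project_name
    if sprint_number is not None:
        args["sprint_number"] = sprint_number
    return args
```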
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes listing with filters but lacks details on pagination behavior (though limit parameter exists), ordering, rate limits, or any side effects. This is a significant gap for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the core verb and resource. Every sentence adds value: the first states purpose and filters, the second describes output. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, so the description compensates by listing the output fields (title, status, priority, assignees, sprint, linked sessions). It does not mention pagination or total count, but for a filtered list tool with a limit parameter, this is reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description lists filter options and output fields but does not add meaning beyond the schema's own parameter descriptions. It provides context on what fields are shown but lacks parameter-specific elaboration.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'List tasks' as the verb-resource pair and enumerates filtering options and output fields. It distinguishes the tool from siblings like create_task, update_task, and delete_task; the sibling search_entities overlaps but is broader in scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing and filtering tasks but does not explicitly state when to use this tool versus alternatives (e.g., search_entities). No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_teams (A)
List all teams in your organization. Shows team name, member count, project count, and description.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It indicates a read operation with no side effects, but does not explicitly state read-only, authentication needs, or rate limits. The behavior is straightforward, but the description lacks explicit transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, short sentence that front-loads the core purpose. Every word is informative, with no redundancy or waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema), the description adequately covers what it does and what fields are returned. However, the lack of an output schema means the agent must infer the exact format; the description partially compensates by naming the fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are zero parameters, so the baseline is 4. The description correctly adds no parameter information because none are needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists all teams in the organization, specifying the fields shown (name, member count, project count, description). It distinguishes itself from create, update, and delete team tools by being a read-only list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing a list of all teams, but does not explicitly differentiate from related tools like get_team_details (for a single team) or list_project_teams (filtered by project). No exclusions or alternatives are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
log_activity (A)
Log an activity or event. If no session_id is provided, a session is automatically created or reused for the project. Just pass project_name and content — no need to start a session first.
| Name | Required | Description | Default |
|---|---|---|---|
| source | No | Source of the event (e.g. 'claude-code', 'cursor', 'user') | |
| content | Yes | Free-form content describing the activity | |
| event_type | Yes | Event type: code_change, file_edit, command_run, decision, note, conversation, discovery | |
| project_id | No | Project ID (optional if project_name is provided) | |
| session_id | No | Session ID (optional — if omitted, a session is auto-created or reused for the project) | |
| project_name | No | Project name (optional, alternative to project_id — used for auto-session) | |
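A sketch of the auto-session contract: content and event_type are required, and when session_id is omitted the server needs a project reference to create or reuse a session. That last requirement is inferred from the description, not stated as an explicit rule, and the helper is hypothetical:

```python
ACTIVITY_EVENT_TYPES = {"code_change", "file_edit", "command_run",
                        "decision", "note", "conversation", "discovery"}

def build_log_activity_args(content, event_type, session_id=None,
                            project_id=None, project_name=None, source=None):
    # event_type must be one of the schema's enumerated values.
    if event_type not in ACTIVITY_EVENT_TYPES:
        raise ValueError(f"unknown event_type: {event_type}")
    # Assumption: without session_id, auto-session needs a project reference.
    if session_id is None and project_id is None and project_name is None:
        raise ValueError("without session_id, pass project_id or project_name")
    args = {"content": content, "event_type": event_type}
    for key, val in [("session_id", session_id), ("project_id", project_id),
                     ("project_name", project_name), ("source", source)]:
        if val is not None:
            args[key] = val
    return args
```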
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses automatic session creation/reuse, which is a key behavioral trait. However, it does not specify whether the operation is idempotent, what permissions are needed, or what the response looks like. Some gaps remain.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words, front-loaded with the primary purpose. Every sentence adds value: first defines the tool, second explains the key convenience behavior. Excellent structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity and no output schema, the description covers the main usage scenario. It explains auto-session, minimal parameters, and that no prior session is needed. However, it omits what the tool returns or error handling, which might be needed for full context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with good parameter descriptions. The description adds value by explaining the interplay: 'just pass project_name and content' clarifies minimal required inputs beyond required event_type. This provides meaning beyond the raw schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool logs an activity or event, distinguishing it from session-specific tools like start_session. It specifies the verb 'log' and resource 'activity/event', but does not explicitly differentiate from sibling tool log_trace, which may have a similar purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use this tool (for logging without needing to start a session first) and the automatic session handling. It implies an alternative (manual session management using start_session) but does not explicitly state it. Clear context for use, but no explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
log_trace (B)
Log a granular agent action trace. High-frequency, no LLM processing.
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | Status: started, completed, failed (default: completed) | |
| summary | Yes | Short description of the action | |
| task_id | No | Task ID to link this trace to (optional — auto-links session to task) | |
| metadata | No | Free-form JSON metadata (optional) | |
| trace_id | No | Logical trace group ID (auto-generated if omitted) | |
| project_id | No | Project ID (optional, alternative to project_name) | |
| session_id | No | Session ID (optional — auto-resolved from project if omitted) | |
| trace_type | Yes | Trace type: tool_call, file_read, file_write, code_search, code_edit, api_call, thinking, error | |
| duration_ms | No | Duration in milliseconds (optional) | |
| project_name | No | Project name (optional, used for auto-session resolution) | |
| parent_trace_id | No | Parent trace ID for nesting (e.g. tool_call -> file_reads) |
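The table above maps onto a very small required surface: only `trace_type` and `summary` are mandatory. A minimal sketch of the `arguments` payload, with nesting shown via `parent_trace_id` (all values are illustrative):

```python
# Minimal log_trace arguments: two required fields plus one optional timing.
trace_args = {
    "trace_type": "code_search",  # must be one of the enumerated types
    "summary": "Grepped src/ for TODO markers",
    "duration_ms": 42,            # optional
}

# Nesting: a child trace points at its parent via parent_trace_id,
# e.g. a tool_call parent with file_read children.
child_args = {
    "trace_type": "file_read",
    "summary": "Read src/auth.py",
    "parent_trace_id": "trace-abc123",  # illustrative ID
}
```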
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry the full burden of behavioral disclosure. It mentions high-frequency and no LLM processing, hinting at performance characteristics, but does not disclose side effects, error behavior, persistence, or required permissions. This is insufficient for a logging tool with 11 parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at two sentences, front-loading the purpose and a key behavioral trait with no unnecessary words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 11 parameters, no output schema, and no annotations, the description is too terse. It lacks context on how to use parameters effectively, error handling, auto-generation behavior, and typical usage patterns. A more complete description would improve agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so all parameters have descriptions in the input schema. The description adds no additional meaning beyond the schema, keeping the score at the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool logs a granular agent action trace, specifying it is high-frequency and involves no LLM processing. This distinguishes it from other logging tools that may involve LLM reasoning, though it does not explicitly differentiate from the sibling tool 'log_traces_batch'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for high-frequency logging without LLM overhead, but does not specify when to choose this over alternatives like 'log_traces_batch' or 'log_activity'. No explicit when-not-to-use or exclusion criteria are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
log_traces_batch (C)
Log multiple traces in a single batch.
| Name | Required | Description | Default |
|---|---|---|---|
| traces | Yes | Array of traces to log in a single batch |
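Since the schema only specifies a `traces` array, each element presumably mirrors a single `log_trace` payload. A hedged sketch — the element shape is an assumption, as it is not documented here:

```python
# Hypothetical batch payload; each element assumed to match log_trace's fields.
batch_args = {
    "traces": [
        {"trace_type": "file_read", "summary": "Read config.yaml"},
        {"trace_type": "code_edit", "summary": "Patched retry logic"},
    ],
}
```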
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description lacks any behavioral details such as error handling, idempotency, rate limits, or side effects. The burden is on the description, which is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence. It is concise, though it could absorb a little more detail without becoming bloated.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema or annotations, the description should provide more context about return values, failure modes, or when to batch. It is incomplete and leaves the agent with insufficient information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description does not need to add parameter details. The description adds no value beyond the schema, earning a baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool logs multiple traces in a single batch, distinguishing it from the sibling tool 'log_trace' which logs a single trace. However, it does not specify what types of traces or provide additional context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is given on when to use this batch tool versus the singular 'log_trace'. The agent has no criteria for choosing one over the other.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mark_notification_read (A)
Mark a notification as read, or mark all notifications as read at once.
| Name | Required | Description | Default |
|---|---|---|---|
| mark_all | No | Set to true to mark ALL notifications as read | |
| notification_id | No | Notification ID to mark as read (omit to use mark_all) |
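The two optional parameters encode two mutually exclusive modes, which a sketch makes concrete (IDs are illustrative):

```python
# Mode 1: mark a single notification by ID.
single = {"notification_id": "ntf_123"}

# Mode 2: mark everything; notification_id is omitted.
everything = {"mark_all": True}
```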
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full responsibility. It only states the action without disclosing side effects, idempotency, or permission requirements. For a mutation tool, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that captures the two modes of operation. It is concise and front-loaded with the action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With only two parameters and no output schema, the description is functional but could be more complete by mentioning the result of the operation or error conditions. It is minimally adequate but not thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description does not add meaning beyond the parameter descriptions already in the schema. The overall description merely restates the parameter options, so no added value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'mark' and resource 'notification', and distinguishes between marking a single notification and all notifications at once. It differentiates from sibling tools like list_notifications, which only list, ensuring no confusion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly indicates when to use the tool: when a notification needs to be marked as read, either individually or in bulk. It does not explicitly state alternatives or when not to use it, but since no other marking tool exists among the siblings, the usage is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
move_task_to_sprint (A)
Move a task to a different sprint. Specify the target sprint by number or ID.
| Name | Required | Description | Default |
|---|---|---|---|
| task_id | Yes | Task ID to move | |
| sprint_id | No | Target sprint ID (alternative to sprint_number) | |
| project_name | No | Project name (needed if using sprint_number) | |
| sprint_number | No | Target sprint number |
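The two addressing modes implied by the table can be sketched as follows (IDs and numbers are illustrative):

```python
# Addressing the target sprint directly by ID.
by_id = {"task_id": "tsk_42", "sprint_id": "spr_7"}

# Or by number, which (per the parameter table) also needs project_name.
by_number = {
    "task_id": "tsk_42",
    "sprint_number": 3,
    "project_name": "contextlayer-demo",
}
```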
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It only says 'Move a task' which implies mutation but does not mention return value, error conditions, side effects (e.g., if task is already in a sprint), or any constraints. This is insufficient for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two short, clear sentences with no extraneous information. Every word contributes meaning, making it efficient for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having 4 parameters and no output schema or annotations, the description is too brief. It lacks information on how sprint_number and project_name interact, whether a task must already be in a sprint, and what the response contains. Given the tool's complexity, this is inadequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers all 4 parameters with descriptions (100% coverage). The description adds 'Specify the target sprint by number or ID', which is already inferable from the schema. It does not clarify the relationship between sprint_number and project_name, nor provide format details beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool moves a task to a different sprint, using the verb 'move' and resource 'task'. This distinguishes it from siblings like assign_task (assigns to user), update_task (modifies fields), and create_task (creates new).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies two methods to target a sprint (by number or ID), implying usage when you have either identifier. However, it does not explicitly state when to use this tool versus alternatives like update_task (which might also affect sprint), nor does it mention prerequisites like needing project_name when using sprint_number.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remove_task_dependency (A)
Remove a dependency between tasks. This unblocks the task from the other task.
| Name | Required | Description | Default |
|---|---|---|---|
| task_id | Yes | The task that was blocked | |
| depends_on_task_id | Yes | The task that was blocking it |
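The direction of the dependency edge matters here, which a one-line sketch pins down (IDs are illustrative):

```python
# Removes the edge "tsk_42 depends on tsk_41", i.e. unblocks tsk_42.
args = {"task_id": "tsk_42", "depends_on_task_id": "tsk_41"}
```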
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, and the description only states the basic action without disclosing side effects, prerequisites, or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise, one sentence with verb and object front-loaded, no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple tool with two parameters and no output schema, but lacks detail on success/error conditions and typical usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds no extra meaning beyond the schema's parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'remove' and the resource 'dependency between tasks', and distinguishes from sibling 'add_task_dependency'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for removing dependencies but does not explicitly state when to use vs alternatives or provide exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remove_team_member (C)
Remove a member from a team. Find the team by name or ID, and the user by name or email.
| Name | Required | Description | Default |
|---|---|---|---|
| user | Yes | User name or email to remove | |
| team_id | No | Team ID (optional if team_name is provided) | |
| team_name | No | Team name (optional if team_id is provided) |
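The flexible addressing described above can be sketched in two equivalent forms (names, emails, and IDs are illustrative):

```python
# The team can be addressed by name or by ID; the user by name or email.
by_name = {"user": "ada@example.com", "team_name": "Platform"}
by_id = {"user": "Ada Lovelace", "team_id": "team_9"}
```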
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description only implies a destructive action by saying 'Remove'. It does not disclose side effects, authorization requirements, or error conditions. The behavioral impact on the team or user is not described.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, one sentence long, with no redundant information. It front-loads the action and method.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description lacks information about return values, error handling, or confirmation of removal. For a destructive operation, more context is desirable to help the agent understand the outcome.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are already described. The description adds that the team can be found by name or ID and user by name or email, but this mostly restates the schema. It provides minimal additional value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action of removing a member from a team, and explains how to identify the team and user. It distinguishes this tool from siblings like add_team_member or delete_team, though it could be more precise about the exact nature of removal.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as delete_team or remove_user_from_org. The description does not mention prerequisites or conditions that should be met before calling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remove_user_from_org (A)
Remove a user from the current organization without deleting their account. Admin/owner only.
| Name | Required | Description | Default |
|---|---|---|---|
| user | Yes | User name or email to remove from the organization |
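With a single required parameter, the call is trivially small; the only subtlety is that the one field accepts either identifier form (value is illustrative):

```python
# The single 'user' field accepts either a display name or an email.
args = {"user": "ada@example.com"}  # caller must hold the admin/owner role
```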
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description clarifies the user account is not deleted, which is helpful. However, lacks details on side effects (e.g., removal from teams/projects) or re-invitation possibility. For a privileged action, more transparency would be beneficial.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences: action and condition. No filler. Front-loaded with the operation and key differentiator.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple structure (1 required param, no output schema, no annotations), description covers the essential: what it does, what it doesn't do (no account deletion), and who can use it. Could mention what response to expect, but not critical for this action.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only one parameter 'user' with schema description. The tool description doesn't add further semantics beyond the schema, which already covers 100% of parameters. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb+resource: 'Remove a user from the current organization'. Distinguishes from sibling 'delete_user' by specifying 'without deleting their account'. Also states admin/owner restriction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States 'Admin/owner only', providing a clear prerequisite. Implicitly differentiated from 'delete_user' and 'invite_user' through description, but lacks explicit when-not or alternative references.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_entities (B)
Search entities in the knowledge graph by name. Finds people, companies, documents, concepts, tools, etc. that have been mentioned in your activities.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum results (default 10) | |
| query | Yes | Search query string (matches entity names) | |
| entity_type | No | Entity type filter: person, company, document, concept, tool, template, process |
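A typical filtered query, per the table above, might look like this (values are illustrative):

```python
search_args = {
    "query": "acme",           # matched against entity names
    "entity_type": "company",  # optional filter from the documented enum
    "limit": 5,                # default is 10 if omitted
}
```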
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds context about searching by name and scoping to the user's activities, but lacks details on match behavior (case sensitivity, partial matching), ordering, pagination, or what happens with no results. Since no annotations are provided, more behavioral disclosure would be beneficial.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loading the core action and immediately providing examples. Every word serves a purpose with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with no output schema, the description should hint at result format or behavior. It gives a good overview of scope and entity types but misses details on matching and ordering, leaving gaps for an agent to infer correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all three parameters. The description adds marginal value by clarifying that 'query' matches entity names and listing example types, but does not substantially enhance understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches entities in a knowledge graph by name, listing example types. It distinguishes from sibling tools by using 'search' versus 'list' verbs, but does not explicitly differentiate from 'find_documents' which is a more specific search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'find_documents' or various list tools. It only states what it finds, not when it is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_context_routing (A)
Set the context routing mode for the organization. Requires admin role. Options: 'keyword_llm' (balanced), 'keyword_only' (fastest), 'llm_only' (most accurate).
| Name | Required | Description | Default |
|---|---|---|---|
| mode | Yes | Routing mode: 'keyword_llm' (keyword + LLM fallback), 'keyword_only' (fastest), or 'llm_only' (most accurate) |
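Since `mode` is a three-value enum, a caller can validate it locally before the call; a minimal sketch:

```python
VALID_MODES = {"keyword_llm", "keyword_only", "llm_only"}

# keyword_only = fastest; keyword_llm = balanced; llm_only = most accurate.
args = {"mode": "keyword_only"}
assert args["mode"] in VALID_MODES
```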
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations, so the description must carry all behavioral disclosure. It states admin requirement and mode options but does not mention side effects (e.g., immediate effect, session impact, rollback possibility). This is a gap for a configuration-changing tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. Purpose, requirement, and options are presented directly and succinctly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 param, no output schema), the description covers purpose, role, and option semantics. Missing a note on return value (e.g., success indication) but overall adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% and the description repeats the same enum values (keyword_llm, keyword_only, llm_only) with identical explanations. No added meaning beyond the input schema, meeting the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Set the context routing mode for the organization' with specific verb and resource. It distinguishes the three modes with trade-offs (balanced, fastest, most accurate), differentiating from sibling tools like get_context_routing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly requires admin role, which is a key usage condition. Does not provide when-to-use vs alternatives, but given the tool's specificity, it is clear this is for configuring routing, not for reading (handled by get_context_routing).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
setup_organization (A)
Set up your organization in one step (admin only). Use this for onboarding.
| Name | Required | Description | Default |
|---|---|---|---|
| links | No | Links: [{"label": "...", "url": "...", "type": "website|repository|documentation|slack|jira|notion|figma|other"}] | |
| teams | No | Teams to create: [{"name": "...", "description": "..."}] | |
| projects | No | Projects to create: [{"name": "...", "description": "..."}] | |
| objectives | Yes | Organization objectives | |
| description | Yes | Organization description | |
| business_model | Yes | Business model description | |
| custom_context | No | Custom context for AI agents (optional) |
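The nested shapes in the table are the main hazard for a first-time caller; a sketch of a full onboarding payload (all values are illustrative):

```python
setup_args = {
    # Required free-text fields.
    "description": "B2B SaaS for context infrastructure",
    "objectives": "Ship v1 and onboard three design partners",
    "business_model": "Monthly subscription per seat",
    # Optional nested collections, shaped per the table above.
    "teams": [{"name": "Platform", "description": "Core services"}],
    "projects": [{"name": "contextlayer-demo", "description": "Pilot project"}],
    "links": [
        {"label": "Docs", "url": "https://example.com/docs", "type": "documentation"}
    ],
}
```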
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It states 'admin only' and 'in one step', implying broad scope. However, it does not detail what actions the tool performs (e.g., creating teams, projects, links) or warn about destructive behavior or idempotency. The schema partially fills this gap but the description lacks explicit behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two short sentences with no unnecessary words. It front-loads the core purpose and usage context, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has 7 parameters including nested objects, no output schema, and no annotations. The description only provides a high-level purpose ('set up your organization') and a usage hint ('onboarding'). It does not explain what the tool returns, whether it is idempotent, or what prerequisites exist (e.g., whether the org must be empty). This is insufficient for a complex setup tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema parameter descriptions cover 100% of the 7 parameters, each with clear descriptions (e.g., 'Teams to create', 'Organization description'). The tool description adds no additional parameter-level meaning beyond what the schema already provides, so the score is at baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Set up your organization in one step'. It specifies the verb 'setup' and the resource 'organization'. It also mentions 'admin only' and 'onboarding', which differentiate it from sibling tools that handle individual operations like create_team or create_project.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says 'Use this for onboarding', giving a clear usage context. It also notes 'admin only', indicating required privileges. However, it doesn't explicitly mention when not to use this tool or point to alternatives like individual creation tools for incremental changes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
start_session (A)
Start a work session in a project. If you already have an active session in the same project, it will be reused instead of creating a new one. Provide the project name (or ID) and task type (coding, review, planning, debugging, research, meeting, other).
| Name | Required | Description | Default |
|---|---|---|---|
| source | No | Source client identifier (e.g. 'claude-code', 'claude-web', 'chatgpt', 'cursor'). Helps track which tool created the session. | |
| task_type | Yes | Type of task: coding, review, planning, debugging, research, meeting, other | |
| project_id | No | Project ID to start the session in (optional if project_name provided) | |
| project_name | No | Project name (optional, alternative to project_id) |
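A minimal sketch of an arguments payload for this tool, with an illustrative project name and a check against the documented task_type enum (the project name and source value are assumptions, not real data):

```python
# Illustrative start_session arguments; validated against the documented enum.
VALID_TASK_TYPES = {"coding", "review", "planning", "debugging",
                    "research", "meeting", "other"}

args = {
    "project_name": "Website Redesign",  # alternative to project_id
    "task_type": "coding",               # must be one of VALID_TASK_TYPES
    "source": "claude-code",             # optional client identifier
}

# either project_name or project_id must be supplied
assert "project_name" in args or "project_id" in args
assert args["task_type"] in VALID_TASK_TYPES
```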
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must cover behavioral traits. It mentions idempotency (reuse), but lacks details on side effects, authorization needs, or what happens if the project does not exist. Some gaps remain.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, well-structured, and front-loaded with the verb and purpose, with essentially no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description does not mention return values. It also omits error conditions (e.g., invalid project). Adequate but not fully comprehensive for a session-starting tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description restates that project_name/project_id and task_type are needed. It adds the idea that project_name and project_id are alternatives, but does not mention the source parameter. Value beyond schema is moderate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Start a work session in a project.' It distinguishes from siblings like end_session (which ends a session) and get_session_details (which retrieves details).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the reuse behavior: if an active session exists in the same project, it will be reused. This gives implicit guidance on when to use the tool, though it does not explicitly state when not to use it or list alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
unassign_task (B)
Remove a user from a task's assignees.
| Name | Required | Description | Default |
|---|---|---|---|
| task_id | Yes | Task ID | |
| user_id | Yes | User ID to remove from assignees |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose behavioral traits like side effects, permissions required, or behavior if the user is not assigned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence with no wasted words, though it could benefit from slightly more structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with two required parameters and no output schema, the description adequately conveys the core action, but lacks details on preconditions or consequences.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with descriptions for both parameters; the description adds no additional meaning beyond the schema, resulting in a baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Remove' and clearly states the resource 'user from a task's assignees', distinguishing it from sibling tools like 'assign_task'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as removing a team member or deleting a task.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
unlink_team_project (A)
Unlink a project from a team. Find both by name or ID.
| Name | Required | Description | Default |
|---|---|---|---|
| team_id | No | Team ID (optional if team_name is provided) | |
| team_name | No | Team name (optional if team_id is provided) | |
| project_id | No | Project ID (optional if project_name is provided) | |
| project_name | No | Project name (optional if project_id is provided) |
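The ID/name pairing can be sketched as a small pre-flight check; the resolve helper and the team/project names below are illustrative, not part of the server:

```python
def resolve(id_value, name_value, label):
    """Each resource needs either its ID or its name (ID wins if both given)."""
    if id_value is None and name_value is None:
        raise ValueError(f"provide {label}_id or {label}_name")
    return id_value if id_value is not None else name_value

args = {"team_name": "Platform", "project_name": "Billing"}  # illustrative

team = resolve(args.get("team_id"), args.get("team_name"), "team")
project = resolve(args.get("project_id"), args.get("project_name"), "project")
```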
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description lacks details on side effects, what happens if the link doesn't exist, or the permissions required. The verb 'unlink' implies removal of the association, but no further context is given.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise: two sentences with no unnecessary words. Front-loaded with verb and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple mutation tool with optional parameters and no output schema, the description could mention behavior when resources are not found or consequences. Currently minimal but adequate for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with per-parameter descriptions. The description adds value by clarifying the pairing of ID/name parameters, which is not obvious from individual schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Unlink a project from a team') and how to identify resources ('Find both by name or ID'). It is specific and distinguishes from the sibling tool 'link_team_project'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus alternatives, and no prerequisites or context for usage; the description only mentions the identification method.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_org (A)
Update your organization's details (owner only). Set name, description, business model, objectives, custom context for AI, or links.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | Organization name (optional) | |
| links | No | Links as JSON array: [{"label": "...", "url": "...", "type": "..."}] (optional) | |
| objectives | No | Objectives as a list of strings (optional) | |
| description | No | Organization description (optional) | |
| business_model | No | Business model description (optional) | |
| custom_context | No | Custom context for AI agents (optional) |
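The links parameter expects a JSON array of objects with the keys shown in the schema; a sketch with illustrative values, checking that each link carries the documented keys:

```python
# Illustrative update_org arguments; description and URL are placeholders.
args = {
    "description": "B2B SaaS for logistics teams",
    "links": [
        {"label": "Handbook", "url": "https://example.com/handbook", "type": "docs"},
    ],
}

for link in args["links"]:
    # each link object must carry the three documented keys
    assert {"label", "url", "type"} <= set(link)
```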
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It mentions the update action and owner restriction but lacks details on side effects, required permissions beyond ownership, rate limits, or error cases.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence with no fluff, delivering essential information upfront.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 optional parameters and no output schema, the description covers the core function but omits return value, error handling, and behavioral nuances. It is adequate for a simple update tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description lists the parameters (name, description, etc.) but does not add deeper meaning or format details beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update'), the resource ('your organization's details'), and a key constraint ('owner only'), distinguishing it from related tools like 'setup_organization' (create) and other update tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies that only the owner should use this tool, but does not explicitly state when to avoid it or mention alternatives, leaving usage guidance somewhat implicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_position (A)
Update a position's title, description, or permissions (admin only). Find by title or ID.
| Name | Required | Description | Default |
|---|---|---|---|
| new_title | No | New title (optional) | |
| position_id | No | Position ID (optional if position_title is provided) | |
| position_title | No | Position title to find (optional if position_id is provided) | |
| new_description | No | New description (optional) | |
| new_permissions | No | New permissions JSON object mapping permission keys to booleans, e.g. {"can_view_all_projects": true}. Use list_positions to see all available keys. |
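A sketch of a permissions update, assuming an illustrative position title; the permission key is taken from the schema's own example, and real keys should be discovered via list_positions:

```python
# Illustrative update_position arguments.
args = {
    "position_title": "Engineering Lead",  # alternative to position_id
    "new_permissions": {
        # key from the schema example; see list_positions for all valid keys
        "can_view_all_projects": True,
    },
}

# permission values must be booleans, per the documented mapping
assert all(isinstance(v, bool) for v in args["new_permissions"].values())
```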
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided; the description mentions the mutation ('Update') and the admin restriction, but does not disclose side effects, error handling, or permissions beyond admin.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One concise sentence front-loading action and key constraints; no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers main points (update fields, admin constraint, identification), but lacks mention of return value or partial update behavior; acceptable given no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds value beyond schema by clarifying mutual exclusivity of position_id and position_title, and explaining permissions usage with reference to list_positions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Update' (verb) and 'position' (resource), specifies updatable fields (title, description, permissions), and distinguishes from siblings like create_position and delete_position.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Indicates 'admin only' for usage context and 'Find by title or ID' for identification method, but lacks explicit when-not-to-use or alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_project (A)
Update a project's name, description, or links. Find by name or ID.
| Name | Required | Description | Default |
|---|---|---|---|
| links | No | Links as JSON array: [{"label": "...", "url": "...", "type": "..."}] (optional) | |
| new_name | No | New name (optional) | |
| project_id | No | Project ID (optional if project_name is provided) | |
| project_name | No | Project name to find (optional if project_id is provided) | |
| new_description | No | New description (optional) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It only states 'Update' without mentioning permissions, idempotency, side effects, or error handling. For a mutation tool, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loaded with the core purpose, and contains no redundant information. Every word serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description covers what fields can be updated and how to identify the project, it lacks details about return values, behavior when parameters are omitted, and error scenarios. Given no output schema, more context would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, and the tool description adds little extra meaning beyond listing the updatable fields. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update') and the resource ('a project'), and specifies the fields that can be updated ('name, description, or links'). This distinguishes it from sibling tools like create_project, delete_project, or update_team.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance: 'Find by name or ID' indicates how to identify the project. However, there is no explicit advice on when to use this tool versus alternatives (e.g., update_task, update_team), nor any when-not conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_task (A)
Update a task's status, title, description, priority, or team assignment. Use this to move tasks through the workflow (todo -> in_progress -> done) or reassign to a team.
| Name | Required | Description | Default |
|---|---|---|---|
| tags | No | New tags (replaces existing tags) | |
| title | No | New title | |
| due_at | No | Due date in ISO 8601 format (e.g. '2026-03-15T00:00:00Z'), or null to clear | |
| status | No | New status: todo, in_progress, done, cancelled | |
| task_id | Yes | Task ID to update | |
| team_id | No | Team ID to assign (optional) | |
| priority | No | New priority: low, medium, high, urgent | |
| team_name | No | Team name to assign (optional, alternative to team_id) | |
| description | No | New description |
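A sketch of a workflow transition, with an illustrative task ID, validating the status against the documented enum and confirming the due date parses as ISO 8601:

```python
from datetime import datetime

VALID_STATUSES = {"todo", "in_progress", "done", "cancelled"}

# Illustrative update_task arguments; the task ID is a placeholder.
args = {
    "task_id": "task_123",
    "status": "in_progress",           # todo -> in_progress -> done
    "due_at": "2026-03-15T00:00:00Z",  # ISO 8601, or null to clear
}

assert args["status"] in VALID_STATUSES
# 'Z' is normalized for Pythons older than 3.11
datetime.fromisoformat(args["due_at"].replace("Z", "+00:00"))
```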
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries full burden. It accurately describes the update action but omits details about permissions, destructive potential, or return values. It adds workflow context but does not mention all schema fields (e.g., tags, due_at).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences with no redundant information. It is front-loaded with the action and purpose, making it easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the core use case but omits some updatable fields and does not describe success responses or side effects. Given the tool's 9 parameters and lack of output schema, more detail would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds minimal value beyond it. It lists a subset of parameters without explaining their semantics beyond the schema, which meets the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: updating a task's status, title, description, priority, or team assignment. It uses a specific verb ('update') and identifies the resource ('task'), distinguishing it from siblings like create_task or delete_task.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage scenarios: moving tasks through the workflow or reassigning to a team. This gives context for when to use the tool, though it does not explicitly exclude alternative tools for other actions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_team (B)
Update a team's name, description, or objectives. Find by name or ID.
| Name | Required | Description | Default |
|---|---|---|---|
| team_id | No | Team ID (optional if team_name is provided) | |
| new_name | No | New name (optional) | |
| team_name | No | Team name to find (optional if team_id is provided) | |
| new_objectives | No | New objectives (optional) | |
| new_description | No | New description (optional) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully convey behavior. It only says 'Update', implying mutation, but does not disclose required permissions, whether updates are partial or full, reversibility, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single concise sentence with no redundancy. Front-loaded verb and resource. Every part adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and 5 optional parameters, the description is too sparse. It does not mention return value, error conditions, prerequisites, or behavior when no fields are updated.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for each parameter, so baseline is 3. Description adds 'Find by name or ID' clarifying the dual identification, but does not elaborate on parameter format or constraints beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
It clearly states the verb 'Update', the specific resource 'team', and the updatable attributes (name, description, objectives), along with finding by name or ID. This distinguishes it from sibling tools like create_team and delete_team.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus alternatives such as create_team, update_project, or get_team_details; the description lacks context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_user (A)
Update an existing user's name, role, positions, or skills (admin/owner only).
| Name | Required | Description | Default |
|---|---|---|---|
| user | Yes | User name or email to find | |
| skills | No | Skills list (optional — replaces existing) | |
| new_name | No | New name (optional) | |
| new_role | No | New role: member or admin (optional) | |
| positions | No | Position titles to assign (optional — replaces existing) |
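Because skills and positions replace the existing lists rather than appending to them, an agent should send the full desired set, not a delta. A sketch with illustrative values (the user email and skill names are placeholders):

```python
# Skills fetched beforehand; values are illustrative.
current_skills = ["python", "sql"]

args = {
    "user": "dana@example.com",                 # name or email to find
    "new_role": "admin",                        # member or admin
    "skills": current_skills + ["terraform"],   # full list: replaces, not appends
}

# sending the full set preserves the existing skills
assert set(current_skills) <= set(args["skills"])
```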
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the permission requirement but does not explain update behavior, such as whether omitted fields are left unchanged or cleared, and does not mention the return value, error handling, or idempotency. Transparency beyond the schema descriptions is partial.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single short sentence, front-loaded with the verb and updatable fields and followed by the permission note. No wasted words; highly efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters, no output schema, and no annotations, the description lacks details on the return value, error conditions, and whether the update is partial or a full replacement. Minimally adequate but incomplete for a tool of moderate complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for all 5 parameters, so the baseline is 3. The description lists the fields (name, role, positions, skills) but adds no extra meaning beyond the schema's own descriptions, and does not clarify replace semantics or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Update', the resource 'existing user', and the specific fields (name, role, positions, skills), with the permission requirement 'admin/owner only'. It is distinct from sibling update tools for other entities and from user-management tools like delete_user or invite_user.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly notes that the tool is for admin/owner users, providing a clear usage constraint. However, it does not mention when not to use it (e.g., for other user actions like deletion) or suggest alternative tools. It lacks exclusion criteria but provides sufficient context for typical use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
upload_document (A)
Upload a document to the organization's knowledge base. Documents can be templates, policies, contracts, procedures, guides, or references. They are indexed in the knowledge graph and automatically surfaced as context when relevant to a task. Supports both text and binary files: for binary files (PDFs, images, etc.), set is_base64=true and provide the content as a base64-encoded string, along with the appropriate mime_type (e.g. 'application/pdf', 'image/png'). Optionally set file_name for the original filename. Embeddings are generated automatically for search.
| Name | Required | Description | Default |
|---|---|---|---|
| tags | No | Tags for categorization and search (e.g. ['tax', 'contract', 'template']) | |
| title | Yes | Document title | |
| content | Yes | Full document content (text/markdown) | |
| category | Yes | Category: template, policy, procedure, reference, contract, guide, checklist, other | |
| team_ids | No | Team IDs to associate with (optional, alternative to team_names) | |
| file_name | No | Original file name (e.g. 'contract.pdf'). Optional. | |
| is_base64 | No | Set to true if content is base64-encoded (for binary files like PDFs, images) | |
| mime_type | No | MIME type of the content (default: text/plain). Use application/pdf, image/png, etc. for binary files. | |
| project_id | No | Project ID to associate with (optional, alternative to project_name) | |
| team_names | No | Team names to associate the document with (optional, for team-level access control) | |
| project_name | No | Project name to associate with (optional) |
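The binary-upload path described above can be sketched as follows; the PDF bytes are a placeholder, and the round-trip check confirms the base64 encoding the server would decode:

```python
import base64

pdf_bytes = b"%PDF-1.4 minimal"  # placeholder binary content

# Illustrative upload_document arguments for a binary file.
args = {
    "title": "Standard Contract Template",
    "category": "contract",  # one of the documented categories
    "content": base64.b64encode(pdf_bytes).decode("ascii"),
    "is_base64": True,       # required for binary content
    "mime_type": "application/pdf",
    "file_name": "contract.pdf",
}

# round-trip check: the server can recover the original bytes
assert base64.b64decode(args["content"]) == pdf_bytes
```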
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description discloses key behaviors: documents are indexed, surfaced as context, support for binary files via base64, and automatic embedding generation. It does not mention authorization, rate limits, or idempotency, but covers the main functional traits well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured paragraph that front-loads the main action ('Upload a document...') and then adds crucial details in a logical order. Every sentence provides necessary information without redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (11 parameters, no output schema), the description covers core usage but lacks explanation of return values (e.g., document ID) and the relationship between alternative parameters like team_ids vs team_names. It also doesn't clarify whether parameters combine or are mutually exclusive. Some gaps remain for a fully self-contained description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage, giving a baseline of 3. The description adds value by explaining the overall purpose, the handling of binary files (is_base64 and mime_type), and the fact that embeddings are generated. It enriches parameter meaning beyond the schema's individual descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Upload a document to the organization's knowledge base' and specifies the types of documents (templates, policies, etc.), distinguishing it from sibling tools like find_documents, get_document, delete_document. The verb 'upload' combined with the resource and additional details make the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what the tool does but does not explicitly state when to use it versus alternatives or related creation tools (notably, no update_document tool exists to contrast with). It implicitly conveys that this is for adding new documents, but lacks explicit exclusions or comparisons to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
whoami
Get your profile, organization, projects, recent sessions, and top entities. This is the best starting point — call this first to understand what's available. No parameters needed.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It clearly indicates a read operation ('Get your...') and notes no parameters are needed, but does not disclose potential auth requirements or rate limits. However, the tool's nature (user profile) makes the behavioral traits obvious.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. The first sentence states the action and scope, the second provides usage guidance. Front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with no parameters and no output schema, the description fully covers what it returns and why to use it. It also positions itself among many sibling tools, making it complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters and 100% schema description coverage. The description redundantly states 'No parameters needed,' which adds no additional meaning beyond the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Get your profile, organization, projects, recent sessions, and top entities' with specific verb and resources. It distinguishes itself from siblings by noting it's the best starting point.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear when-to-use guidance: 'This is the best starting point — call this first to understand what's available.' Also states 'No parameters needed,' eliminating any parameter-related uncertainty.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
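Before publishing, the claim file can be sanity-checked locally. A small sketch, assuming only what the instructions above state (valid JSON with a maintainers list containing your account email); Glama's server-side verification may check more:

```python
import json

def check_glama_json(text: str, expected_email: str) -> bool:
    """Return True if text parses as a connector claim file whose
    maintainers list includes expected_email."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return False
    maintainers = data.get("maintainers", [])
    return any(
        m.get("email") == expected_email
        for m in maintainers
        if isinstance(m, dict)
    )

claim = '''{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}'''
print(check_glama_json(claim, "your-email@example.com"))  # True
```

Serve the file at the exact path `/.well-known/glama.json` on the same domain as the server URL, since well-known URIs are resolved relative to the domain root.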
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.