
Server Details

A Model Context Protocol (MCP) server for Selise Blocks Cloud integration

Status: Unhealthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

36 tools
activate_social_login (Grade: B)

Activate social login for the project by updating authentication configuration.

Args:
- item_id: Configuration item ID (default: "682c40c3872fab1bc2cc8988")
- project_key: Project key (tenant ID). Uses global tenant_id if not provided
- refresh_token_minutes: Refresh token validity in minutes (default: 300)
- access_token_minutes: Access token validity in minutes (default: 15)
- remember_me_minutes: Remember me token validity in minutes (default: 43200)
- allowed_grant_types: List of allowed grant types (default: ["password", "refresh_token", "social"])
- wrong_attempts_lock: Number of wrong attempts to lock account (default: 5)
- lock_duration_minutes: Account lock duration in minutes (default: 5)

Returns: JSON string with social login activation result
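For illustration, the docstring's defaults can be collected into a single argument payload. All values below are hypothetical ("my-tenant-id" is a placeholder), and how the call is dispatched depends on your MCP client:

```python
# Hypothetical arguments for the activate_social_login tool, using the
# defaults listed in its docstring.
activate_args = {
    "item_id": "682c40c3872fab1bc2cc8988",  # documented default item ID
    "project_key": "my-tenant-id",          # omit to fall back to the global tenant_id
    "refresh_token_minutes": 300,
    "access_token_minutes": 15,
    "remember_me_minutes": 43200,
    "allowed_grant_types": ["password", "refresh_token", "social"],
    "wrong_attempts_lock": 5,
    "lock_duration_minutes": 5,
}
```

Since activating social login while omitting "social" from allowed_grant_types would be self-defeating, keeping the documented grant-type default is the safe choice.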

Parameters (JSON Schema)
item_id (optional, default: 682c40c3872fab1bc2cc8988)
project_key (optional)
allowed_grant_types (optional)
remember_me_minutes (optional)
wrong_attempts_lock (optional)
access_token_minutes (optional)
lock_duration_minutes (optional)
refresh_token_minutes (optional)

Output Schema

result (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states that this updates configuration (implying mutation) but doesn't disclose behavioral traits such as required permissions, whether changes are reversible, rate limits, or error conditions. The description is minimal beyond basic function.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Structured with a purpose sentence and parameter list, but could be more front-loaded. The parameter explanations are detailed but necessary given low schema coverage. Some redundancy exists (e.g., repeating 'minutes' in param names and descriptions), but overall it's functional.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 8 parameters with 0% schema coverage and no annotations, the description does well on parameters but lacks behavioral context. An output schema exists, so return values needn't be explained. However, for a mutation tool, more on permissions, side effects, or error handling would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It provides clear semantics for all 8 parameters, explaining each (e.g., 'refresh_token_minutes: Refresh token validity in minutes'), including defaults and context like 'Uses global tenant_id if not provided'. This adds significant value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('activate social login') and resource ('for the project'), specifying it updates authentication configuration. It distinguishes from siblings like 'get_authentication_config' (read vs. write) and 'enable_authenticator_mfa' (different auth feature), though not explicitly named.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'get_authentication_config' (for reading config) or 'enable_authenticator_mfa' (for other auth methods). It mentions using global tenant_id if project_key not provided, but lacks context on prerequisites or typical workflows.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

add_sso_credential (Grade: B)

Add social login credentials for OAuth providers (Google, Facebook, GitHub, etc.).

Args:
- provider: OAuth provider name (e.g., "google", "facebook", "github")
- client_id: OAuth client ID from provider console
- client_secret: OAuth client secret from provider console
- project_key: Project key (tenant ID). Uses global tenant_id if not provided
- is_enable: Whether to enable this SSO provider (default: True)
- redirect_uri: OAuth redirect URI (optional)

Returns: JSON string with SSO credential save result
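A sketch of a payload for this tool, assuming placeholder credentials (real values come from the provider's developer console) and the schema's documented defaults:

```python
# Hypothetical arguments for add_sso_credential. The credential strings
# are placeholders, not working values.
sso_args = {
    "provider": "google",
    "client_id": "example-client-id",
    "client_secret": "example-client-secret",
    "project_key": "my-tenant-id",                  # optional; falls back to global tenant_id
    "is_enable": True,                              # documented default
    "redirect_uri": "http://localhost:3000/login",  # schema default
}

# provider, client_id, and client_secret are the schema's required fields
required = {"provider", "client_id", "client_secret"}
missing = required - sso_args.keys()
```

A minimal call could supply only the three required fields and rely on the schema defaults for the rest.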

Parameters (JSON Schema)
audience (optional, default: http://localhost:3000)
provider (required)
client_id (required)
is_enable (optional)
project_key (optional)
redirect_uri (optional, default: http://localhost:3000/login)
client_secret (required)

Output Schema

result (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is an 'add' operation (implying creation/mutation) and mentions the return format, but doesn't disclose permissions needed, whether it's idempotent, rate limits, error conditions, or what happens if credentials already exist. The description adds some context but leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections (description, Args, Returns). The description sentence is efficient, and parameter explanations are appropriately detailed. Minor room for improvement in front-loading more critical information, but overall well-organized with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters, no annotations, and an output schema exists, the description is moderately complete. It covers parameter semantics well and mentions the return format, but lacks behavioral context for a mutation tool (permissions, side effects, error handling). The existence of an output schema reduces the need to explain return values, but more operational guidance would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It provides meaningful explanations for all 7 parameters beyond just their names, including examples for 'provider', source guidance for 'client_id' and 'client_secret', default values for 'is_enable' and 'project_key', and optionality for 'redirect_uri'. The description effectively adds semantic value that the schema lacks.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Add social login credentials for OAuth providers' with specific examples (Google, Facebook, GitHub). It uses a specific verb ('add') and identifies the resource ('social login credentials'), but doesn't explicitly differentiate from sibling tools like 'activate_social_login' or 'get_authentication_config'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, when not to use it, or how it relates to sibling tools like 'activate_social_login' or 'get_authentication_config'. Usage context is implied but not explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_blocks_cli (Grade: B)

Check if Blocks CLI is installed and available.

Returns: JSON string with CLI availability status
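The tool takes no arguments and returns a JSON string, so a caller only needs to parse the result. The field name "installed" below is purely illustrative; the actual keys of the returned JSON are not documented on this page:

```python
import json

# Stand-in for the JSON string check_blocks_cli would return; the
# "installed" key is an assumption, not a documented field.
raw_result = '{"installed": true}'
status = json.loads(raw_result)
cli_available = bool(status.get("installed"))
```

A typical workflow would check this result before attempting any CLI-dependent operation.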

Parameters (JSON Schema)

No parameters

Output Schema

result (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns a JSON string with availability status, which is helpful, but lacks details like whether it performs network checks, requires specific permissions, has side effects, or handles errors. For a diagnostic tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, consisting of two sentences that directly state the purpose and return value. Every sentence earns its place by providing essential information without redundancy or fluff, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, output schema exists), the description is somewhat complete but has gaps. It explains the return format, but without annotations, it misses behavioral context like error handling or side effects. The output schema likely covers return values, so the description doesn't need to detail them further, but overall completeness is minimal.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters, so coverage is trivially 100% and there is nothing for the description to document. Appropriately, it adds no parameter details. A baseline of 4 is applied for tools with zero parameters, as there's nothing to compensate for.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Check if Blocks CLI is installed and available.' It specifies the verb ('Check') and resource ('Blocks CLI'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'install_blocks_cli', which is a related but distinct operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., use before installation attempts), exclusions, or related tools like 'install_blocks_cli'. Without such context, an agent must infer usage based on the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

configure_blocks_data_gateway (Grade: C)

Configure Blocks Data Gateway for GraphQL operations.

Args:
- project_key: Project key (tenant ID). Uses global tenant_id if not provided
- connectionString: Connection string for the database
- databaseName: Name of the database

Returns: JSON string with data gateway configuration result
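Note that the docstring uses camelCase names (connectionString, databaseName) while the JSON schema lists snake_case ones (connection_string, database_name), plus a fourth parameter, use_blocks_db, that the docstring never mentions. A payload keyed by the schema names, with illustrative values:

```python
# Hypothetical arguments for configure_blocks_data_gateway, using the
# snake_case names from the JSON schema. The meaning of use_blocks_db is
# not documented in the description; False here is an assumption.
gateway_args = {
    "project_key": "my-tenant-id",
    "database_name": "app_db",
    "connection_string": "mongodb://localhost:27017/app_db",  # placeholder
    "use_blocks_db": False,
}
```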

Parameters (JSON Schema)
project_key (optional)
database_name (optional)
use_blocks_db (optional)
connection_string (optional)

Output Schema

result (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a configuration operation but doesn't mention whether this is a one-time setup, whether it overwrites existing configurations, what permissions are required, or potential side effects. The description adds minimal behavioral context beyond the basic action.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose. The Args and Returns sections are structured clearly. However, the parameter descriptions could be more concise, and the 'Uses global tenant_id if not provided' note is somewhat buried in the parameter details rather than being in a more prominent position.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that this is a configuration tool with 4 parameters (0% schema coverage) and an output schema exists, the description is moderately complete. It covers the basic purpose and some parameters but lacks important behavioral context about what 'configure' actually entails, especially since no annotations provide safety or mutation hints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It mentions three parameters (project_key, connectionString, databaseName) but the schema has four parameters (including use_blocks_db). The description provides some semantic context for the three mentioned parameters but omits the fourth entirely, leaving significant gaps in parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Configure Blocks Data Gateway for GraphQL operations.' This specifies both the action (configure) and the target resource (Blocks Data Gateway). However, it doesn't differentiate from sibling tools like 'get_blocks_data_gateway_config' beyond the obvious read vs. write distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. While it's clearly a configuration tool, there's no mention of prerequisites, when this should be called versus other configuration tools, or what state the system should be in before invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_module (Grade: C)

Create a new module for translation in a project.

Args:
- module_name: Name of the module to create
- project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with module creation result including module ID and name
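Since module_name is the only required field in the schema, a call can be as small as one key. Both names below are placeholders:

```python
# Hypothetical arguments for create_module.
module_args = {
    "module_name": "checkout",
    "project_key": "my-tenant-id",  # optional; falls back to the global tenant_id
}

# A minimal call could omit project_key entirely
minimal_args = {"module_name": "checkout"}
```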

Parameters (JSON Schema)
module_name (required)
project_key (optional)

Output Schema

result (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a creation operation, implying mutation, but lacks details on permissions, side effects, error handling, or rate limits. It mentions a return format, but this is redundant given the output schema exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections for Args and Returns, and sentences are direct without fluff. However, the return statement is somewhat redundant since an output schema exists, slightly reducing efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a mutation tool with no annotations, 0% schema coverage, but an output schema, the description is minimally adequate. It covers the basic purpose and parameters but lacks behavioral context like auth needs or error cases, making it incomplete for safe agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It explains 'module_name' as 'Name of the module to create' and 'project_key' with its default behavior, adding meaningful context beyond the bare schema. However, it doesn't cover constraints like length or format, leaving gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Create a new module') and resource ('for translation in a project'), which is specific and actionable. It distinguishes from siblings like 'create_project' or 'create_schema' by specifying the module's purpose for translation, though it doesn't explicitly contrast with them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing project), exclusions, or comparisons to sibling tools like 'get_translation_modules' or 'save_module_keys_with_translations', leaving the agent to infer context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_permission (Grade: B)

Create a new permission.

Args:
- name: Permission name
- description: Permission description
- resource: Resource name (arbitrary string)
- resource_group: Resource group name (arbitrary string)
- tags: List of action tags (e.g., ["create", "read", "update", "delete"])
- project_key: Project key (tenant ID). Uses global tenant_id if not provided
- type: Permission type (default: 3 for "Data protection")
- dependent_permissions: List of dependent permission IDs (default: [])
- is_built_in: Whether it's a built-in permission (default: false)

Returns: JSON string with permission creation result
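A payload sketch following the docstring's defaults. The permission, resource, and group names are invented for illustration:

```python
# Hypothetical arguments for create_permission. All names and values
# are illustrative; defaults match the docstring.
permission_args = {
    "name": "invoice.manage",
    "description": "Create and update invoices",
    "resource": "invoice",            # arbitrary string per the docstring
    "resource_group": "billing",      # arbitrary string per the docstring
    "tags": ["create", "read", "update", "delete"],
    "type": 3,                        # default: "Data protection"
    "dependent_permissions": [],      # default
    "is_built_in": False,             # default
}
```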

Parameters (JSON Schema)
name (required)
tags (required)
type (optional)
resource (required)
description (required)
is_built_in (optional)
project_key (optional)
resource_group (required)
dependent_permissions (optional)

Output Schema

result (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool creates a permission and returns a JSON result, but lacks critical details: whether this is a mutating operation (implied but not explicit), what permissions or roles are required to invoke it, if there are rate limits, or how errors are handled. The description is minimal and doesn't compensate for the absence of annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a brief purpose statement followed by organized 'Args' and 'Returns' sections. Each sentence earns its place by clarifying parameters or outcomes. It's appropriately sized for a tool with 9 parameters, though the initial purpose line could be slightly more informative (e.g., mentioning the system context).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (9 parameters, mutation operation) and lack of annotations, the description is moderately complete. It excels in parameter semantics but lacks behavioral context (e.g., auth needs, side effects). The presence of an output schema means it doesn't need to detail return values, but overall, it's adequate with clear gaps in usage and transparency.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate, which it does effectively. It provides clear semantics for all 9 parameters, including examples (e.g., tags like ['create', 'read', 'update', 'delete']), defaults (type: 3, is_built_in: false), and contextual explanations (project_key uses global tenant_id if not provided). This adds significant value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Create a new permission' with a specific verb and resource. It distinguishes from siblings like 'update_permission' and 'list_permissions' by focusing on creation rather than modification or listing. However, it doesn't explicitly contrast with 'set_role_permissions' or explain what a 'permission' is in this system.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'update_permission' or 'set_role_permissions'. It doesn't mention prerequisites, such as needing existing resources or projects, or typical workflows where permission creation occurs. The agent must infer usage from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_project (Grade: B)

Create a new project in Selise Cloud.

Args:
- project_name: Name of the project to create
- repo_name: Repository name (e.g., 'username/repo')
- repo_link: Full GitHub repository URL
- repo_id: Repository ID from GitHub or Git provider
- is_production: Whether this is a production environment (default: False)

Returns: JSON string with project creation results
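A payload sketch in the shape the docstring describes. The project and repository values are placeholders, not a real repository:

```python
# Hypothetical arguments for create_project.
project_args = {
    "project_name": "storefront",
    "repo_name": "acme/storefront",                     # 'username/repo' form
    "repo_link": "https://github.com/acme/storefront",  # full repository URL
    "repo_id": "123456789",                             # ID from the Git provider
    "is_production": False,                             # documented default
}
```

Keeping repo_name consistent with the path portion of repo_link is a reasonable sanity check before calling.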

Parameters (JSON Schema)
repo_id (optional, default: Any)
repo_link (required)
repo_name (required)
project_name (required)
is_production (optional)

Output Schema

result (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It states that it creates a project but doesn't disclose behavioral traits like required permissions, whether this is an idempotent operation, what happens on failure, rate limits, or authentication needs. The description mentions a JSON return but provides no details about success/error structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (Args, Returns) and efficiently explains the tool's purpose and parameters. Every sentence earns its place, though the 'Returns' statement could be more specific about the JSON structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, 5 parameters with 0% schema coverage, but an output schema exists, the description is moderately complete. It covers parameter semantics well but lacks behavioral context for a mutation tool. The output schema reduces the need to describe return values, but more operational guidance would help.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It provides clear semantic explanations for all 5 parameters beyond their titles, including examples ('username/repo'), defaults ('default: False'), and clarifications ('Repository ID from GitHub or Git provider'). This adds substantial value over the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool creates a new project in Selise Cloud, specifying the verb ('create') and resource ('project'). It distinguishes from sibling tools like 'get_projects' (read vs. write), but doesn't explicitly differentiate from other creation tools like 'create_module' or 'create_schema' beyond the resource type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. While it's implied for project creation scenarios, there's no mention of prerequisites, when not to use it, or how it relates to sibling tools like 'get_projects' or other creation tools in the server.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_role (Grade: B)

Create a new role.

Args:
- name: Role name
- description: Role description
- slug: Role slug (URL-friendly identifier)
- project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with role creation result
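A payload sketch with placeholder values. The docstring calls slug a "URL-friendly identifier" without defining it; the lowercase-and-hyphens pattern below is an assumed interpretation, not a documented constraint:

```python
import re

# Hypothetical arguments for create_role. The slug pattern check is an
# assumption about what "URL-friendly" means here.
role_args = {
    "name": "Billing Admin",
    "description": "Manages billing settings",
    "slug": "billing-admin",
    "project_key": "my-tenant-id",  # optional; falls back to the global tenant_id
}

slug_ok = re.fullmatch(r"[a-z0-9]+(?:-[a-z0-9]+)*", role_args["slug"]) is not None
```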

Parameters (JSON Schema)
name (required)
slug (required)
description (required)
project_key (optional)

Output Schema

result (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states 'Create a new role' which implies a write/mutation operation, but doesn't mention permissions required, whether the operation is idempotent, rate limits, or what happens on conflicts (e.g., duplicate slugs). This leaves significant gaps for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose in the first sentence. The Args/Returns sections are structured but slightly verbose; every sentence adds value, though 'JSON string with role creation result' could be more specific (e.g., mentioning the returned role object).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a mutation tool with no annotations, 4 parameters (3 required), 0% schema description coverage, and an output schema exists, the description is moderately complete. It covers parameter semantics well but lacks behavioral context (e.g., permissions, errors) and doesn't leverage the output schema to clarify return values beyond 'JSON string', leaving room for improvement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context for all parameters beyond the schema, which has 0% description coverage. It explains that 'slug' is a 'URL-friendly identifier' and that 'project_key' defaults to 'global tenant_id if not provided'. This compensates well for the schema's lack of descriptions, though it could elaborate on format constraints (e.g., slug uniqueness).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Create') and resource ('a new role'), making it immediately understandable. However, it doesn't differentiate this from sibling tools like 'list_roles' or 'update_permission' beyond the basic action, which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing admin permissions), when not to use it (e.g., if a role already exists), or refer to sibling tools like 'list_roles' for checking existing roles or 'update_permission' for modifying roles.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_schema (C)

Create a new schema in Selise Blocks GraphQL API.

Args:
    schema_name: Name of the schema to create
    project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with schema creation result

Parameters (JSON Schema)

Name         Required  Description  Default
project_key  No        -            -
schema_name  Yes       -            -
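Because the description does not say what happens on name conflicts, a cautious client can check existing schemas first (e.g. via the sibling list_schemas tool) before creating. A minimal sketch, with `existing_names` standing in for the result of that lookup:

```python
def should_create_schema(schema_name: str, existing_names: set[str]) -> bool:
    """Return True only if the schema name is not already taken.

    `existing_names` would come from a prior list_schemas call; here it is
    just a plain set so the guard can be shown in isolation.
    """
    return schema_name not in existing_names


existing = {"orders", "customers"}
safe = should_create_schema("invoices", existing)   # True: safe to create
duplicate = should_create_schema("orders", existing)  # False: skip the call
```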

Output Schema

Name    Required  Description
result  Yes       -

Behavior: 2/5

No annotations are provided, so the description carries full burden. It states this is a creation operation, implying mutation, but doesn't disclose behavioral traits like required permissions, whether it's idempotent, error handling, or side effects. The mention of 'global tenant_id' as a fallback for project_key adds some context, but overall lacks critical details for a mutation tool.

Conciseness: 4/5

The description is appropriately sized with three sentences: purpose, args, and returns. It's front-loaded with the core action, and each sentence adds value without redundancy. The structure is clear, though the 'Args' and 'Returns' sections could be integrated more smoothly into prose.

Completeness: 3/5

Given a mutation tool with no annotations, 2 parameters (1 required), 0% schema coverage, and an output schema, the description is moderately complete. It covers the basic purpose and parameters but lacks behavioral details and usage context. The output schema reduces the need to explain return values, but more guidance on errors or dependencies would improve completeness.

Parameters: 3/5

Schema description coverage is 0%, so the description must compensate. It explains that 'schema_name' is the name to create and 'project_key' is a tenant ID with a fallback to global tenant_id, adding meaning beyond the schema's basic titles. However, it doesn't cover constraints like format, length, or validation rules, leaving gaps in parameter understanding.

Purpose: 4/5

The description clearly states the action ('Create a new schema') and resource ('in Selise Blocks GraphQL API'), making the purpose unambiguous. It distinguishes this tool from siblings like 'get_schema', 'list_schemas', 'finalize_schema', and 'update_schema_fields' by focusing on creation. However, it doesn't explicitly differentiate from 'create_module' or 'create_project' in terms of resource hierarchy or dependencies.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., whether a project must exist first), exclusions, or comparisons to siblings like 'finalize_schema' or 'update_schema_fields'. The only implied context is that it's for GraphQL API schemas, but no usage rules are stated.
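Aggregating the six dimension scores above for create_schema (Behavior 2, Conciseness 4, Completeness 3, Parameters 3, Purpose 4, Usage Guidelines 2) gives a rough overall figure; note that the mapping from this mean to the letter grade is not documented, so the arithmetic below is only descriptive:

```python
# Dimension scores for create_schema, as listed in the review above.
scores = {
    "Behavior": 2,
    "Conciseness": 4,
    "Completeness": 3,
    "Parameters": 3,
    "Purpose": 4,
    "Usage Guidelines": 2,
}

mean_score = sum(scores.values()) / len(scores)  # 18 / 6 = 3.0
```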

enable_authenticator_mfa (B)

Enable Authenticator Multi-Factor Authentication for a project.

Args:
    project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with Authenticator MFA configuration result

Parameters (JSON Schema)

Name         Required  Description  Default
project_key  No        -            -
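Since this tool and enable_email_mfa differ only in the MFA method they switch on, a client-side dispatcher makes the choice explicit. The helper below is hypothetical; the tool names are the real ones from this server:

```python
# Map a user-facing MFA method to the MCP tool that enables it.
MFA_TOOLS = {
    "authenticator": "enable_authenticator_mfa",
    "email": "enable_email_mfa",
}


def mfa_tool_for(method: str) -> str:
    """Resolve an MFA method name to the matching tool, or fail loudly."""
    try:
        return MFA_TOOLS[method]
    except KeyError:
        raise ValueError(f"unsupported MFA method: {method!r}") from None
```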

Output Schema

Name    Required  Description
result  Yes       -

Behavior: 2/5

With no annotations provided, the description carries full burden for behavioral disclosure. It states this is an 'enable' operation (implying a write/mutation) but doesn't disclose any behavioral traits: no information about required permissions, whether this is reversible, what side effects occur, rate limits, or what happens if MFA is already enabled. The description adds minimal behavioral context beyond the basic action.

Conciseness: 4/5

The description is appropriately sized with three clear sections: purpose statement, parameter explanation, and return value note. Each sentence serves a distinct purpose with minimal redundancy. The structure is logical and front-loaded with the main action.

Completeness: 3/5

Given this is a mutation tool with no annotations and an output schema exists (so return values are documented elsewhere), the description provides adequate but minimal context. It covers the basic action and parameter semantics but lacks important behavioral context about permissions, side effects, and differentiation from sibling tools that would be valuable for an agent.

Parameters: 4/5

With only 1 parameter and 0% schema description coverage, the description compensates well by explaining the parameter's purpose ('Project key (tenant ID)') and its default behavior ('Uses global tenant_id if not provided'). This adds meaningful semantic context beyond what the bare schema provides, though it doesn't explain format or validation requirements.

Purpose: 4/5

The description clearly states the action ('Enable') and target resource ('Authenticator Multi-Factor Authentication for a project'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from the sibling tool 'enable_email_mfa', which appears to be a similar MFA-enabling tool for a different method, missing an opportunity for clear sibling distinction.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like 'enable_email_mfa' or other authentication configuration tools. It mentions the parameter default behavior but offers no context about prerequisites, appropriate scenarios, or when not to use this tool.

enable_email_mfa (C)

Enable Email Multi-Factor Authentication for a project.

Args:
    project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with Email MFA configuration result

Parameters (JSON Schema)

Name         Required  Description  Default
project_key  No        -            -

Output Schema

Name    Required  Description
result  Yes       -

Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the action ('Enable') but doesn't describe what this entails, e.g., whether it requires admin permissions, if it affects existing users, what the configuration result includes, or any side effects like email notifications. This leaves significant gaps for an agent to understand the tool's behavior.

Conciseness: 4/5

The description is appropriately sized and front-loaded, with the purpose stated first in a clear sentence. The additional details about arguments and returns are structured but could be slightly more concise, as the 'Args' and 'Returns' sections are somewhat redundant with the schema and output schema.

Completeness: 3/5

Given the tool's complexity (a configuration action with one parameter) and the presence of an output schema (which handles return values), the description is minimally adequate. However, with no annotations and low schema coverage, it lacks details on behavioral aspects like permissions or effects, making it incomplete for safe and effective use by an agent.

Parameters: 3/5

The schema description coverage is 0%, so the description must compensate. It adds minimal value by explaining that 'project_key' is a 'Project key (tenant ID)' and that it 'Uses global tenant_id if not provided', which clarifies the parameter's purpose and default behavior. However, this is basic and doesn't fully compensate for the lack of schema descriptions, keeping it at the baseline level.

Purpose: 4/5

The description clearly states the action ('Enable') and resource ('Email Multi-Factor Authentication for a project'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from the sibling tool 'enable_authenticator_mfa', which appears to be a related MFA method, so it doesn't reach the highest score for sibling differentiation.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives like 'enable_authenticator_mfa' or other authentication-related tools. The description only states what it does without context about prerequisites, dependencies, or typical scenarios for enabling email MFA.

finalize_schema (C)

Finalize schema changes by retrieving updated schema (step 3 of schema field management).

Args:
    schema_id: The ID of the schema to finalize
    project_short_key: Project short key
    project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with finalized schema data

Parameters (JSON Schema)

Name               Required  Description  Default
schema_id          Yes       -            -
project_key        No        -            -
project_short_key  Yes       -            -
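The description calls this 'step 3 of schema field management', implying an ordered sequence. Assuming the earlier steps are get_schema and update_schema_fields (both tools exist on this server, though this ordering is an inference, and the argument shapes below are dummies), the workflow can be sketched as an ordered list of calls:

```python
# Inferred 3-step field-management sequence; schema_id and project keys
# are placeholder values, not real identifiers.
steps = [
    ("get_schema", {"schema_id": "sch_123"}),              # step 1: read current fields
    ("update_schema_fields", {"schema_id": "sch_123"}),    # step 2: apply field changes
    ("finalize_schema", {"schema_id": "sch_123",
                         "project_short_key": "DEMO"}),    # step 3: retrieve updated schema
]

tool_order = [name for name, _ in steps]
```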

Output Schema

Name    Required  Description
result  Yes       -

Behavior: 2/5

No annotations are provided, so the description carries the full burden. It states the tool 'finalizes' and 'retrieves', suggesting a read-write operation, but doesn't disclose behavioral traits like whether this is destructive, requires specific permissions, has side effects, or involves rate limits. The description adds minimal context beyond the basic action.

Conciseness: 4/5

The description is appropriately sized and front-loaded with the core purpose in the first sentence. The Args and Returns sections are structured clearly, though the note about 'project_key' could be more integrated. Overall, it's efficient with little waste.

Completeness: 3/5

Given the tool's complexity (3 parameters, no annotations, schema coverage 0%), the description is moderately complete. It explains the purpose, lists parameters, and notes the return format. However, with an output schema present, it doesn't need to detail return values, but behavioral aspects and parameter semantics remain under-specified for a mutation-like tool.

Parameters: 2/5

Schema description coverage is 0%, so the description must compensate. It lists three parameters in the Args section but provides minimal semantics: 'schema_id' and 'project_short_key' are only named, while 'project_key' gets a brief note about default behavior. This adds some value but doesn't fully explain parameter meanings or usage, leaving gaps for undocumented parameters.

Purpose: 4/5

The description clearly states the tool's purpose: 'Finalize schema changes by retrieving updated schema' with the specific context of 'step 3 of schema field management'. It uses a specific verb ('finalize') and resource ('schema changes'), though it doesn't explicitly differentiate from sibling tools like 'update_schema_fields' or 'create_schema'.

Usage Guidelines: 2/5

The description provides minimal guidance: it mentions this is 'step 3 of schema field management', implying a sequence, but doesn't specify when to use this tool versus alternatives like 'update_schema_fields' or 'get_schema'. No explicit when-not-to-use or prerequisite information is given.

get_authentication_config (C)

Get the current authentication configuration for the project.

Args:
    project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with current authentication configuration

Parameters (JSON Schema)

Name         Required  Description  Default
project_key  No        -            -

Output Schema

Name    Required  Description
result  Yes       -

Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool 'Get[s]' data, implying a read-only operation, but doesn't clarify permissions required, rate limits, or what happens if the project_key is invalid. The description adds minimal behavioral context beyond the basic action.

Conciseness: 4/5

The description is appropriately sized and front-loaded, with the main purpose stated first. The 'Args' and 'Returns' sections are structured clearly, though they could be integrated more seamlessly. There's minimal waste, but the formatting as separate sections slightly reduces flow.

Completeness: 3/5

Given the tool has one parameter with low schema coverage (0%) and an output schema exists (so return values are documented elsewhere), the description is moderately complete. It covers the basic purpose and parameter usage but lacks behavioral details and usage guidelines, which are important for a tool in a complex authentication context with many siblings.

Parameters: 3/5

The description adds some semantics for the single parameter: it explains that 'project_key' is a 'Project key (tenant ID)' and notes it 'Uses global tenant_id if not provided.' However, with 0% schema description coverage, the schema provides no parameter details. The description compensates partially but doesn't fully explain the parameter's format or implications, leaving gaps.

Purpose: 4/5

The description clearly states the tool's purpose: 'Get the current authentication configuration for the project.' It specifies the verb ('Get') and resource ('authentication configuration'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'get_auth_status' or 'get_global_state', which might also retrieve authentication-related information.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get_auth_status' or 'get_global_state', nor does it specify prerequisites or contexts where this tool is preferred. The only implied usage is retrieving authentication configuration, but without comparative guidance.

get_auth_status (B)

Check current authentication status and token validity.

Returns: JSON string with authentication status

Parameters (JSON Schema)

No parameters
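Because the tool takes no parameters, its tools/call request is minimal: arguments is just an empty object. The envelope shape follows the MCP specification; the id value is arbitrary:

```python
import json

# Minimal tools/call request for a parameterless tool.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {"name": "get_auth_status", "arguments": {}},
}

wire = json.dumps(request)  # what actually goes over the transport
```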

Output Schema

Name    Required  Description
result  Yes       -

Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the tool 'returns a JSON string with authentication status,' which adds some context about the output format. However, it doesn't describe other behavioral traits, such as whether it requires authentication to call, potential error conditions, or side effects (e.g., token refresh). For a tool with zero annotation coverage, this is insufficient.

Conciseness: 5/5

The description is extremely concise and front-loaded, consisting of only two sentences that directly state the tool's purpose and return value. There is no wasted text, and every sentence earns its place by providing essential information without redundancy.

Completeness: 3/5

Given that the tool has no parameters, annotations, or complex schema, and an output schema exists, the description is minimally adequate. It explains what the tool does and the return format, but it lacks context about when to use it, behavioral details, or integration with sibling tools. For a simple diagnostic tool, this might suffice, but it leaves gaps in usage guidance.

Parameters: 4/5

The tool has 0 parameters, and the input schema has 100% description coverage (though empty). The description doesn't need to add parameter semantics, so it meets the baseline expectation. No additional information is required or provided, which is appropriate for a parameterless tool.

Purpose: 4/5

The description clearly states the tool's purpose: 'Check current authentication status and token validity.' It specifies the verb ('check') and resource ('authentication status and token validity'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate itself from sibling tools like 'get_authentication_config', which might have overlapping functionality.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, context, or exclusions, such as whether it should be used before other authentication-related tools or in response to specific errors. With many sibling tools in the authentication and configuration space, this lack of differentiation is a significant gap.

get_blocks_data_gateway_config (C)

Get Blocks Data Gateway configuration.

Args:
    project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with data gateway configuration result

Parameters (JSON Schema)

Name         Required  Description  Default
project_key  No        -            -

Output Schema

Name    Required  Description
result  Yes       -

Behavior: 2/5

With no annotations provided, the description carries full burden but offers minimal behavioral context. It mentions the return format ('JSON string') but doesn't disclose important traits like whether this requires authentication, has rate limits, affects system state, or what happens when project_key isn't provided beyond the schema's default value.

Conciseness: 4/5

The description is efficiently structured with a clear purpose statement followed by Args and Returns sections. While concise, the Args section could be more integrated with the main description rather than appearing as separate documentation blocks.

Completeness: 3/5

Given the tool has an output schema (which handles return value documentation) and only one parameter with partial semantic coverage in the description, the description is minimally adequate. However, for a configuration retrieval tool with no annotations, it should provide more context about authentication requirements, error conditions, and relationship to sibling configuration tools.

Parameters: 3/5

The description adds meaningful context for the single parameter by explaining that 'project_key' is a 'tenant ID' and that it 'Uses global tenant_id if not provided', information not present in the schema (which has 0% description coverage). This partially compensates for the schema gap, though more detail about the global tenant_id behavior would be helpful.

Purpose: 4/5

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('Blocks Data Gateway configuration'), making it immediately understandable. However, it doesn't differentiate from sibling tools like 'configure_blocks_data_gateway' or explain what distinguishes this read operation from configuration operations.

Usage Guidelines: 2/5

No guidance is provided about when to use this tool versus alternatives. The description doesn't mention sibling tools like 'configure_blocks_data_gateway' for write operations or 'get_global_state' for broader system information, leaving the agent without context for tool selection.

get_global_state (B)

Get the current global state including authentication and application domain.

Returns: JSON string with current global state

Parameters (JSON Schema)

No parameters

Output Schema

ParametersJSON Schema
NameRequiredDescription
resultYes
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It states the tool returns a JSON string but doesn't disclose behavioral traits such as whether it's read-only, requires authentication, has rate limits, or what happens on errors. This is inadequate for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences that are front-loaded: the first states the purpose, and the second specifies the return format. There is no wasted text, though it could be slightly more structured for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters, an output schema exists, and no annotations, the description is minimally complete. It covers the purpose and return format but lacks behavioral context and usage guidelines, which are needed for full understanding despite the low complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so no parameter information is needed. The description doesn't add semantics beyond the schema, but this is acceptable as there are no parameters to document, warranting a baseline score of 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('current global state'), specifying it includes authentication and application domain. However, it doesn't explicitly differentiate from sibling tools like 'get_authentication_config' or 'get_auth_status', which might overlap in scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. With siblings like 'get_authentication_config' and 'get_auth_status', the description lacks context on use cases, prerequisites, or exclusions, leaving the agent to infer usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_module_keys (Grade B)

Get available keys in a specific module for translation in a project.

Args: module_id: The ID of the module to get keys from project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with available keys including key names, IDs, and resources
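
The docstring above is enough to pin down the call shape even though the schema rows carry no descriptions. As an illustration, here is a minimal sketch of the JSON-RPC `tools/call` body an MCP client would send for this tool; the request `id` and the example `module_id` value are assumptions, and only the tool name and argument names come from the definition above.

```python
import json

def build_tool_call(name: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request body (JSON-RPC 2.0)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# module_id is required; project_key is omitted here, so the server
# falls back to the global tenant_id per the docstring above.
request_body = build_tool_call("get_module_keys", {"module_id": "mod-123"})
```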

Parameters (JSON Schema)
Name         Required  Description  Default
module_id    Yes
project_key  No

Output Schema

Name    Required  Description
result  Yes

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves data ('Get'), implying a read-only operation, but does not address permissions, rate limits, side effects, or error handling. The description lacks details on what 'available keys' means in practice, such as whether they are filtered or paginated, leaving the behavioral picture incomplete.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the core purpose stated first. The 'Args' and 'Returns' sections are structured clearly, though they could be more integrated. There is no wasted text, but the formatting as separate sections slightly reduces flow, preventing a perfect score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no annotations, but with an output schema), the description is reasonably complete. It explains the purpose, parameters, and return value. The output schema exists, so detailed return explanations are not needed. However, it lacks behavioral context like error cases or usage scenarios, which holds it back from a score of 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds meaning by explaining that 'module_id' is for a specific module and 'project_key' is a tenant ID with a default to global tenant_id. However, it does not fully cover both parameters' semantics, such as format or constraints, leaving gaps. With two parameters and low schema coverage, this is a minimal but adequate explanation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get available keys in a specific module for translation in a project.' It specifies the verb ('Get'), resource ('available keys'), and context ('in a specific module for translation in a project'). However, it does not explicitly differentiate from sibling tools like 'get_translation_modules' or 'save_module_keys_with_translations', which prevents a score of 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions retrieving keys for translation but does not specify scenarios, prerequisites, or exclusions. For example, it does not clarify if this should be used before 'save_module_keys_with_translations' or in relation to 'get_translation_modules', leaving usage unclear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_projects (Grade B)

Get projects from Selise Blocks API and extract application domains.

Args: tenant_group_id: Tenant Group ID to filter projects (optional) page: Page number for pagination (default: 0) page_size: Number of items per page (default: 100)

Returns: JSON string with projects data and extracted application domains
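
The pagination parameters above (page starting at 0, page_size defaulting to 100) imply a standard paging loop. A sketch under stated assumptions: `call_tool` is a hypothetical stand-in for an MCP client call returning the parsed result dict, and the "projects" key inside the result is an assumption, since only the `result` wrapper is documented.

```python
def collect_all_projects(call_tool, page_size: int = 100) -> list:
    """Page through get_projects until a short batch signals the end.

    `call_tool` is a hypothetical stand-in for an MCP client call that
    returns the parsed result dict; the "projects" key is an assumption.
    """
    projects, page = [], 0
    while True:
        result = call_tool("get_projects", {"page": page, "page_size": page_size})
        batch = result.get("projects", [])
        projects.extend(batch)
        if len(batch) < page_size:  # short batch: no more pages
            break
        page += 1
    return projects
```

A stop condition based on batch length avoids a total-count field, which the docstring does not promise.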

Parameters (JSON Schema)
Name             Required  Description  Default
page             No
page_size        No
tenant_group_id  No

Output Schema

Name    Required  Description
result  Yes

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'extract application domains' as part of the operation, which adds some context beyond a simple fetch. However, it lacks details on permissions, rate limits, error handling, or whether this is a read-only operation (implied by 'Get' but not explicit). The description doesn't contradict annotations, as none exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by parameter and return details. It uses bullet-like formatting for clarity. However, the 'Args' and 'Returns' sections could be integrated more seamlessly, and some redundancy exists (e.g., repeating 'page' details).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no annotations, but with an output schema), the description is fairly complete. It covers purpose, parameters, and return format. The output schema existence means return values don't need explanation, but more behavioral context (e.g., pagination behavior, error cases) would enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It provides clear semantics for all three parameters: 'tenant_group_id' (filtering), 'page' (pagination number), and 'page_size' (items per page), including defaults. This adds significant value beyond the bare schema, though it doesn't explain format constraints (e.g., string format for tenant_group_id).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get projects from Selise Blocks API and extract application domains.' It specifies the verb ('Get'), resource ('projects'), and an additional operation ('extract application domains'). However, it doesn't explicitly differentiate this from sibling tools like 'create_project' or 'list_schemas' beyond the extraction aspect.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, context (e.g., when extraction is needed), or comparisons to sibling tools like 'get_resource_groups' or 'list_schemas'. Usage is implied only by the tool's name and purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_resource_groups (Grade B)

Get available resource groups for a project.

Args: project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with resource groups result

Parameters (JSON Schema)
Name         Required  Description  Default
project_key  No

Output Schema

Name    Required  Description
result  Yes

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool retrieves data ('Get') and mentions a return format ('JSON string'), but lacks details on permissions, rate limits, error handling, or whether it's read-only or has side effects. This leaves significant gaps for an agent to understand operational traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the core purpose stated first, followed by brief parameter and return details. There is minimal waste, though the structure could be slightly more streamlined by integrating the Args and Returns into a single cohesive sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no nested objects) and the presence of an output schema (which handles return values), the description is reasonably complete. It covers the basic purpose, parameter semantics, and return format, though it lacks behavioral context and usage guidelines, which are minor gaps in this simple context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds some meaning beyond the input schema by explaining that 'project_key' is a 'Project key (tenant ID)' and has a default behavior ('Uses global tenant_id if not provided'), which clarifies its optional nature and usage. However, with 0% schema description coverage and only 1 parameter, it compensates moderately but does not fully detail constraints or examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('available resource groups for a project'), making it easy to understand what the tool does. However, it does not explicitly differentiate from sibling tools like 'get_projects' or 'get_global_state', which might also retrieve project-related data, leaving some ambiguity about uniqueness.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as sibling tools like 'get_projects' or 'get_global_state'. It mentions a default behavior for the project_key parameter but does not specify contexts, prerequisites, or exclusions for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_role_permissions (Grade B)

Get permissions assigned to specific role(s).

Args: role_slugs: List of role slugs to filter by project_key: Project key (tenant ID). Uses global tenant_id if not provided page: Page number (default: 0) page_size: Number of items per page (default: 10) search: Search filter (default: "") is_built_in: Filter by built-in status (default: "") resource_group: Filter by resource group (default: "")

Returns: JSON string with role permissions result
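
Of the seven parameters above, only role_slugs is required and the rest have documented defaults, so a client can keep requests small by sending only overrides. A sketch with a hypothetical helper (the function name and its validation are illustrative, not part of the tool):

```python
def role_permission_args(role_slugs, project_key=None, **overrides):
    """Build arguments for get_role_permissions, sending only values
    that differ from the documented defaults (page=0, page_size=10,
    empty search/is_built_in/resource_group). Hypothetical helper."""
    allowed = {"page", "page_size", "search", "is_built_in", "resource_group"}
    unknown = set(overrides) - allowed
    if unknown:
        raise ValueError(f"unknown filters: {sorted(unknown)}")
    args = {"role_slugs": list(role_slugs)}
    if project_key is not None:
        args["project_key"] = project_key  # else server uses global tenant_id
    args.update(overrides)
    return args
```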

Parameters (JSON Schema)
Name            Required  Description  Default
page            No
search          No
page_size       No
role_slugs      Yes
is_built_in     No
project_key     No
resource_group  No

Output Schema

Name    Required  Description
result  Yes

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns 'JSON string with role permissions result,' which hints at a read-only operation, but doesn't clarify authentication requirements, rate limits, pagination behavior, error conditions, or whether it's safe to invoke. For a tool with 7 parameters and no annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It starts with a clear purpose statement, followed by organized parameter documentation and return information. Each sentence serves a specific function without redundancy. However, the 'Args' and 'Returns' sections could be integrated more seamlessly into the flow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, no annotations, but with an output schema), the description is moderately complete. It covers parameters well but lacks behavioral context (e.g., authentication, pagination). The output schema existence means the description doesn't need to detail return values, but overall completeness is adequate with clear gaps in usage guidance and transparency.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description includes an 'Args' section that documents all 7 parameters with brief explanations, adding meaningful context beyond the input schema (which has 0% description coverage). For example, it clarifies that 'project_key' uses 'global tenant_id if not provided' and provides defaults for optional parameters. This compensates well for the schema's lack of descriptions, though some parameter details (like format of 'role_slugs') remain implicit.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get permissions assigned to specific role(s).' It specifies the verb ('Get') and resource ('permissions assigned to specific role(s)'), making the function unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'list_permissions' or 'set_role_permissions', which would require more specific comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'list_permissions' or 'set_role_permissions', nor does it specify prerequisites, appropriate contexts, or exclusions. The agent must infer usage from the tool name and parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_schema (Grade A)

Get a schema's current fields using its ID (step 1 of schema field management).

Args: schema_id: The ID of the schema to retrieve project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with schema fields and metadata
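
Because the output schema exposes a single required `result` field holding a JSON string, responses from this tool (and its siblings here) must be parsed twice. A sketch of that unwrapping; the "fields"/"name" keys in the example response are assumptions, since the docstring only promises "schema fields and metadata":

```python
import json

def unwrap_result(response: dict) -> dict:
    """Tools on this server return a JSON *string* under `result`
    (per the output schema), so the payload needs a second parse."""
    return json.loads(response["result"])

# Hypothetical get_schema response shape, for illustration only.
response = {"result": json.dumps({"fields": [{"name": "title"}, {"name": "body"}]})}
schema = unwrap_result(response)
field_names = [f["name"] for f in schema["fields"]]
```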

Parameters (JSON Schema)
Name         Required  Description  Default
schema_id    Yes
project_key  No

Output Schema

Name    Required  Description
result  Yes

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool retrieves data ('Get'), implying a read-only operation, but doesn't disclose behavioral traits such as authentication needs, rate limits, error handling, or whether it's idempotent. The description adds minimal context beyond the basic operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded, with the core purpose in the first sentence, followed by clear sections for 'Args' and 'Returns.' Every sentence adds value without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, 1 required), no annotations, and the presence of an output schema (which handles return value documentation), the description is fairly complete. It covers purpose, parameters, and return format. However, it lacks details on error cases or behavioral constraints, which could enhance completeness for a read operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful semantics for both parameters: 'schema_id' is explained as 'The ID of the schema to retrieve,' and 'project_key' is clarified with 'Project key (tenant ID). Uses global tenant_id if not provided.' This compensates for the 0% schema description coverage, providing essential context beyond the input schema's basic titles.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get a schema's current fields using its ID.' It specifies the verb ('Get'), resource ('schema's current fields'), and method ('using its ID'). However, it doesn't explicitly differentiate from sibling tools like 'list_schemas' or 'update_schema_fields,' which would require a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by labeling it as 'step 1 of schema field management,' suggesting it's part of a workflow. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'list_schemas' or 'update_schema_fields,' nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_translation_languages (Grade B)

Get available languages for translation in a project.

Args: project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with available languages including language names, codes, and default status
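
Since the return includes "default status", a common client task is picking the default language out of the list. A minimal sketch, assuming an "isDefault" key; that field name is a guess, as only the docstring's wording is available:

```python
def default_language(languages: list) -> dict:
    """Return the language entry flagged as default, or the first one.

    The "isDefault" key is an assumption; the docstring only promises
    language names, codes, and default status.
    """
    for lang in languages:
        if lang.get("isDefault"):
            return lang
    return languages[0]

langs = [
    {"code": "de", "name": "German", "isDefault": False},
    {"code": "en", "name": "English", "isDefault": True},
]
```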

Parameters (JSON Schema)
Name         Required  Description  Default
project_key  No

Output Schema

Name    Required  Description
result  Yes

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns a JSON string with language details, which is basic output information. However, it lacks critical behavioral traits such as whether this is a read-only operation, potential rate limits, authentication requirements, or error handling. The description is minimal and does not compensate for the absence of annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and concise, with a clear purpose statement followed by brief sections for arguments and returns. Each sentence serves a purpose without redundancy. However, the 'Args' and 'Returns' sections could be integrated more smoothly, and there is minor room for improvement in flow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there is an output schema (which likely details the JSON structure), the description does not need to explain return values extensively. However, with no annotations and minimal behavioral context, the description is adequate but leaves gaps. It covers the basic purpose and parameter semantics but lacks usage guidelines and deeper behavioral insights, making it only minimally viable for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context for the single parameter 'project_key', explaining that it defaults to a global tenant_id if not provided. This is valuable semantic information beyond the schema, which has 0% description coverage and only lists the parameter name and type. Since there is only one parameter and the description clarifies its optional nature and default behavior, it effectively compensates for the low schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get available languages for translation in a project.' It specifies the verb ('Get') and resource ('available languages for translation'), making the intent unambiguous. However, it does not explicitly differentiate from sibling tools like 'get_translation_modules' or 'publish_translation', which reduces clarity in context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions that 'project_key' uses a global default if not provided, but this is parameter semantics, not usage context. There is no indication of prerequisites, when to prefer this over similar tools, or any exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_translation_modules (Grade B)

Get available modules for translation in a project.

Args: project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with available modules including module names and IDs

Parameters (JSON Schema)
Name         Required  Description  Default
project_key  No

Output Schema

Name    Required  Description
result  Yes

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves data ('Get'), implying a read-only operation, but does not specify if it requires authentication, has rate limits, or what happens if the project key is invalid. The return format is mentioned, but behavioral traits like error handling or data freshness are omitted.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and concise, with a clear purpose statement followed by parameter and return details in separate sections. Each sentence adds value without redundancy. However, the 'Args' and 'Returns' labels could be integrated more smoothly, slightly affecting flow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there is an output schema (though not shown), the description need not detail return values, which it handles by stating the format. However, as a read operation with no annotations and minimal sibling differentiation, it lacks context on prerequisites and behavioral nuances, making it adequate but incomplete for optimal agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context for the single parameter: it explains that 'project_key' is a 'Project key (tenant ID)' and specifies a fallback behavior ('Uses global tenant_id if not provided'). Since schema description coverage is 0% and there is only one parameter, this adequately compensates, providing clear semantics beyond the basic schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get available modules for translation in a project.' It specifies the verb ('Get') and resource ('modules for translation'), making it easy to understand. However, it does not explicitly differentiate from sibling tools like 'get_module_keys' or 'create_module', which could cause confusion about scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions a project key parameter but does not explain prerequisites, such as needing an existing project or how it relates to other translation tools like 'get_translation_languages' or 'publish_translation'. This lack of context leaves usage unclear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

install_blocks_cli (Grade B)

Install Blocks CLI using npm.

Returns: JSON string with installation result

Parameters (JSON Schema)

No parameters

Output Schema

Name    Required  Description
result  Yes

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the action ('Install') and the return format ('JSON string'), but fails to disclose critical behavioral traits such as whether it requires admin permissions, whether it is idempotent, its potential side effects, or its error handling, which are essential for a tool that modifies the system.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loaded, with two sentences that directly state the action and return value without unnecessary details. However, it could be slightly more structured by explicitly separating installation steps from output details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters, annotations, but an output schema exists, the description covers the basic purpose and return format. However, for a tool that performs installation (a mutation), it lacks details on behavioral context like permissions or system impact, making it minimally adequate but incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so no parameter documentation is needed. The description does not add parameter semantics, but this is acceptable given the lack of parameters, aligning with the baseline for zero parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Install') and target ('Blocks CLI using npm'), which is specific and unambiguous. However, it does not differentiate from sibling tools like 'check_blocks_cli', which might be related but serves a different purpose, so it misses full sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, such as 'check_blocks_cli' or other installation-related tools. It lacks context on prerequisites, timing, or exclusions, leaving usage unclear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_captcha_configs (B)

List all CAPTCHA configurations for a project.

Args: project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with list of CAPTCHA configurations

Parameters (JSON Schema)

Name         Required  Description  Default
project_key  No

Output Schema

Name    Required  Description
result  Yes

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool lists configurations and returns a JSON string, which covers basic output. However, it lacks details on permissions required, rate limits, pagination, error handling, or whether the list includes all configurations or only active ones. For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose clearly, followed by structured 'Args' and 'Returns' sections. There's no wasted text, and each part adds value. It could be slightly more concise by integrating the parameter explanation into the main flow, but overall it's efficient and well-organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no nested objects) and the presence of an output schema (which handles return value details), the description is reasonably complete. It covers the purpose, parameter semantics, and return format. However, it lacks behavioral context like authentication needs or error cases, which holds it back from a perfect score, especially with no annotations to fill those gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds some semantic context: it explains that 'project_key' is a tenant ID and defaults to a global value if not provided, which clarifies usage beyond the schema's title ('Project Key') and default (''). With 0% schema description coverage and 1 parameter, this compensates partially, but it doesn't detail format constraints or examples. The low coverage raises what the description must carry; the brief explanation meets that bar only partially, so the score stays at 3 rather than rising to 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List all CAPTCHA configurations for a project.' It specifies the verb ('List'), resource ('CAPTCHA configurations'), and scope ('for a project'), which is specific and actionable. However, it doesn't explicitly differentiate from siblings like 'save_captcha_config' or 'update_captcha_status' beyond implying a read-only vs. write operation, keeping it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal usage guidance. It mentions that 'project_key' uses a global default if not provided, which hints at when to omit the parameter, but offers no explicit advice on when to use this tool versus alternatives like 'get_authentication_config' or other configuration-related siblings. There's no mention of prerequisites, such as needing an existing project, or exclusions for when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_permissions (B)

List all permissions for a project.

Args:
  project_key: Project key (tenant ID). Uses global tenant_id if not provided
  page: Page number (default: 0)
  page_size: Number of items per page (default: 10)
  search: Search filter (default: "")
  sort_by: Field to sort by (default: "Name")
  sort_descending: Sort order (default: false)
  is_built_in: Filter by built-in status (default: "")
  resource_group: Filter by resource group (default: "")

Returns: JSON string with permission list result
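As an illustrative sketch, the documented defaults translate into an arguments payload like the one below. The helper function is hypothetical; only the parameter names and defaults come from the docstring above.

```python
# Hypothetical helper that assembles the arguments for a list_permissions
# call. Parameter names and defaults mirror the docstring; the helper
# itself is illustrative, not part of the server.

def list_permissions_args(project_key=None, page=0, page_size=10,
                          search="", sort_by="Name", sort_descending=False,
                          is_built_in="", resource_group=""):
    args = {
        "page": page,
        "page_size": page_size,
        "search": search,
        "sort_by": sort_by,
        "sort_descending": sort_descending,
        "is_built_in": is_built_in,
        "resource_group": resource_group,
    }
    if project_key is not None:
        args["project_key"] = project_key  # omit to use the global tenant_id
    return args

# Second page of results, sorted descending by name:
print(list_permissions_args(page=1, sort_descending=True))
```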

Parameters (JSON Schema)

Name             Required  Description  Default
page             No
search           No
sort_by          No                     Name
page_size        No
is_built_in      No
project_key      No
resource_group   No
sort_descending  No

Output Schema

Name    Required  Description
result  Yes

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions pagination, filtering, and sorting parameters, but doesn't describe authentication requirements, rate limits, error handling, or what happens if the project_key is invalid. For a tool with 8 parameters and no annotations, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It starts with a clear purpose statement, followed by detailed parameter explanations and return information. Every sentence adds value, with no wasted words. It could be slightly more concise by integrating defaults into the parameter list more tightly, but overall it's efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, no annotations, but with an output schema), the description is fairly complete. It explains all parameters in detail and notes the return format ('JSON string with permission list result'), which aligns with the output schema. However, it lacks behavioral context like error cases or performance considerations, which would be helpful for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds substantial meaning beyond the input schema, which has 0% description coverage. It explains each parameter's purpose, default values, and usage (e.g., 'Uses global tenant_id if not provided' for project_key, and default values for others like page, search, sort_by). This fully compensates for the lack of schema descriptions, making parameter semantics clear.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List all permissions for a project.' It specifies the verb ('List') and resource ('permissions for a project'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'get_role_permissions' or 'update_permission', which would be needed for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get_role_permissions' or 'update_permission', nor does it specify prerequisites or contexts for usage. The only implied usage is for listing permissions, but no explicit guidelines are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_roles (B)

List all roles for a project.

Args:
  project_key: Project key (tenant ID). Uses global tenant_id if not provided
  page: Page number (default: 0)
  page_size: Number of items per page (default: 10)
  search: Search filter (default: "")
  sort_by: Field to sort by (default: "Name")
  sort_descending: Sort order (default: false)

Returns: JSON string with role list result

Parameters (JSON Schema)

Name             Required  Description  Default
page             No
search           No
sort_by          No                     Name
page_size        No
project_key      No
sort_descending  No

Output Schema

Name    Required  Description
result  Yes

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions pagination and sorting behavior, which is helpful, but lacks details on permissions required, rate limits, error handling, or whether it's a read-only operation (implied by 'List' but not explicit). This leaves significant gaps for a tool with 6 parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose statement followed by parameter and return sections. It's appropriately sized, but the 'Args' and 'Returns' headings could be more integrated into natural language, and some details like the JSON return format are slightly redundant given the output schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no annotations, but with an output schema), the description is fairly complete. It covers parameter semantics thoroughly and mentions the return format, though it could benefit from more behavioral context like authentication or error handling to achieve a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds substantial meaning beyond the input schema, which has 0% description coverage. It explains each parameter's purpose, defaults, and semantics (e.g., 'project_key: Project key (tenant ID). Uses global tenant_id if not provided'), effectively compensating for the schema's lack of documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'List all roles for a project' with a specific verb ('List') and resource ('roles'), and it specifies the scope ('for a project'). However, it doesn't explicitly differentiate from sibling tools like 'get_role_permissions' or 'create_role', which would require a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get_role_permissions' for detailed role information or 'create_role' for adding roles, nor does it specify prerequisites or exclusions for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_schemas (B)

List schemas from Selise Blocks GraphQL API.

Args:
  project_key: Project key (tenant ID). Uses global tenant_id if not provided
  keyword: Search keyword for filtering schemas
  page_size: Number of items per page (default: 100)
  page_number: Page number for pagination (default: 1)
  sort_descending: Sort in descending order (default: True)
  sort_by: Field to sort by (default: "CreatedDate")

Returns: JSON string with schemas listing result
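A hedged sketch of the corresponding arguments payload. Note that pagination here is 1-based (page_number, default 1) with a default page size of 100, unlike the 0-based page parameter used by list_permissions and list_roles; the builder function is hypothetical, and everything else simply restates the documented defaults.

```python
# Hypothetical arguments builder for list_schemas. Parameter names and
# defaults are taken from the docstring; the function is illustrative.

def list_schemas_args(keyword="", page_size=100, page_number=1,
                      sort_descending=True, sort_by="CreatedDate",
                      project_key=None):
    args = {
        "keyword": keyword,
        "page_size": page_size,
        "page_number": page_number,  # 1-based, unlike list_permissions' page
        "sort_descending": sort_descending,
        "sort_by": sort_by,
    }
    if project_key is not None:
        args["project_key"] = project_key  # omit to use the global tenant_id
    return args

# Newest schemas matching a keyword first (CreatedDate descending):
print(list_schemas_args(keyword="user"))
```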

Parameters (JSON Schema)

Name             Required  Description  Default
keyword          No
sort_by          No                     CreatedDate
page_size        No
page_number      No
project_key      No
sort_descending  No

Output Schema

Name    Required  Description
result  Yes

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions pagination and sorting defaults, which is helpful, but doesn't disclose critical behavioral traits like authentication requirements, rate limits, error conditions, or what happens when project_key is omitted (it says 'Uses global tenant_id' but doesn't explain what that means operationally). For a tool with no annotations, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose statement followed by parameter details and return information. Every sentence adds value, though the 'Args:' and 'Returns:' formatting is slightly verbose. It's appropriately sized for a tool with 6 parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 6 parameters with 0% schema coverage and no annotations, the description does a good job explaining parameters and mentioning the return format. Since there's an output schema, it doesn't need to detail return values. However, it could better address authentication, error handling, and sibling tool differentiation to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It provides clear explanations for all 6 parameters, including defaults and usage notes (e.g., 'Uses global tenant_id if not provided' for project_key). This adds substantial value beyond the bare schema, though it could benefit from more context about valid values or constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('schemas from Selise Blocks GraphQL API'), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'get_schema' (singular) or 'create_schema', which could cause confusion about when to use this list operation versus retrieving a specific schema.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'get_schema' (for retrieving a single schema) and 'create_schema' (for creating new schemas), there's no indication of when listing schemas is appropriate versus fetching a specific one or creating a new one.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

publish_translation (C)

Publish translations for a project, making them live and available for use.

This function publishes all translation keys and their translations for the specified project, making them available in the production environment.

Args: project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with publish operation result including success/failure status

Parameters (JSON Schema)

Name         Required  Description  Default
project_key  No

Output Schema

Name    Required  Description
result  Yes

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses that the tool makes translations 'live and available for use in the production environment,' implying a mutation with side effects. However, it lacks details on permissions required, whether the operation is reversible, potential rate limits, or error conditions. For a mutation tool with zero annotation coverage, this is insufficient behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by elaboration and parameter/return details. It avoids redundancy, though the 'Args' and 'Returns' sections could be integrated more seamlessly. Every sentence adds value, making it efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a mutation with production impact), lack of annotations, and presence of an output schema, the description is moderately complete. It covers purpose and basic parameter semantics but misses key behavioral aspects like permissions or reversibility. The output schema likely handles return values, reducing the need for description there, but overall gaps remain for safe agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds some meaning beyond the input schema: it explains that 'project_key' is a 'Project key (tenant ID)' and specifies a default behavior ('Uses global tenant_id if not provided'). However, with 0% schema description coverage and only 1 parameter, the description compensates partially but doesn't fully clarify the parameter's format or implications (e.g., what a 'tenant ID' entails). The baseline of 4 applies only to tools with no parameters; with one under-documented parameter and partial clarification, a 3 reflects the moderate added value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Publish translations for a project, making them live and available for use.' It specifies the verb ('publish') and resource ('translations for a project'), and distinguishes it from sibling tools like 'get_translation_languages' or 'get_translation_modules' by indicating a write/mutation action. However, it doesn't explicitly differentiate from potential similar tools like 'save_module_keys_with_translations' beyond the scope of 'publishing' vs 'saving'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions that it 'publishes all translation keys and their translations,' but doesn't specify prerequisites (e.g., translations must be saved first), exclusions (e.g., not for drafts), or direct alternatives among siblings. The agent must infer usage from the purpose alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

save_captcha_config (B)

Save CAPTCHA configuration for Google reCAPTCHA or hCaptcha.

Args:
  provider: CAPTCHA provider - "recaptcha" for Google reCAPTCHA or "hcaptcha" for hCaptcha
  site_key: Public site key from CAPTCHA provider console
  secret_key: Private secret key from CAPTCHA provider console
  project_key: Project key (tenant ID). Uses global tenant_id if not provided
  is_enable: Whether to enable the configuration immediately (default: False)

Returns: JSON string with CAPTCHA configuration save result
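For illustration, a minimal arguments payload for registering a Google reCAPTCHA configuration might look like this. The key values are placeholders; only the parameter names, the two provider strings, and the is_enable default come from the docstring.

```python
# Placeholder reCAPTCHA arguments for save_captcha_config. The site and
# secret keys below are fake; real values come from the provider console.

VALID_PROVIDERS = {"recaptcha", "hcaptcha"}

recaptcha_args = {
    "provider": "recaptcha",
    "site_key": "6LcEXAMPLE-site-key",      # public key (placeholder)
    "secret_key": "6LcEXAMPLE-secret-key",  # private key (placeholder)
    "is_enable": False,  # saved but not activated until enabled
    # "project_key" omitted -> the global tenant_id is used
}

assert recaptcha_args["provider"] in VALID_PROVIDERS
print(recaptcha_args)
```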

Parameters (JSON Schema)

Name         Required  Description  Default
provider     Yes
site_key     Yes
is_enable    No
secret_key   Yes
project_key  No

Output Schema

Name    Required  Description
result  Yes

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but offers minimal behavioral insight. It mentions the tool saves configurations and returns a JSON result, but doesn't disclose whether this is a create vs. update operation, permission requirements, error conditions, or side effects. The description doesn't contradict annotations (none exist), but fails to provide adequate behavioral context for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and well-structured with a clear purpose statement followed by organized parameter documentation. The Args/Returns sections are helpful, though the 'Returns' statement could be more specific about the JSON structure. No wasted sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with 5 parameters and no annotations, the description does well on parameter semantics but lacks behavioral context. The existence of an output schema reduces the need to describe return values, but more information about the operation's nature (create/update), permissions, and error handling would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing clear semantic explanations for all 5 parameters. Each parameter gets specific guidance: provider options are enumerated, keys are sourced from provider consoles, project_key defaults to global tenant_id, and is_enable has a default value. This adds substantial value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Save CAPTCHA configuration') and specifies the resource types (Google reCAPTCHA or hCaptcha). It distinguishes from sibling tools like 'list_captcha_configs' and 'update_captcha_status' by focusing on saving new configurations rather than listing or updating status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives is provided. While it's clear this saves configurations, there's no mention of prerequisites, when to use it instead of 'update_captcha_status', or what happens if a configuration already exists.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

save_module_keys_with_translations (B)

Save multiple translation keys with their translations to modules in a project.

Args:
  request: SaveKeysTranslationRequest object containing:
    - ProjectKey: Project key (tenant ID). Uses global tenant_id if not provided
    - Translations: List of SaveKeyTranslationRequest objects, each containing:
      - KeyName: The translation key name
      - ModuleId: The module ID
      - ItemId: The Key ID (existing itemId on update; empty string on creation)
      - IsNewKey: True when creating a new key, False when updating an existing one
      - Resources: List of KeyTranslationResource objects with Value and Culture

Returns: JSON string with batch translation keys creation result including status and results for each key
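The nested request shape is the trickiest part of this tool, so here is a hedged sketch of a payload that creates one new key with two translations. The field names follow the docstring; the module ID, key name, and culture codes are placeholders.

```python
# Hypothetical SaveKeysTranslationRequest payload. Field names follow the
# docstring; IDs, key names, and culture codes are placeholders.

def new_key(key_name, module_id, translations):
    """Build a SaveKeyTranslationRequest for creating a brand-new key."""
    return {
        "KeyName": key_name,
        "ModuleId": module_id,
        "ItemId": "",      # empty string on creation; existing Key ID on update
        "IsNewKey": True,  # False when updating an existing key
        "Resources": [
            {"Value": value, "Culture": culture}
            for culture, value in translations.items()
        ],
    }

request = {
    "ProjectKey": "",  # empty -> falls back to the global tenant_id
    "Translations": [
        new_key("login.title", "module-123",
                {"en-US": "Sign in", "de-DE": "Anmelden"}),
    ],
}
print(request)
```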

Parameters (JSON Schema)

Name     Required  Description  Default
request  Yes

Output Schema

Name    Required  Description
result  Yes

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the tool performs both creation and update operations, but doesn't address critical behavioral aspects like required permissions, whether this is a destructive/mutating operation, error handling, or rate limits. The return format is mentioned but without details on what 'status' and 'results' contain.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized but not optimally structured. The first sentence clearly states the purpose, but the parameter documentation is embedded in the description rather than separated. Some sentences could be more efficient, such as combining the creation/update explanation for IsNewKey.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex mutation tool with 0% schema description coverage but with an output schema, the description provides adequate parameter semantics but lacks behavioral context. It explains what the tool does and the parameter structure, but doesn't address permissions, side effects, or error conditions that would be crucial for safe invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates well by explaining the nested structure of the single 'request' parameter, detailing the SaveKeysTranslationRequest object and its components including ProjectKey fallback behavior, Translations array structure, and the meaning of ItemId and IsNewKey fields. However, it doesn't explain the Culture field values or provide examples of valid inputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Save multiple translation keys with their translations') and target ('to modules in a project'), providing a specific verb+resource combination. However, it doesn't explicitly distinguish this tool from sibling tools like 'get_module_keys' or 'publish_translation', which would be needed for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_module_keys' for reading or 'publish_translation' for publishing. It mentions creation vs update behavior in the parameter documentation, but offers no explicit when-to-use or when-not-to-use context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

set_application_domain (C)

Manually set the application domain and tenant ID for repository creation.

Args:
  domain: Application domain URL
  tenant_id: Tenant ID for the project
  project_name: Project name (optional)
  tenant_group_id: Tenant Group ID (optional)

Returns: JSON string with confirmation

Parameters (JSON Schema)

Name             Required  Description  Default
domain           Yes
tenant_id        Yes
project_name     No
tenant_group_id  No

Output Schema

Name    Required  Description
result  Yes
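For illustration, a minimal set_application_domain call might carry arguments like the sketch below. The domain and tenant ID are invented placeholders, not real values, and the payload shape assumes a generic MCP-style tool invocation.

```python
# Hypothetical argument payload for a set_application_domain call.
# The domain and IDs below are made-up placeholders.
import json

arguments = {
    "domain": "https://myapp.seliseblocks.com",  # required: application domain URL
    "tenant_id": "tenant-1234",                  # required: tenant ID for the project
    # "project_name" and "tenant_group_id" are optional and omitted here
}

request = {"name": "set_application_domain", "arguments": arguments}
print(json.dumps(request, indent=2))
```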
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states this is a 'manual set' operation, implying mutation, but doesn't describe effects (e.g., whether this overrides existing settings, requires specific permissions, or has side effects). It mentions 'repository creation' but doesn't explain the relationship or if this is a prerequisite step. No rate limits, error conditions, or confirmation details are provided beyond the return statement.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by a structured Args/Returns section. Every sentence adds value, with no redundant information. It could be slightly more concise by integrating the Args into the main text, but it's efficient overall.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters with 0% schema coverage and no annotations, the description partially compensates by listing parameters and stating a JSON return. However, it lacks details on mutation behavior, error handling, and integration with sibling tools. The output schema exists, so return values needn't be explained, but the description doesn't address the tool's role in the broader context of repository creation workflows.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It lists all 4 parameters with brief explanations (e.g., 'Application domain URL' for domain), adding meaning beyond the schema's generic titles. However, it doesn't provide format examples, constraints, or how parameters interact (e.g., if tenant_group_id relates to tenant_id). The optional parameters are noted but not elaborated.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Manually set the application domain and tenant ID for repository creation.' It specifies the action (set), resources (application domain and tenant ID), and context (for repository creation). However, it doesn't explicitly differentiate from sibling tools like 'create_project' or 'configure_blocks_data_gateway' that might relate to project setup.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance: it mentions the tool is for 'repository creation' context, but offers no explicit when-to-use rules, prerequisites, or alternatives. It doesn't clarify if this should be used before other tools like 'create_project' or how it interacts with siblings such as 'get_projects' or 'get_global_state'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

set_role_permissions (Grade: C)

Assign or remove permissions from a role.

Args:
  role_slug: Role slug identifier
  add_permissions: List of permission IDs to add to the role (default: [])
  remove_permissions: List of permission IDs to remove from the role (default: [])
  project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with role permission assignment result

Parameters (JSON Schema)

Name                Required  Description  Default
role_slug           Yes
project_key         No
add_permissions     No
remove_permissions  No

Output Schema

Name    Required  Description
result  Yes
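A set_role_permissions call might be shaped like the sketch below. The role slug and permission IDs are illustrative placeholders, and the payload shape assumes a generic MCP-style tool invocation.

```python
# Hypothetical argument payload for a set_role_permissions call.
# Role slug and permission IDs are illustrative placeholders.
import json

arguments = {
    "role_slug": "content-editor",              # required: role slug identifier
    "add_permissions": ["perm-read-articles"],  # permission IDs to grant
    "remove_permissions": [],                   # defaults to [] when omitted
    # "project_key" omitted: the server falls back to the global tenant_id
}

request = {"name": "set_role_permissions", "arguments": arguments}
print(json.dumps(request))
```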
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states the tool modifies permissions (implying mutation) but lacks critical details: required permissions/authorization, whether changes are reversible, rate limits, error conditions, or side effects. The return format is mentioned but not elaborated. This is inadequate for a mutation tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. The purpose is stated upfront, followed by a clear 'Args' and 'Returns' section. Every sentence adds value, though the 'Returns' line could be more specific. No redundant information is present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (mutation with 4 parameters), lack of annotations, and presence of an output schema, the description is partially complete. It covers parameters and return format at a high level but misses behavioral context (e.g., auth needs, error handling). The output schema existence reduces the need to detail return values, but overall completeness is moderate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It lists all 4 parameters with brief explanations, clarifying that 'add_permissions' and 'remove_permissions' default to empty lists and 'project_key' uses a global tenant if omitted. However, it doesn't explain parameter formats (e.g., what permission IDs look like) or interactions, leaving gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Assign or remove permissions from a role.' It specifies the verb ('assign or remove') and resource ('permissions from a role'), making the function unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'create_role' or 'update_permission', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., existing roles/permissions), exclusions, or comparisons to sibling tools like 'create_role' or 'update_permission'. The agent must infer usage from the purpose alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_captcha_status (Grade: B)

Enable or disable a CAPTCHA configuration.

Args:
  item_id: The ID of the CAPTCHA configuration to update
  is_enable: True to enable, False to disable the configuration
  project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with status update result

Parameters (JSON Schema)

Name         Required  Description  Default
item_id      Yes
is_enable    Yes
project_key  No

Output Schema

Name    Required  Description
result  Yes
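An update_captcha_status call might look like the sketch below. The item ID is a made-up placeholder, and the payload shape assumes a generic MCP-style tool invocation.

```python
# Hypothetical argument payload for an update_captcha_status call.
# The item_id is a made-up placeholder.
import json

arguments = {
    "item_id": "64f0c0ffee0badc0ffee1234",  # placeholder CAPTCHA config ID
    "is_enable": True,                      # True enables, False disables
    # "project_key" omitted: the server falls back to the global tenant_id
}

request = {"name": "update_captcha_status", "arguments": arguments}
print(json.dumps(request))
```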
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is an update/enable/disable operation but doesn't mention required permissions, whether changes are reversible, rate limits, or what happens if the configuration doesn't exist. The description adds minimal behavioral context beyond the basic operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose. The Args and Returns sections are clearly organized. Every sentence adds value with no wasted words, making it easy to scan and understand.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with no annotations, the description is moderately complete. It explains the purpose and parameters well, and the presence of an output schema means return values don't need explanation. However, it lacks important behavioral context like permissions, error conditions, and relationships to sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It provides clear semantic meaning for all three parameters: 'item_id' identifies the configuration, 'is_enable' controls enable/disable state, and 'project_key' specifies the tenant with a default behavior. This adds substantial value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Enable or disable a CAPTCHA configuration.' This is a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'save_captcha_config' or 'list_captcha_configs' beyond the update/enable/disable action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing CAPTCHA configuration), when not to use it, or how it differs from related tools like 'save_captcha_config' or 'list_captcha_configs'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_permission (Grade: B)

Update an existing permission.

Args:
  item_id: The ID of the permission to update
  name: Permission name
  description: Permission description
  resource: Resource name (arbitrary string)
  resource_group: Resource group name (arbitrary string)
  tags: List of action tags (e.g., ["create", "read", "update", "delete"])
  project_key: Project key (tenant ID). Uses global tenant_id if not provided
  type: Permission type (default: 3 for "Data protection")
  dependent_permissions: List of dependent permission IDs (default: [])
  is_built_in: Whether it's a built-in permission (default: false)

Returns: JSON string with permission update result

Parameters (JSON Schema)

Name                   Required  Description  Default
name                   Yes
tags                   Yes
type                   No
item_id                Yes
resource               Yes
description            Yes
is_built_in            No
project_key            No
resource_group         Yes
dependent_permissions  No

Output Schema

Name    Required  Description
result  Yes
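An update_permission call using the documented defaults might be shaped like the sketch below. The permission ID, names, and resource strings are invented placeholders, and the payload shape assumes a generic MCP-style tool invocation.

```python
# Hypothetical argument payload for an update_permission call, using the
# documented defaults (type 3, no dependent permissions, not built-in).
# All identifiers and names are placeholders.
import json

arguments = {
    "item_id": "perm-0000",                  # placeholder permission ID
    "name": "Article Management",
    "description": "CRUD access to articles",
    "resource": "articles",                  # arbitrary resource name
    "resource_group": "cms",                 # arbitrary resource group
    "tags": ["create", "read", "update", "delete"],
    "type": 3,                               # default: "Data protection"
}

request = {"name": "update_permission", "arguments": arguments}
print(json.dumps(request))
```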
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but offers minimal behavioral insight. It states it's an update operation (implying mutation) and mentions a JSON return, but doesn't cover critical aspects like authentication needs, side effects, error handling, or rate limits. The agent must infer behavior from the tool name alone.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a brief purpose statement followed by organized Args and Returns sections. Every sentence adds value, though the initial line is somewhat redundant with the tool name. It efficiently conveys parameter details without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 10 parameters with 0% schema coverage and no annotations, the description does well on parameters but lacks broader context. The presence of an output schema reduces the need to explain returns, but gaps remain in usage guidelines and behavioral transparency, making it adequate but incomplete for a mutation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description compensates by detailing all 10 parameters with clear semantics, including defaults and examples (e.g., tags: ['create', 'read', 'update', 'delete'], type default: 3). This adds significant value beyond the bare schema, though some nuances like 'arbitrary string' for resource/resource_group could be more precise.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Update') and resource ('an existing permission'), making the purpose unambiguous. However, it doesn't differentiate from sibling tools like 'create_permission' or 'list_permissions' beyond the basic verb distinction, missing specific scope or constraint comparisons.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'create_permission' or 'list_permissions'. It lacks context about prerequisites (e.g., needing an existing permission ID), typical workflows, or error conditions, leaving usage entirely implicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_schema_fields (Grade: B)

Update schema fields (step 2 of schema field management).

Args:
  schema_id: The ID of the schema to update
  fields: List of SchemaField objects for the schema (existing + new). Each SchemaField has Name (str), Type (str), and IsArray (bool) properties. Reserved fields (ItemId, CreatedDate, LastUpdatedDate, CreatedBy, Language, LastUpdatedBy, OrganizationIds, Tags) are automatically filtered out.
  project_key: Project key (tenant ID). Uses global tenant_id if not provided

Returns: JSON string with update result

Parameters (JSON Schema)

Name         Required  Description  Default
fields       Yes
schema_id    Yes
project_key  No

Output Schema

Name    Required  Description
result  Yes
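The fields payload for an update_schema_fields call might look like the sketch below. The schema ID and field names are invented placeholders; per the description above, reserved fields such as ItemId or CreatedDate would be filtered out server-side, so only user-defined fields are listed.

```python
# Hypothetical 'fields' payload for an update_schema_fields call.
# Schema ID and field names are placeholders; only user-defined fields
# are included, since reserved fields are filtered out by the server.
import json

fields = [
    {"Name": "Title", "Type": "string", "IsArray": False},
    {"Name": "Tags", "Type": "string", "IsArray": True},
]
arguments = {"schema_id": "schema-0001", "fields": fields}

request = {"name": "update_schema_fields", "arguments": arguments}
print(json.dumps(request))
```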
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but offers limited behavioral insight. It mentions that reserved fields are 'automatically filtered out', which is useful context about the tool's behavior. However, it lacks details on permissions needed, error handling, or side effects, which are critical for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose statement, organized Args and Returns sections, and no redundant information. It could be slightly more concise by integrating the reserved fields note into the main text, but it's generally efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (mutation with 3 parameters) and no annotations, the description provides basic purpose and parameter details but lacks behavioral context like permissions or error cases. The output schema exists, so return values needn't be explained, but overall completeness is adequate with clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It effectively explains the 'fields' parameter by detailing SchemaField properties and noting reserved fields are filtered out. The 'schema_id' and 'project_key' are mentioned but with minimal elaboration, though the overall param guidance is strong given the coverage gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Update schema fields') and specifies it's 'step 2 of schema field management', which helps distinguish it from sibling tools like 'create_schema' or 'finalize_schema'. However, it doesn't explicitly differentiate from 'update_permission' or other update operations, keeping it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance, mentioning it's 'step 2 of schema field management' which implies a sequence but doesn't specify when to use this tool versus alternatives like 'create_schema' or 'finalize_schema'. No explicit when/when-not instructions or prerequisites are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
