Cirra AI Salesforce Admin MCP Server

Server Details

Comprehensive Salesforce administration and data management capabilities

- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
40 tools

cirra_ai_init (Grade A; Read-only, Idempotent)
IMPORTANT: call cirra_ai_init before calling any other tools of the Cirra AI MCP server. You MUST carefully read the instructions returned from this tool call before proceeding
| Name | Required | Description | Default |
|---|---|---|---|
| scope | No | (optional) Controls whether the provided `cirra_ai_team` and/or `sf_user` apply only to this chat (`thread`) or are saved as new defaults for future (`global`). Only use 'thread' if explicitly instructed by the user. Leave blank if not switching team or connection. | global |
| sf_user | No | (optional) A Salesforce username identifying the Salesforce connection to use. Omit this to use the current default connection. | |
| cirra_ai_team | No | (optional) The Cirra AI team to use. Use the literal value `__Personal__` to select the user's personal account. Omit this to use the previously selected default team (if any). | |
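As a sketch of how an agent might issue this initialization call, the following builds an MCP `tools/call` request (JSON-RPC 2.0 framing per the MCP specification; the exact wire format the Cirra AI server expects is not shown on this page). The only argument value used is the documented literal `__Personal__`.

```python
import json

# Illustrative MCP tools/call request for cirra_ai_init. This is a sketch of
# the JSON-RPC envelope, not the server's verified wire format. Selecting the
# personal account via the documented literal __Personal__; omitting the
# argument entirely would keep the previously selected default team.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "cirra_ai_init",
        "arguments": {"cirra_ai_team": "__Personal__"},
    },
}
print(json.dumps(request, indent=2))
```

Per the description above, the instructions returned from this call must be read before any other tool is used.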
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true. The description adds valuable context about this being a session initialization tool that returns instructions, which helps the agent understand this is a setup/configuration step rather than a data operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise with only two sentences, both carrying critical information. The 'IMPORTANT' prefix effectively signals priority, and the structure is front-loaded with the mandatory requirement followed by the post-call instruction.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an initialization tool with comprehensive annotations and schema coverage, the description provides sufficient context about its prerequisite nature and the need to process returned instructions. The main gap is not explaining what kind of instructions will be returned, but given the annotations and schema completeness, this is a minor omission.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters are well-documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema, which is acceptable given the comprehensive schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as an initialization step that must be called before any other tools. It specifies the verb 'call' and the resource 'Cirra AI MCP server', and distinguishes itself from siblings by being a prerequisite rather than a functional operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool ('before calling any other tools') and provides clear instructions on what to do after calling it ('read the instructions returned'). It distinguishes itself from all sibling tools by being a mandatory initialization step.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch_more (Grade A; Read-only, Idempotent)
Fetch the next page of a large tool response. Use the nextCursor from _pagination in a previous response. This tool loads data into the context window — prefer the artifact download URL when available.
| Name | Required | Description | Default |
|---|---|---|---|
| cursor | Yes | Pagination cursor from _pagination.nextCursor | |
| pageSize | No | (optional) Number of records to return in this page. If omitted, uses the page size from the original query. | |
| artifactId | Yes | Artifact ID from instructions.artifactId | |
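A pagination loop over `fetch_more` might look like the sketch below. `call_tool` is a hypothetical client helper standing in for a real MCP call; the `_pagination.nextCursor` and `instructions.artifactId` field names come from the tool description and schema above.

```python
# Sketch of draining a paginated response via fetch_more. 'call_tool' is a
# hypothetical stand-in for an MCP client that issues tools/call requests.
def fetch_all_pages(call_tool, first_response):
    """Follow _pagination.nextCursor until the response has no more pages."""
    pages = [first_response]
    pagination = first_response.get("_pagination", {})
    while pagination.get("nextCursor"):
        page = call_tool("fetch_more", {
            "cursor": pagination["nextCursor"],
            "artifactId": first_response["instructions"]["artifactId"],
        })
        pages.append(page)
        pagination = page.get("_pagination", {})
    return pages

# Usage with a stub client and made-up cursors, for illustration only.
fake_pages = {
    "c1": {"records": [3, 4], "_pagination": {"nextCursor": "c2"}},
    "c2": {"records": [5], "_pagination": {}},
}

def call_tool(name, args):
    return fake_pages[args["cursor"]]

first = {
    "records": [1, 2],
    "_pagination": {"nextCursor": "c1"},
    "instructions": {"artifactId": "art-123"},
}
pages = fetch_all_pages(call_tool, first)
```

Note the description's caveat still applies: each page lands in the context window, so the artifact download URL is preferable when one is available.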
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent/destructive hints, so safety is covered. Description adds valuable behavioral context: 'loads data into the context window' explains the side effect/mechanism, and clarifies the cursor source. Does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: (1) purpose, (2) required input source, (3) alternative preference. Front-loaded with action verb 'Fetch'. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-purpose pagination helper with 100% schema coverage and clear annotations, the description is complete. It explains the mechanism (context window loading), prerequisites (previous response cursor), and alternatives (artifact URL) without needing output schema details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3). Description adds semantic context by linking 'cursor' to '_pagination.nextCursor' and 'artifactId' to the 'artifact download URL' alternative, helping the agent understand the relationship between parameters and the pagination workflow.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Fetch the next page') and resource ('large tool response'). Clearly distinguishes from CRUD siblings (sobject_create, soql_query, etc.) by identifying itself as a pagination utility rather than a Salesforce operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states prerequisite ('Use the nextCursor from _pagination in a previous response') and provides clear alternative guidance ('prefer the artifact download URL when available'), helping the agent decide between pagination and direct download.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
group_create (Grade B)
Create a new public group, queue or role in Salesforce
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | The name for the group | |
| type | Yes | The type of group to create | |
| label | No | The label for the group. Will be generated from the name if not provided. | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| properties | No | Additional properties for the group | |
| description | Yes | The description for the group | |
| supportedObjects | No | The objects that the queue can access | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=false (mutation), openWorldHint=true, idempotentHint=false, and destructiveHint=false. The description adds minimal behavioral context beyond this, mentioning 'public' groups but not explaining what 'public' means in Salesforce context, permission requirements, or rate limits. It doesn't contradict annotations, but adds limited value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's appropriately sized for a creation tool and front-loads the essential action and resource. Every word earns its place without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no output schema, the description is minimally complete. Annotations cover safety profile (non-destructive mutation), but the description doesn't explain what happens after creation (e.g., returns group ID, error conditions). Given the 7-parameter complexity and Salesforce context, more guidance about 'public' nature and creation outcomes would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all 7 parameters thoroughly. The description adds no parameter-specific information beyond what's in the schema (e.g., no clarification about 'public' vs other group types, no examples of 'properties' usage). Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create') and resource ('new public group, queue or role in Salesforce'), providing specific verb+resource pairing. However, it doesn't explicitly differentiate from sibling tools like 'group_update' or 'user_create' that also create Salesforce entities, missing full sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'group_update' for modifications or 'user_create' for different entity types. There's no mention of prerequisites, exclusions, or specific contexts where this tool is preferred over other creation tools in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
group_members (Grade A; Destructive)
Add or remove users from public groups, queues, or roles in Salesforce.
| Name | Required | Description | Default |
|---|---|---|---|
| users | Yes | The names, usernames or IDs of the users | |
| groups | Yes | The names, labels or IDs of the public groups, queues or roles | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| operation | Yes | The operation to perform: 'add' to add users to groups, 'remove' to remove users from groups | |
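A hypothetical helper for building this tool's arguments, enforcing the `operation` enum from the schema, might look like this (the user and group values shown are invented examples):

```python
# Hypothetical argument builder for group_members. Enforces the schema's
# operation enum ('add' or 'remove') before the payload leaves the client.
def group_members_args(operation, users, groups, sf_user=None):
    if operation not in ("add", "remove"):
        raise ValueError("operation must be 'add' or 'remove'")
    args = {"operation": operation, "users": users, "groups": groups}
    if sf_user is not None:
        # Optional: target a specific connection instead of the default.
        args["sf_user"] = sf_user
    return args

# Example: add one user to one queue (illustrative names).
args = group_members_args("add", ["jdoe@example.com"], ["Support Queue"])
```

Since the annotations mark this tool destructive (removal is not trivially reversible), validating client-side before calling is a reasonable precaution.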
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description aligns with annotations (mentions 'remove' matching destructiveHint=true) and adds specific entity context (queues, roles) beyond the tool name. However, it omits behavioral details beyond annotations, such as failure modes for invalid user IDs or whether partial successes occur.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single efficient sentence with zero waste. Every element earns its place: the dual operation verbs, target resources, specific entity types (distinguishing groups/queues/roles), and scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich schema (100% coverage) and comprehensive annotations (destructive, non-idempotent hints), the description provides sufficient context for tool selection. It appropriately handles the complexity of batch membership operations without requiring output schema details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description doesn't add syntax details or format examples for parameters (e.g., ID formats), but accurately reflects the dual operation nature (add/remove) specified in the operation enum.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Add or remove), the target resource (users), the destination entities (public groups, queues, or roles), and the system (Salesforce). This effectively distinguishes it from sibling tools like group_create or group_update which manage the groups themselves rather than membership.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying the operation type (membership management) and target entity types, but lacks explicit guidance on when to choose this over user_update or permission_set_assignments, and doesn't mention prerequisites like admin permissions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
group_update (Grade B; Idempotent)
Update a public group or queue in Salesforce
| Name | Required | Description | Default |
|---|---|---|---|
| group | Yes | The name or ID of the public group or queue. | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| properties | Yes | Properties to update | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a mutable (readOnlyHint: false), idempotent, non-destructive operation. The description adds that it updates 'public groups or queues,' which provides some context, but doesn't elaborate on behavioral aspects like authentication needs, rate limits, or what specific properties can be updated beyond what annotations cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and wastes no space, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has annotations covering key behavioral traits (mutable, idempotent, non-destructive) and full schema coverage, the description is minimally adequate. However, without an output schema, it doesn't explain what the update returns, and as a mutation tool, more context on permissions or side effects would be helpful for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description doesn't add any additional meaning about parameters beyond implying the 'group' parameter identifies the target and 'properties' are updatable fields, which is already clear from the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update') and resource ('a public group or queue in Salesforce'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'group_create' or 'group_members', which would require a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'group_create' or 'group_members', nor does it mention prerequisites or context for updating groups. It simply states what the tool does without usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
link_build (Grade A; Read-only, Idempotent)
Build Salesforce links for setup pages. Always use this tool when user requests a setup page link
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | The type of the link to build | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. If not provided, the default connection will be used | |
| setupConfig | No | Used with `setup` types. Describes the type of setup page to build link for | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds that it 'builds' links, which aligns with read-only behavior (no data modification), and implies it's for setup pages, adding context about the target. However, it doesn't disclose additional traits like rate limits, authentication needs, or output format, which could be useful given no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded: two sentences that directly state the purpose and usage rule without any fluff. Every sentence earns its place by providing essential information efficiently, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, nested objects, no output schema) and rich annotations (covering read-only, idempotent, etc.), the description is reasonably complete. It clarifies the tool's role for setup pages and when to use it, but could benefit from mentioning the output (e.g., that it returns a URL) since there's no output schema, leaving some ambiguity about the result.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the 3 parameters (type, sf_user, setupConfig). The description mentions 'setup pages,' which relates to the 'type' parameter's const value 'setup' and the setupConfig object, but adds no new syntax or format details beyond what the schema provides. With high schema coverage, a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Build Salesforce links for setup pages.' It specifies the verb ('Build') and resource ('Salesforce links for setup pages'), making the function understandable. However, it doesn't explicitly differentiate from sibling tools like 'sf_connection_manage' or 'sobject_describe' that might also involve Salesforce navigation or metadata operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Always use this tool when user requests a setup page link.' This clearly defines when to use the tool, though it doesn't mention when not to use it or name specific alternatives among siblings. The directive is strong and unambiguous for its intended context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
metadata_create (Grade B)
Create one or more Salesforce metadata elements of a specified type, such as custom fields, validation rules, or custom labels
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | The metadata type. | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| metadata | Yes | The array of metadata records to create. Each record must include at least a 'fullName' property | |
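Because each record in the `metadata` array must carry a `fullName`, a client can guard the payload before calling. The helper and the custom-label record below are hypothetical illustrations:

```python
# Hypothetical guard for metadata_create arguments: every record in the
# metadata array must include a 'fullName', per the schema above.
def metadata_create_args(metadata_type, records):
    for record in records:
        if "fullName" not in record:
            raise ValueError("every metadata record needs a 'fullName'")
    return {"type": metadata_type, "metadata": records}

# Example: creating one custom label (illustrative field values).
args = metadata_create_args("CustomLabel", [
    {"fullName": "Welcome_Message", "value": "Hello"},
])
```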
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=false, destructiveHint=false, openWorldHint=true, and idempotentHint=false. The description adds context about creating 'one or more' elements and specifying metadata type, but doesn't disclose behavioral traits like authentication needs, rate limits, transaction boundaries, or what happens on partial failures. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence is efficient and front-loaded with the core purpose. No wasted words, though it could be slightly more structured by separating scope from examples.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with no output schema and annotations covering safety, the description is adequate but lacks context on return values, error handling, or Salesforce-specific constraints. It doesn't fully compensate for the absence of output schema, leaving gaps in understanding what happens after invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds minimal semantics by implying 'type' specifies metadata categories and 'metadata' contains records with 'fullName', but doesn't elaborate on format, constraints, or examples beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create') and resource ('Salesforce metadata elements') with specific examples (custom fields, validation rules, custom labels). It distinguishes from siblings like metadata_delete, metadata_update, and metadata_read by specifying creation, but doesn't explicitly contrast with metadata_describe or metadata_list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like sobject_create, record_type_create, or value_set_create. The description mentions metadata types but doesn't specify prerequisites, dependencies, or when to choose this over other creation tools in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
metadata_delete (Grade A; Destructive)
Delete one or more Salesforce metadata elements of a specified type, such as custom fields, validation rules, or custom labels. Maximum is 10 per batch - DO NOT EXCEED.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | The metadata type. | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| fullNames | Yes | The full name(s) of the metadata element(s) to delete. | |
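The description caps each call at 10 elements. A client deleting more than that would need to chunk the list into compliant batches, as in this sketch (`call_tool` is a hypothetical MCP client helper, and the field names come from the schema above):

```python
# Sketch of respecting metadata_delete's 10-per-batch cap by chunking.
# 'call_tool' is a hypothetical stand-in for an MCP tools/call client.
MAX_BATCH = 10

def delete_in_batches(call_tool, metadata_type, full_names):
    results = []
    for start in range(0, len(full_names), MAX_BATCH):
        batch = full_names[start:start + MAX_BATCH]
        results.append(call_tool("metadata_delete", {
            "type": metadata_type,
            "fullNames": batch,
        }))
    return results

# Usage with a stub client: 23 invented field names yield three calls
# of sizes 10, 10, and 3.
calls = []
def call_tool(name, args):
    calls.append(len(args["fullNames"]))
    return {"deleted": args["fullNames"]}

names = [f"Example_Field_{i}__c" for i in range(23)]
delete_in_batches(call_tool, "CustomField", names)
```

Given the destructive annotation, confirming the `fullNames` list with the user before chunked deletion is prudent.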
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructiveHint=true and readOnlyHint=false, but the description adds valuable behavioral context beyond annotations: it specifies the batch limit of 10 elements and provides concrete examples of metadata types. This helps the agent understand practical constraints and typical use cases that annotations alone don't convey.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place: the first establishes purpose and scope, the second provides a critical constraint. It's front-loaded with the core functionality and wastes no words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema, the description provides good context about what gets deleted and batch limits. However, it doesn't mention important behavioral aspects like whether deletions are reversible, what permissions are required, or what happens when the tool fails. Given the destructive nature, more cautionary guidance would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all three parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. It mentions metadata types generally but doesn't provide additional semantic context about the 'type' or 'fullNames' parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete'), the resource ('Salesforce metadata elements'), and provides specific examples of what can be deleted ('custom fields, validation rules, or custom labels'). It distinguishes itself from sibling tools like metadata_create, metadata_update, and metadata_read by specifying the destructive delete operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool (deleting metadata elements) and includes an important constraint ('Maximum is 10 per batch - DO NOT EXCEED'). However, it doesn't explicitly mention when NOT to use it or name specific alternatives from the sibling list, such as metadata_update for modifications instead of deletions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
metadata_describe (Grade A; Read-only, Idempotent)
List all Salesforce metadata types available in the org and their properties, such as directory name, suffix, and child objects
| Name | Required | Description | Default |
|---|---|---|---|
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| verbose | Yes | If false or missing, return only the names of the metadata types. This is the default. If true, return additional properties for each type. | |
| pageSize | No | (optional) Maximum number of records to return per page when the response requires pagination. If omitted, page size is calculated automatically. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true, covering safety and idempotence. The description adds value by specifying the scope ('all Salesforce metadata types') and the properties returned ('directory name, suffix, and child objects'), which are not covered by annotations. It does not mention pagination behavior or rate limits, but annotations provide a solid baseline.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('List all Salesforce metadata types') and adds necessary detail without waste. Every word earns its place, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (metadata discovery), rich annotations (covering safety and behavior), and 100% schema coverage, the description is mostly complete. It lacks details on output format (no output schema) and pagination handling, but annotations and schema provide strong support. For a read-only, idempotent tool, this is sufficient but not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all three parameters (sf_user, verbose, pageSize). The description does not add any parameter-specific details beyond what the schema provides, such as clarifying the 'verbose' parameter's effect on output format. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List all Salesforce metadata types'), the resource ('available in the org'), and the scope ('and their properties, such as directory name, suffix, and child objects'). It distinguishes itself from sibling tools like 'metadata_list' (which likely lists instances) and 'metadata_read' (which reads specific metadata).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for discovering metadata types and their properties, which is clear in context. However, it does not explicitly state when to use this tool versus alternatives like 'tooling_api_describe' or 'sobject_describe', nor does it provide exclusions or prerequisites for use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
metadata_listBRead-onlyIdempotentInspect
List Salesforce metadata elements of a specific type, such as flows, custom objects, or reports, optionally scoped by folder
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | The metadata type | |
| folder | No | The folder name (optional). If not provided, all folders are searched. | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| pageSize | No | (optional) Maximum number of records to return per page when the response requires pagination. If omitted, page size is calculated automatically. |
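A sketch of a metadata_list argument payload, again as a plain Python dict. Only `type` is required; the optional-key stripping below is an illustrative client-side pattern, not something the server mandates.

```python
# Illustrative metadata_list arguments. Only "type" is required;
# omitting "folder" searches all folders.
raw = {
    "type": "Report",   # required: the metadata type
    "folder": None,     # unset: search every folder
    "pageSize": None,   # unset: automatic page sizing
}
# Strip unset optional keys before sending the call.
args = {k: v for k, v in raw.items() if v is not None}
```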
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide clear hints (readOnlyHint: true, openWorldHint: true, idempotentHint: true, destructiveHint: false), covering safety and idempotency. The description adds minimal behavioral context beyond this, mentioning optional folder scoping but not detailing pagination behavior, rate limits, or authentication needs. It doesn't contradict annotations, so a baseline score is appropriate given the annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('List Salesforce metadata elements') and adds optional details ('optionally scoped by folder'). There's no wasted verbiage, making it appropriately concise, though it could be slightly more structured for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, no output schema) and rich annotations, the description is minimally adequate. It covers the basic purpose but lacks details on output format, error handling, or integration with sibling tools. The annotations provide safety context, but the description doesn't fully compensate for the absence of an output schema or deeper usage guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters well-documented in the input schema. The description adds little beyond this, only implying folder scoping without explaining parameter interactions or semantics. Since the schema does the heavy lifting, the baseline score of 3 is justified, as the description doesn't significantly enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List Salesforce metadata elements') and resource type ('of a specific type, such as flows, custom objects, or reports'), providing a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'sobjects_list' or 'tooling_api_search' that might also list metadata, though the mention of 'Salesforce metadata elements' provides some implicit distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'optionally scoped by folder,' which suggests this tool is for listing metadata with folder-based filtering. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'metadata_describe' or 'sobjects_list,' nor does it mention any prerequisites or exclusions, leaving usage somewhat ambiguous.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
metadata_readBRead-onlyIdempotentInspect
Read full details for one or more Salesforce metadata elements of a specified type, such as flows, validation rules, or approval processes
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | The metadata type | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| pageSize | No | (optional) Maximum number of records to return per page when the response requires pagination. If omitted, page size is calculated automatically. | |
| fullNames | Yes | The full name(s) of the metadata elements to read. |
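A sketch of a metadata_read payload showing the batch shape of `fullNames`. The object-qualified name format is an assumption drawn from Salesforce Metadata API conventions, not stated in the table above.

```python
# Illustrative metadata_read arguments: one or more fullNames of a
# single metadata type are fetched in one call.
args = {
    "type": "ValidationRule",
    # Object-qualified full names (format assumed from Salesforce
    # Metadata API conventions for object-scoped metadata).
    "fullNames": ["Account.Require_Phone", "Account.Require_Industry"],
}
```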
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds minimal behavioral context beyond this, such as the ability to handle 'one or more' elements, but lacks details on pagination behavior, error handling, or response format. With annotations providing strong coverage, the description adds some value but not rich behavioral insights.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. It could be slightly improved by structuring with bullet points or examples, but it avoids redundancy and waste, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, no output schema), annotations provide good safety and idempotency coverage, and the schema fully describes inputs. However, the description lacks details on output format, error cases, or pagination behavior (implied by the 'pageSize' parameter), leaving gaps in contextual understanding. It's adequate but not fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema itself. The description implies the tool reads metadata based on type and full names, aligning with the schema, but doesn't add significant semantic details beyond what the schema provides. For example, it doesn't clarify the format of 'fullNames' or examples of valid 'type' values. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Read full details') and resource ('Salesforce metadata elements'), specifying the scope ('one or more' elements) and providing examples of metadata types ('flows, validation rules, or approval processes'). However, it doesn't explicitly differentiate from sibling tools like 'metadata_describe' or 'metadata_list', which likely serve related but distinct purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools such as 'metadata_describe' or 'metadata_list', nor does it specify prerequisites, exclusions, or appropriate contexts for usage. This leaves the agent without clear direction on tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
metadata_updateAIdempotentInspect
Update one or more Salesforce metadata elements of a specified type, such as flows, validation rules, or custom labels
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | The metadata type. | |
| upsert | No | Whether to upsert the metadata. If true, the metadata will be upserted (created if it does not yet exist). If false, the metadata will be updated. | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| metadata | Yes | The array of metadata records to update. Each record must include at least a 'fullName' property and other required fields for the type. See instructions for details. |
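A sketch of a metadata_update payload with `upsert` enabled. Each record must carry at least `fullName` per the table above; the remaining CustomLabel fields follow common Metadata API shapes and are assumptions for illustration only.

```python
# Illustrative metadata_update arguments for a CustomLabel upsert.
args = {
    "type": "CustomLabel",
    "upsert": True,  # create the label if it does not yet exist
    "metadata": [
        {
            "fullName": "Welcome_Message",  # required on every record
            "value": "Welcome to Acme!",
            "language": "en_US",
            "protected": False,
        }
    ],
}
# Every record must include a fullName, as the schema requires.
assert all("fullName" in rec for rec in args["metadata"])
```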
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=false, destructiveHint=false, idempotentHint=true, and openWorldHint=true. The description adds context about supporting multiple metadata elements and examples of types (flows, validation rules, custom labels), which helps the agent understand the tool's scope beyond the annotations. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose. It could be slightly more structured by explicitly mentioning the upsert capability, but it avoids redundancy and wastes no words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no output schema, the description adequately covers the what and how, supported by rich annotations. However, it lacks details on error handling, response format, or dependencies (e.g., connection requirements), leaving some gaps for the agent to infer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all 4 parameters. The description mentions 'type' and 'metadata' implicitly but adds no additional semantic details beyond what the schema provides, such as format examples or constraints. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update'), target ('Salesforce metadata elements'), and scope ('one or more'), with examples of metadata types. It distinguishes from siblings like metadata_create, metadata_delete, and metadata_read by specifying update operations, but doesn't explicitly contrast with metadata_describe or metadata_list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for updating existing metadata elements, with the 'upsert' parameter hinting at creation scenarios. However, it lacks explicit guidance on when to choose this over metadata_create or metadata_delete, or prerequisites like required permissions or connection setup.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
page_layout_cloneAInspect
Create a new page layout by cloning an existing layout in Salesforce
| Name | Required | Description | Default |
|---|---|---|---|
| layout | Yes | The name or ID of the existing layout to clone. ID is preferred if you have it. If using a name, you must also provide the sObject | |
| sObject | No | The name of the sObject to which the layout applies. Not needed if you have provided the layout ID | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| namespace | No | The namespace prefix of the existing layout to clone. Not needed if you have provided the layout ID or if the layout has no namespace | |
| newLayoutName | Yes | Name of the new (cloned) page layout to create |
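The parameter table allows two ways to identify the source layout, which the two hypothetical payloads below contrast: by ID alone, or by name plus sObject. The ID value is a placeholder, not a real record.

```python
# Identify the layout to clone by ID: sObject and namespace not needed.
by_id = {
    "layout": "00h000000000001AAA",  # hypothetical layout ID
    "newLayoutName": "Account Layout - Sales",
}
# Identify it by name: the name form also requires the sObject.
by_name = {
    "layout": "Account Layout",
    "sObject": "Account",
    "newLayoutName": "Account Layout - Sales",
}
```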
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a write operation (readOnlyHint: false) that's not idempotent or destructive. The description adds minimal behavioral context beyond this, stating it 'creates a new page layout' which aligns with the annotations. It doesn't provide additional details about permissions needed, rate limits, or what happens if the new layout name already exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that states exactly what the tool does without unnecessary words. It's front-loaded with the core functionality and doesn't waste space on information already available in other fields like the annotations or input schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a write operation with no output schema, the description adequately explains what the tool does but lacks important contextual information. It doesn't describe what gets returned (e.g., the new layout ID), error conditions, or Salesforce-specific constraints. The annotations provide basic safety information, but more operational context would be helpful for this mutation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already thoroughly documents all 5 parameters. The description adds no parameter-specific information beyond what's in the schema, so it meets the baseline expectation but doesn't provide additional semantic context about how parameters interact or typical usage patterns.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('create a new page layout by cloning an existing layout') and resource ('in Salesforce'), making the purpose immediately apparent. It distinguishes itself from sibling tools like 'page_layout_update' by focusing on cloning rather than updating existing layouts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'page_layout_update' or 'metadata_create'. It doesn't mention prerequisites, constraints, or typical scenarios where cloning a layout would be preferred over creating one from scratch or modifying an existing one.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
page_layout_updateADestructiveInspect
Update or rename a page layout in Salesforce. Supports modifying layout contents via JSON Patch, renaming a layout, or both in a single call
| Name | Required | Description | Default |
|---|---|---|---|
| patch | No | The array of modifications that need to be applied. This should be provided in the `JSON Patch` format. At least one of patch or newLayoutName must be provided | |
| layout | Yes | The name or ID of the layout. ID is preferred if you have it. If using a name, you must also provide the sObject | |
| sObject | No | The name of the sObject to which the layout applies. Not needed if you have provided the layout ID | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| namespace | No | The namespace prefix of the Page Layout. Not needed if you have provided the layout ID or if the layout has no namespace | |
| newLayoutName | No | The new name for the layout. When provided, the layout will be renamed (via clone and delete). Can be combined with patch to rename and update in a single call |
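A sketch of a combined rename-and-update call. The `patch` array follows the JSON Patch (RFC 6902) format the parameter table names; the `/label`-style paths are assumptions about the layout's JSON representation, shown only to illustrate the patch shape.

```python
# A minimal JSON Patch (RFC 6902) array for the "patch" parameter.
# Paths into the layout document are illustrative assumptions.
patch = [
    {"op": "replace", "path": "/label", "value": "Sales Layout"},
    {"op": "remove", "path": "/layoutSections/3"},
]
# Rename and modify in a single call, as the description allows.
args = {
    "layout": "Account-Account Layout",
    "patch": patch,
    "newLayoutName": "Account Layout v2",  # rename via clone and delete
}
```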
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true, readOnlyHint=false, openWorldHint=true, and idempotentHint=false. The description adds context by specifying that renaming occurs 'via clone and delete,' clarifying the destructive nature. It also mentions support for JSON Patch operations, which provides additional behavioral insight beyond annotations, though it doesn't detail error handling or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, stating the core purpose in the first sentence and adding operational details in the second. Both sentences earn their place by clarifying scope and methods, with no redundant information. It could be slightly more structured by explicitly separating update vs. rename scenarios.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, destructive operation, no output schema), the description is adequate but not comprehensive. It covers what the tool does and methods (JSON Patch, rename via clone/delete), but lacks details on output format, error cases, or interaction with sibling tools. With annotations providing safety cues, it meets minimum viability but leaves gaps for agent usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds minimal semantic value by mentioning 'JSON Patch' for the patch parameter and 'clone and delete' for newLayoutName, but doesn't provide additional context beyond what's in the schema descriptions. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Update or rename a page layout in Salesforce.' It specifies the verb ('update or rename'), resource ('page layout'), and platform ('Salesforce'), making the intent unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'page_layout_clone' or 'metadata_update,' which could handle similar operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by mentioning 'Supports modifying layout contents via JSON Patch, renaming a layout, or both in a single call,' which suggests when to use it for combined operations. However, it lacks explicit guidance on when to choose this tool over alternatives like 'page_layout_clone' for renaming or 'metadata_update' for modifications, and doesn't mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
permission_set_assignmentsCDestructiveInspect
Assign or remove permission sets from users in Salesforce.
| Name | Required | Description | Default |
|---|---|---|---|
| users | Yes | The names, usernames or IDs of the users | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| operation | Yes | The operation to perform: 'add' to assign permission sets to users, 'remove' to remove assignments from users | |
| permissionSets | Yes | The names, labels or IDs of the permission sets |
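Since both `users` and `permissionSets` are arrays, one call fans out to a batch of assignments. Below are hypothetical add/remove payloads; the usernames and permission set names are placeholders.

```python
# Illustrative permission_set_assignments payloads.
add_args = {
    "operation": "add",
    "users": ["jane.doe@example.com", "Jon Smith"],  # username or name
    "permissionSets": ["Sales_Admin"],
}
# The remove form differs only in the operation value.
remove_args = {**add_args, "operation": "remove"}
```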
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare destructiveHint=true and idempotentHint=false, the description fails to disclose what specifically is destroyed (PermissionSetAssignment records) or explain the non-idempotent behavior (e.g., whether adding an existing assignment fails silently or errors). It adds no context about batch limits or Salesforce-specific side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single 9-word sentence is front-loaded and contains no filler, earning high marks for structural efficiency. However, the extreme brevity borders on under-specification for a destructive operation with multiple array parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the destructive nature, 4 parameters, array inputs implying batch operations, and no output schema, the description is insufficient. It omits expected behavior on partial failures, return values, and the irreversible nature of removing assignments.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline score is 3. The description does not augment the schema's parameter documentation (e.g., elaborating on valid identifier formats for 'users' or 'permissionSets' beyond what the schema property descriptions already provide).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Assign' and 'remove') and clearly identifies the resource ('permission sets') and scope ('from users in Salesforce'). It implicitly distinguishes from sibling 'permission_set_update' by focusing on user assignments rather than metadata updates, though it does not explicitly contrast with other user management tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to prefer this tool over alternatives like 'user_update' or 'permission_set_update', nor does it mention prerequisites (e.g., admin privileges) or advise on choosing between 'add' and 'remove' operations beyond the literal words.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
permission_set_updateBIdempotentInspect
Update the properties or contents of a permission set in Salesforce, including read and edit access to objects and fields
| Name | Required | Description | Default |
|---|---|---|---|
| patch | Yes | The array of modifications that need to be applied. This should be provided in the `JSON Patch` format | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| namespace | No | The namespace of the permission set, if applicable | |
| permissionSet | Yes | The name or ID of the permission set. |
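A sketch of a JSON Patch that grants field access on a permission set. The `/fieldPermissions/-` path and the value shape are assumptions based on typical PermissionSet metadata; the server's own instructions would be the authoritative reference for the real structure.

```python
# Hypothetical JSON Patch for permission_set_update: append a
# field-permission entry (path/value shapes are assumptions).
patch = [
    {
        "op": "add",
        "path": "/fieldPermissions/-",
        "value": {
            "field": "Account.AnnualRevenue",
            "readable": True,
            "editable": False,
        },
    }
]
args = {"permissionSet": "Sales_Readonly", "patch": patch}
```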
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=false (mutation), openWorldHint=true (flexible inputs), idempotentHint=true (safe to retry), and destructiveHint=false (non-destructive). The description adds that it updates 'properties or contents' and specifies 'read and edit access to objects and fields', giving context on what can be modified. However, it doesn't disclose behavioral details like required permissions, side effects, or rate limits beyond what annotations cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action ('Update...') and includes key details ('properties or contents', 'read and edit access'). There's no wasted verbiage, but it could be slightly more structured by explicitly separating purpose from scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no output schema, the description adequately covers the basic purpose and scope. Annotations provide safety and idempotency hints, but the description lacks details on error handling, response format, or complex use cases (e.g., partial updates). Given the 4 parameters and JSON Patch complexity, more guidance on typical operations would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema. The description mentions updating 'properties or contents' and 'read and edit access to objects and fields', which aligns with the 'patch' parameter's purpose but doesn't add significant meaning beyond the schema's detailed JSON Patch format explanation. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update') and resource ('permission set in Salesforce'), specifying what properties can be modified ('including read and edit access to objects and fields'). It distinguishes from other permission-related tools like 'permission_set_assignments' by focusing on modifying the permission set itself rather than assignments. However, it doesn't explicitly contrast with the sibling 'metadata_update', which might also handle permission sets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing admin permissions), when not to use it (e.g., for bulk assignments), or direct alternatives among siblings like 'metadata_update' for similar operations. The agent must infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
profile_clone (B) Inspect
Clone an existing user profile in Salesforce
| Name | Required | Description | Default |
|---|---|---|---|
| profile | Yes | The name of the new Profile | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| clonedProfileName | Yes | The name of existing Profile which will be cloned |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide key behavioral hints (readOnlyHint=false, destructiveHint=false, etc.), so the description doesn't need to repeat these. It adds minimal context by specifying the cloning action, but doesn't elaborate on side effects, permissions needed, or what happens if the cloned profile already exists. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, direct sentence with no wasted words, making it easy to parse and understand immediately. It's appropriately sized for the tool's complexity and gets straight to the point.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the annotations cover behavioral aspects and the schema fully describes parameters, the description is adequate but minimal. However, without an output schema, it doesn't explain what the tool returns (e.g., success confirmation, new profile details), leaving a gap in completeness for a mutation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema. The description doesn't add any additional semantic context about the parameters beyond implying cloning from an existing profile, which is already covered by the schema. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Clone') and resource ('an existing user profile in Salesforce'), making the purpose immediately understandable. However, it doesn't differentiate this tool from sibling tools like 'profile_describe' or 'profile_update' beyond the cloning action, which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'profile_update' or 'profile_describe', nor does it mention prerequisites or context for cloning profiles. It simply states what the tool does without indicating appropriate scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
profile_describe (B) Read-only Idempotent Inspect
Return detailed metadata for a Salesforce Profile.
| Name | Required | Description | Default |
|---|---|---|---|
| profile | Yes | The name or ID of the Profile. | |
| sObject | Yes | Use this to return only permissions related to a specific sObject type. | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| permissionTypes | Yes | List of specific permission types to return. Use this whenever possible (especially for standard profiles) to reduce the size of the response. |
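Narrowing the response with `permissionTypes`, as the table recommends, is equivalent to filtering the full metadata document down to the requested sections. A minimal sketch of that effect — the section names below are illustrative assumptions, since this page does not document the server's response shape:

```python
# Sketch: what narrowing a profile_describe response by permissionTypes
# amounts to. The section keys ("objectPermissions", etc.) are assumed,
# not confirmed by this page.
def select_permissions(full_metadata, permission_types):
    """Keep only the requested permission sections."""
    return {k: v for k, v in full_metadata.items() if k in permission_types}

full = {
    "objectPermissions": [{"sObject": "Account", "allowRead": True}],
    "fieldPermissions": [{"field": "Account.Rating", "readable": True}],
    "tabVisibilities": [{"tab": "Account", "visibility": "DefaultOn"}],
}
narrowed = select_permissions(full, ["objectPermissions"])
```

For standard profiles, which carry permissions for every standard object, requesting only the sections you need can shrink the response considerably.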
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover key behavioral traits (read-only, open-world, idempotent, non-destructive). The description adds minimal context by implying it returns 'detailed metadata,' but doesn't elaborate on response format, size considerations, or authentication needs beyond what the schema's 'sf_user' parameter suggests. It doesn't contradict annotations, but offers limited additional value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words. It's front-loaded with the core purpose, making it efficient and easy to parse. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema and the tool's complexity (4 parameters, 3 required), the description is somewhat incomplete. It doesn't hint at the response structure or potential data volume, which could be crucial for a metadata tool. However, annotations provide safety context, and the schema covers inputs well, making it minimally adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents all parameters. The description doesn't add any meaning beyond the schema, such as explaining interactions between 'profile', 'sObject', and 'permissionTypes'. This meets the baseline for high schema coverage but doesn't enhance understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Return detailed metadata for a Salesforce Profile.' It specifies the verb ('return') and resource ('Salesforce Profile'), making the action clear. However, it doesn't differentiate from sibling tools like 'user_describe' or 'metadata_describe', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'profile_clone' or 'profile_update', nor does it specify prerequisites or contexts for usage. This leaves the agent without explicit direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
profile_update (A) Idempotent Inspect
Update a Salesforce user profile, including object permissions, field-level security, tab visibility, and system permissions
| Name | Required | Description | Default |
|---|---|---|---|
| patch | Yes | The array of modifications that need to be applied. This should be provided in the `JSON Patch` format | |
| profile | Yes | The name or ID of the Profile | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. |
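The `patch` parameter uses the JSON Patch format (RFC 6902). A hedged sketch of what a call's arguments might look like, plus a minimal structural check before sending — the paths are illustrative assumptions, since this page does not document the server's profile document shape:

```python
# Illustrative profile_update arguments. The "patch" array follows
# JSON Patch (RFC 6902); the paths below are assumptions.
arguments = {
    "profile": "Standard User",
    "patch": [
        {"op": "replace", "path": "/objectPermissions/Account/allowEdit", "value": True},
        {"op": "add", "path": "/fieldPermissions/Account.Rating/readable", "value": True},
    ],
}

# The six operations defined by RFC 6902.
VALID_OPS = {"add", "remove", "replace", "move", "copy", "test"}

def validate_patch(patch):
    """Structural sanity check: every entry needs a known op and a string path."""
    return all(
        entry.get("op") in VALID_OPS and isinstance(entry.get("path"), str)
        for entry in patch
    )
```

A check like this catches malformed entries client-side instead of burning a round trip to the server.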
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide key behavioral hints (readOnlyHint=false, destructiveHint=false, idempotentHint=true, openWorldHint=true). The description adds context about what aspects of a profile can be updated, but does not disclose additional behavioral traits like authentication requirements, rate limits, or side effects beyond what annotations already cover. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and lists key updatable elements. There is no wasted verbiage or redundant information, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (updating profiles with JSON Patch operations) and lack of output schema, the description is reasonably complete. It outlines the scope of updates, but could benefit from mentioning the JSON Patch format or typical use cases. Annotations provide good behavioral context, compensating for some gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description does not add any parameter-specific details beyond what the schema provides (e.g., it doesn't explain the 'patch' parameter's JSON Patch format or give examples). Baseline score of 3 is appropriate when schema handles parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update') and resource ('Salesforce user profile'), and specifies the scope of what can be updated ('including object permissions, field-level security, tab visibility, and system permissions'). It distinguishes itself from sibling tools like 'profile_describe' (read-only) and 'profile_clone' (different operation).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for modifying profile settings, but does not explicitly state when to use this tool versus alternatives like 'permission_set_update' or 'user_update'. It provides no guidance on prerequisites, exclusions, or specific scenarios where this tool is preferred over others.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
record_type_create (B) Inspect
Create a new sObject record type in Salesforce
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | The name of the new record type | |
| label | Yes | The label for the new record type | |
| active | No | Whether the new record type should be active | |
| sObject | Yes | The name of the object for which the record type is being created. Include a namespace prefix for custom objects if applicable | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| description | Yes | The description for the new record type | |
| defaultLayout | Yes | The default layout to use for all profiles. If not provided, the default layout will be the standard layout for the object | |
| businessProcess | Yes | The full name or ID of the business process associated with the record type | |
| existingRecordType | Yes | The name of the existing record type to use as a basis for the new record type. If not provided, the Master record type will be used | |
| defaultAvailability | Yes | The default availability settings for the new record type on all profiles. If not specified, the record type will be hidden for all profiles | |
| availabilityOverrides | Yes | An (optional) array of record type availability overrides for one or more profiles | |
| compactLayoutAssignment | Yes | The compact layout that is assigned to the record type. | |
| layoutAssignmentOverrides | Yes | An (optional) array of page layout assignment overrides for one or more profiles
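A sketch of what record_type_create arguments might look like. All values are assumptions; note that several parameters the table marks Required have descriptions that read as optional ("If not provided…"), so a real call may need more keys than shown here:

```python
# Illustrative record_type_create arguments; values are assumptions.
arguments = {
    "sObject": "Account",
    "name": "Partner_Account",
    "label": "Partner Account",
    "description": "Record type for partner-managed accounts",
    "active": True,
    "existingRecordType": "Master",        # basis for the new record type
    "defaultAvailability": {"visible": False},  # shape of this object is assumed
}

def has_identity_fields(args):
    """The three fields any record type needs to be addressable at all."""
    return all(args.get(k) for k in ("sObject", "name", "label"))
```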
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-readOnly, non-destructive, non-idempotent, open-world operation. The description adds minimal behavioral context beyond this—it confirms a creation action but doesn't elaborate on side effects, permissions required, or system impacts. No contradiction with annotations exists, but the description doesn't significantly enhance understanding of the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, direct sentence that efficiently conveys the core purpose without any fluff or redundancy. It's front-loaded with the essential action and resource, making it highly concise and well-structured for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (13 parameters, no output schema) and rich annotations, the description is minimally adequate. It states what the tool does but lacks details on usage context, behavioral nuances, or output expectations. For a creation tool with many parameters and no output schema, more guidance would be beneficial, but it's not entirely incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema thoroughly documents all 13 parameters. The description adds no parameter-specific information beyond implying the creation of a 'record type', which is already clear from the tool name and schema. This meets the baseline for high schema coverage without providing extra semantic value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create') and resource ('new sObject record type in Salesforce'), making the purpose immediately understandable. However, it doesn't differentiate this tool from its sibling 'record_type_update' or other metadata creation tools like 'metadata_create' or 'sobject_create', which would require explicit comparison to achieve a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With siblings like 'record_type_update', 'metadata_create', and 'sobject_create' available, there's no indication of the specific context for creating record types versus other metadata or objects, nor any prerequisites or constraints mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
record_type_update (B) Idempotent Inspect
Update or activate/deactivate an sObject record type in Salesforce
| Name | Required | Description | Default |
|---|---|---|---|
| label | Yes | The new human readable label | |
| active | Yes | Use this property to activate or deactivate a record type | |
| newName | Yes | The new API name | |
| sObject | Yes | The name of the object | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| recordType | Yes | The name or ID of the record type to update | |
| description | Yes | The updated description of the record type | |
| defaultLayout | Yes | The default layout to use for all profiles. If not provided, the default layout will be the standard layout for the object | |
| businessProcess | Yes | The name of the business process associated with the record type | |
| defaultAvailability | Yes | The default availability settings for the record type | |
| availabilityOverrides | Yes | An (optional) array of record type availability overrides for one or more profiles | |
| compactLayoutAssignment | Yes | The compact layout that is assigned to the record type. | |
| layoutAssignmentOverrides | Yes | An (optional) array of page layout assignment overrides for one or more profiles
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-destructive, idempotent, mutable operation (readOnlyHint: false, destructiveHint: false, idempotentHint: true). The description adds value by specifying it handles activation/deactivation, which isn't covered by annotations. However, it doesn't disclose behavioral details like permission requirements, side effects, or response format, leaving gaps despite good annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core functionality ('update or activate/deactivate') without unnecessary words. It earns its place by clearly stating the action and resource, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (13 parameters, 12 required) and lack of output schema, the description is minimally adequate. Annotations cover safety and idempotency, but the description doesn't address critical context like what happens on update failure, how to handle partial updates, or typical use cases. It relies heavily on structured data without compensating for gaps in behavioral guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters are documented in the schema. The description doesn't add any meaningful parameter semantics beyond what's in the schema—it doesn't explain relationships between parameters (e.g., how 'active' interacts with other fields) or provide usage examples. Baseline 3 is appropriate as the schema carries the full burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('update or activate/deactivate') and resource ('sObject record type in Salesforce'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'record_type_create' or 'metadata_update', which would be needed for a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'record_type_create' for new record types or 'metadata_update' for other metadata updates. There's no mention of prerequisites, dependencies, or specific contexts for activation/deactivation versus updating other properties.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
report_run (A) Read-only Idempotent Inspect
Execute a Salesforce report by ID and return its results, with optional filters and format overrides
| Name | Required | Description | Default |
|---|---|---|---|
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| pageSize | No | (optional) Maximum number of records to return per page when the response requires pagination. If omitted, page size is calculated automatically. | |
| reportId | Yes | The ID of the report to execute (15 or 18 character Salesforce ID) | |
| maxRecords | Yes | Maximum number of records to return. Defaults to 2000 | |
| includeDetails | Yes | Whether to include detailed row data in the response. Defaults to true | |
| reportMetadata | Yes | Optional report metadata to override report configuration |
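The table requires `reportId` to be a 15- or 18-character Salesforce ID, which is easy to validate before calling. A hedged sketch — the ID and argument values below are illustrative:

```python
# Client-side check for report_run's reportId (15- or 18-character
# alphanumeric Salesforce ID, per the parameter table).
def is_salesforce_id(value):
    return isinstance(value, str) and len(value) in (15, 18) and value.isalnum()

arguments = {
    "reportId": "00O5e000000AbCd",  # illustrative 15-character ID
    "maxRecords": 500,
    "includeDetails": True,
}
```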
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a read-only, non-destructive, idempotent operation with open-world data. The description adds valuable context by specifying that it 'returns its results,' which clarifies the output behavior beyond annotations. It doesn't contradict annotations but provides additional operational insight.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Execute a Salesforce report by ID') and includes key optional features. There's no wasted verbiage, and every word contributes to understanding the tool's functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, nested objects) and rich annotations, the description adequately covers the purpose and key features. However, without an output schema, it could benefit from more detail on result structure or pagination behavior, though it mentions 'return its results' which provides some guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents all parameters. The description mentions 'optional filters and format overrides,' which aligns with parameters like 'reportMetadata' but doesn't add new semantic details beyond what the schema provides. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Execute a Salesforce report by ID') and the resource ('Salesforce report'), distinguishing it from sibling tools like 'soql_query' or 'metadata_read' which serve different purposes. It explicitly mentions the core functionality of running reports with optional modifications.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'optional filters and format overrides,' suggesting it's for customized report execution. However, it doesn't explicitly state when to use this tool versus alternatives like 'soql_query' for raw data queries or other metadata tools, nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sf_connection_manage (A) Destructive Inspect
Manage connections to Salesforce orgs associated with the user's Cirra AI account. Call cirra_ai_init at least once before using this tool.
| Name | Required | Description | Default |
|---|---|---|---|
| action | Yes | The action to perform. Options: 'list' (list all connections), 'describe' (provide details of the connection), 'add' (add a new connection), 'reauthenticate' (refresh auth for a connection), 'remove' (remove a connection) | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. |
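Since `action` is a closed enum, argument construction can be guarded client-side. A sketch using the five actions from the table above — the helper itself is hypothetical, not part of the server's API:

```python
# The action values accepted by sf_connection_manage, per the table above.
ACTIONS = {"list", "describe", "add", "reauthenticate", "remove"}

def build_arguments(action, sf_user=None):
    """Hypothetical helper: reject unknown actions before calling the tool."""
    if action not in ACTIONS:
        raise ValueError(f"unsupported action: {action!r}")
    args = {"action": action}
    if sf_user is not None:
        args["sf_user"] = sf_user  # omit to target the default connection
    return args
```

Guarding the destructive 'remove' path behind explicit argument construction also makes accidental deletions harder.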
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations: it specifies the prerequisite call to 'cirra_ai_init,' which isn't covered by annotations. Annotations already indicate destructiveHint=true and readOnlyHint=false, so the agent knows this can perform destructive operations. The description doesn't contradict annotations but provides additional operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded: two sentences that directly state the tool's purpose and a critical prerequisite. Every sentence earns its place with no wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (managing connections with multiple actions), annotations provide good behavioral hints (destructive, non-idempotent, open-world), and schema coverage is complete. The description adds the crucial 'cirra_ai_init' prerequisite. However, without an output schema, the description could benefit from mentioning what the tool returns (e.g., connection lists or status).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema. According to guidelines, when schema coverage is high (>80%), the baseline score is 3 even with no param info in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Manage connections to Salesforce orgs associated with the user's Cirra AI account.' It specifies the resource (Salesforce connections) and the action (manage). However, it doesn't explicitly differentiate from sibling tools like 'cirra_ai_init' beyond the prerequisite mention.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: 'Call cirra_ai_init at least once before using this tool.' This establishes a prerequisite relationship. However, it doesn't specify when to use this tool versus alternatives (e.g., when to use 'add' vs. other connection-related operations) or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sobject_create (A) Inspect
Create a new custom object (sObject) in Salesforce with specified sharing model, deployment status, and configuration options
| Name | Required | Description | Default |
|---|---|---|---|
| label | Yes | The human readable label for the object. If it is not provided, use the sObject property after stripping any suffix, replacing underscores with spaces and capitalizing the elements | |
| sObject | Yes | The name of the object. For example, Account. | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| description | Yes | Provides a description of the object's purpose. Generate a value for this if it is not provided | |
| pluralLabel | Yes | The plural version of the human readable label for the object | |
| enableSearch | No | Whether to enable search for the object. Default is false | |
| sharingModel | Yes | Common options are `ReadWrite`, `Read` and `Private` | |
| enableHistory | No | Whether to enable field history tracking for the object | |
| enableReports | No | Whether to enable reports for the object | |
| nameFieldType | Yes | The type of the name field. Options are `AutoNumber` or `Text` (the default) | |
| nameFieldLabel | No | The label for the name field. Defaults to `<obj label> Name` | |
| deploymentStatus | Yes | Options are `Deployed` (the default) and `In Development` | |
| enableActivities | No | Whether to enable activities for the object | |
| allowInChatterGroups | No | Whether to allow the object to be used in Chatter groups | |
| nameFieldDisplayFormat | No | For `AutoNumber` name fields only: the format to use for the display of the auto-number. For example, `A-{0000}` | |
| nameFieldStartingNumber | No | For `AutoNumber` name fields only: the starting number for the auto-number. For example, `1000` |
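The label-derivation rule in the table above can be sketched as a small argument builder. This is a hypothetical payload shape, assuming the MCP client passes a flat argument map; the defaults chosen here (ReadWrite sharing, Text name field, Deployed status) mirror the documented defaults.

```python
# Build an example sobject_create argument map. When no label is given,
# derive one from the API name: strip the __c suffix, replace
# underscores with spaces, and capitalize each word.
def build_sobject_create_args(sobject, label=None):
    if label is None:
        base = sobject.removesuffix("__c")
        label = " ".join(w.capitalize() for w in base.split("_"))
    return {
        "sObject": sobject,
        "label": label,
        "pluralLabel": label + "s",          # naive pluralization for the sketch
        "description": f"Stores {label} records.",
        "sharingModel": "ReadWrite",
        "nameFieldType": "Text",
        "deploymentStatus": "Deployed",
    }

args = build_sobject_create_args("Invoice_Line__c")
```

`build_sobject_create_args` is an illustration only; the server itself performs the label derivation when `label` is omitted.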
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=false, destructiveHint=false, openWorldHint=true, and idempotentHint=false. The description adds value by specifying it creates 'a new custom object' and mentions configuration options, which aligns with annotations. However, it doesn't elaborate on potential side effects, permissions needed, or Salesforce-specific limitations beyond what annotations imply.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Create a new custom object in Salesforce') and includes key details without redundancy. Every word contributes to understanding the tool's function, with no wasted phrases or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (16 parameters, 7 required) and lack of output schema, the description is adequate but minimal. It covers the basic purpose and hints at parameters, but doesn't address return values, error conditions, or Salesforce-specific nuances. With annotations providing safety and idempotency hints, it's complete enough for basic use but could benefit from more context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema. The description mentions 'sharing model, deployment status, and configuration options,' which maps to some parameters but doesn't add significant meaning beyond the schema. It doesn't explain parameter interactions or provide examples, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a new custom object'), specifies the target resource ('in Salesforce'), and lists key configuration aspects ('sharing model, deployment status, and configuration options'). It distinguishes itself from sibling tools like sobject_describe, sobject_update, and sobjects_list by focusing on creation rather than querying or updating.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for creating custom Salesforce objects but doesn't explicitly state when to use this tool versus alternatives. It doesn't mention prerequisites, constraints, or compare with similar tools like metadata_create or record_type_create, leaving the agent to infer context from tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sobject_describeBRead-onlyIdempotentInspect
Return basic metadata properties for the specified sObject, as well as a list of fields, relationships and record types.
| Name | Required | Description | Default |
|---|---|---|---|
| sObject | Yes | The API name, label or ID of the sObject | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. |
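A minimal sketch of the call arguments, assuming a flat argument map: `sObject` accepts an API name, label, or ID, and `sf_user` is only needed to target a non-default connection.

```python
# Assemble sobject_describe arguments; omit sf_user to use the
# current default connection.
def build_describe_args(sobject, sf_user=None):
    args = {"sObject": sobject}
    if sf_user:
        args["sf_user"] = sf_user
    return args
```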
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true, covering safety and idempotency. The description adds context about what metadata is returned (fields, relationships, record types), which is useful behavioral information not in annotations. However, it doesn't mention rate limits, authentication needs, or response format details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and specifies included metadata components. Every word earns its place with zero redundancy or fluff. It's appropriately sized for a metadata retrieval tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the annotations cover safety/idempotency and schema covers parameters, the description adequately explains what metadata is returned. However, with no output schema, the description doesn't specify return format, structure, or pagination behavior. For a metadata tool with rich annotations but no output schema, more detail on response format would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters well-documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema. With complete schema coverage, the baseline score of 3 is appropriate as the description doesn't enhance parameter understanding but doesn't need to compensate for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('return') and resource ('basic metadata properties for the specified sObject'), and specifies what's included ('list of fields, relationships and record types'). It distinguishes from generic 'describe' tools by specifying sObject metadata, but doesn't explicitly differentiate from sibling 'metadata_describe' or 'profile_describe' tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'metadata_describe', 'profile_describe', or 'tooling_api_describe'. It doesn't mention prerequisites, use cases, or exclusions. The agent must infer usage from the tool name and description alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sobject_dmlADestructiveInspect
Create, update, delete, or upsert Salesforce sObject records via DML operations
| Name | Required | Description | Default |
|---|---|---|---|
| records | No | Array of records to process. For an 'update' operation the records must include an `Id` property. For 'create' or 'upsert' the records may **not** have an `Id`. Do not use this property for the 'delete' operation | |
| sObject | Yes | API name of the sObject | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| operation | Yes | Type of DML operation to perform. Always ask for explicit user permission before executing any of these operations | |
| recordIds | No | Only used for the 'delete' operation: the IDs of the records to delete | |
| dmlOptions | No | Optional DML options to use for the operation | |
| externalIdField | No | External ID field name. Required for upsert operations, ignored in other cases |
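The per-operation rules in the table above can be encoded in a small argument builder. This is a sketch of the documented constraints only; the server's own validation may differ, and the record IDs shown in any real call would come from prior queries.

```python
# Assemble sobject_dml arguments, enforcing the documented rules:
# delete uses recordIds; other operations use records; update records
# must carry an Id, create/upsert records must not; upsert additionally
# requires externalIdField.
def build_dml_args(operation, sobject, records=None, record_ids=None,
                   external_id_field=None):
    args = {"operation": operation, "sObject": sobject}
    if operation == "delete":
        if not record_ids:
            raise ValueError("delete requires recordIds")
        args["recordIds"] = record_ids
        return args
    if not records:
        raise ValueError(f"{operation} requires records")
    if operation == "update" and any("Id" not in r for r in records):
        raise ValueError("update records must include an Id")
    if operation in ("create", "upsert") and any("Id" in r for r in records):
        raise ValueError(f"{operation} records must not include an Id")
    args["records"] = records
    if operation == "upsert":
        if not external_id_field:
            raise ValueError("upsert requires externalIdField")
        args["externalIdField"] = external_id_field
    return args
```

Note the schema's own warning: the agent should always ask for explicit user permission before executing any of these operations.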
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations: it specifies the four specific DML operations (create, update, delete, upsert) and mentions they're performed 'via DML operations.' Annotations already indicate destructiveHint=true and readOnlyHint=false, so the description doesn't contradict them. However, it doesn't elaborate on rate limits, authentication details, or error handling beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core functionality (create, update, delete, upsert) and specifies the resource (Salesforce sObject records) and method (via DML operations). There's no wasted verbiage or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex, destructive tool with 7 parameters, no output schema, and rich annotations, the description is adequate but minimal. It covers the basic purpose and operations but doesn't address return values, error conditions, or integration with sibling tools. Given the annotations provide safety context (destructiveHint=true), the description meets minimum viability without being comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all 7 parameters, including their purposes, constraints, and usage rules (e.g., 'records' requirements per operation). The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline of 3 without compensating for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: performing DML operations (create, update, delete, upsert) on Salesforce sObject records. It specifies both the action and resource, though it doesn't explicitly differentiate from sibling tools like 'sobject_create', 'sobject_update', or 'tooling_api_dml' which appear to handle similar operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage context by listing the four DML operations, and the schema includes a warning to 'always ask for explicit user permission before executing any of these operations.' However, it doesn't explicitly state when to use this tool versus alternatives like 'sobject_create' or 'tooling_api_dml', nor does it provide exclusion criteria or prerequisites beyond the permission warning.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sobject_field_createBInspect
Create a new custom field for an sObject in Salesforce
| Name | Required | Description | Default |
|---|---|---|---|
| label | Yes | The label for the field | |
| sObject | Yes | The API name, label or ID of the sObject | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| fieldName | Yes | The API name of the new field. For example, NewField__c | |
| fieldType | Yes | The type of the field | |
| defaultFLS | No | The default Field-Level Security (FLS) setting to apply for all profiles for the new field | |
| properties | No | A map of properties used when creating the field. Some may be required, depending on the field type. See instructions for details | |
| description | Yes | The description for the field | |
| flsOverrides | No | An (optional) array of Field-Level Security (FLS) overrides for one or more profiles | |
| inlineHelpText | Yes | The inline help text for the field |
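A hypothetical payload for creating a Text field, illustrating the required parameters from the table above. The key names inside `properties` vary by field type and are returned by the server's instructions; `length` is an assumed key for Text fields, not confirmed by the schema excerpt.

```python
# Example sobject_field_create arguments for a Text field on a
# hypothetical Invoice__c custom object.
field_args = {
    "sObject": "Invoice__c",
    "fieldName": "Reference__c",   # custom field API names end in __c
    "label": "Reference",
    "fieldType": "Text",
    "description": "External invoice reference.",
    "inlineHelpText": "Enter the reference from the billing system.",
    "properties": {"length": 80},  # assumed key; required keys depend on fieldType
}
```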
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a write operation (readOnlyHint: false) and not idempotent or destructive. The description adds that this creates a 'new' field, which aligns with annotations. However, it doesn't provide additional behavioral context like permission requirements, Salesforce edition limitations, deployment implications, or whether the operation is synchronous/asynchronous. With annotations covering basic safety, this earns a baseline score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that states the core purpose without unnecessary words. It's front-loaded with the essential information (create field for sObject) and contains zero fluff or redundant phrasing. Every word earns its place in this concise statement.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 10 parameters (including nested objects), no output schema, and annotations covering only basic hints, the description is minimal. While concise, it doesn't address the tool's complexity - it doesn't mention the extensive parameter set, field type dependencies, FLS considerations, or what happens after creation. The description alone is insufficient for understanding the full scope of this metadata creation operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters are documented in the schema itself. The description doesn't add any parameter-specific information beyond what's in the schema descriptions. It doesn't explain relationships between parameters (e.g., how fieldType influences properties requirements) or provide examples beyond the basic purpose statement. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a new custom field') and resource ('for an sObject in Salesforce'), making the purpose immediately understandable. It distinguishes this as a field creation tool rather than field update (sobject_field_update exists as a sibling). However, it doesn't specify that this is for custom fields specifically (vs standard fields), which slightly limits differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With sibling tools like sobject_field_update, metadata_create, and metadata_update available, there's no indication of when field creation is appropriate versus other metadata operations or when to update existing fields instead. No prerequisites or context for usage are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sobject_field_updateBIdempotentInspect
Update properties of an sObject field (standard or custom) in Salesforce, including local picklist values
| Name | Required | Description | Default |
|---|---|---|---|
| sObject | Yes | The API name, label or ID of the sObject | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| fieldName | Yes | The API name, label or ID of the field. For example Industry, Segment__c, SomeNamespace__SomeField__c, 'Some Field' or 00NEk00000B8BYE. | |
| flsUpdates | No | An (optional) array of Field-Level Security (FLS) settings to update for one or more profiles | |
| properties | Yes | Properties to update. Some are required, depending on the field type. See instructions for details |
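A hypothetical payload showing an update to a standard field together with an FLS change for one profile. The property key and the shape of the `flsUpdates` entries are assumptions; the schema excerpt does not spell them out.

```python
# Example sobject_field_update arguments: add inline help text to the
# standard Industry field and adjust FLS for one profile.
field_update_args = {
    "sObject": "Account",
    "fieldName": "Industry",  # API name, label, or ID are all accepted
    "properties": {"inlineHelpText": "Primary industry of the account."},
    "flsUpdates": [
        # assumed entry shape; confirm against the server's instructions
        {"profile": "Standard User", "readable": True, "editable": False},
    ],
}
```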
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=false (mutation), openWorldHint=true (flexible properties), idempotentHint=true (safe for retries), and destructiveHint=false (non-destructive). The description adds valuable context about updating 'local picklist values'—a specific behavioral trait not covered by annotations. However, it doesn't mention authentication requirements, rate limits, or side effects like validation rules being triggered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that efficiently conveys the core purpose without redundancy. It's front-loaded with the main action and resource, though it could be slightly more structured by separating usage notes from the core description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 5 parameters, nested objects, no output schema, and rich annotations, the description is adequate but incomplete. It covers the 'what' (updating field properties) but lacks details on 'how' (e.g., property format examples), 'when' (prerequisites), and 'what happens' (response format, error cases). The annotations help but don't fully compensate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema itself. The description adds minimal semantic value beyond the schema—it mentions 'local picklist values' which relates to the 'properties' parameter but doesn't elaborate on syntax or constraints. Baseline 3 is appropriate given the comprehensive schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update properties') and resource ('custom sObject field in Salesforce'), including the specific capability for local picklist values. It distinguishes itself from sibling tools like 'sobject_field_create' (creation vs. update) and 'sobject_update' (field vs. object-level updates), though it doesn't explicitly name these alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like 'sobject_field_create' or 'metadata_update'. The description mentions updating 'standard or custom' fields but doesn't clarify prerequisites, permissions needed, or typical scenarios for field updates versus other metadata operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sobjects_listARead-onlyIdempotentInspect
Lists all the available sObject types with their API names and labels. To get more details about an sObject, use sobject_describe
| Name | Required | Description | Default |
|---|---|---|---|
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| pageSize | No | (optional) Maximum number of records to return per page when the response requires pagination. If omitted, page size is calculated automatically. | |
| customObjectsOnly | Yes | If true, list only custom objects |
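Note that `customObjectsOnly` is the only required parameter. A minimal sketch of the call arguments, with an explicit page size:

```python
# Example sobjects_list arguments: custom objects only, 50 per page.
# Omitting pageSize lets the server calculate it automatically.
list_args = {"customObjectsOnly": True, "pageSize": 50}
```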
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide comprehensive behavioral hints (readOnlyHint, openWorldHint, idempotentHint, destructiveHint). The description adds valuable context about pagination behavior ('when the response requires pagination') and the relationship to `sobject_describe`, but doesn't fully explain what 'available sObject types' means in terms of permissions or visibility.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each serve a clear purpose: the first states what the tool does, the second provides usage guidance. There's no wasted language or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the comprehensive annotations (which cover safety, idempotence, and world openness) and 100% schema coverage, the description provides adequate context. The main gap is the lack of output schema, but the description does explain what information is returned ('API names and labels'). It could be more specific about the response format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all three parameters. The description doesn't add any parameter-specific information beyond what's already in the schema, so it meets the baseline expectation but doesn't provide additional semantic context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Lists all the available sObject types') and the resources involved ('with their API names and labels'). It distinguishes from sibling tools by explicitly mentioning the alternative `sobject_describe` for getting more details about a specific sObject.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: 'To get more details about an sObject, use `sobject_describe`'. This clearly indicates the boundary between listing objects and describing individual objects.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sobject_updateAIdempotentInspect
Update properties of a custom Salesforce object, such as label, plural label, description, sharing model, or deployment status
| Name | Required | Description | Default |
|---|---|---|---|
| sObject | Yes | The API name, label or ID of the custom sObject | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| properties | Yes | A map of properties to update on the object. For example: 'label'. At least one property must be updated |
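A hypothetical payload relabeling a custom object. Per the schema, `properties` must contain at least one entry; the keys shown mirror the examples in the tool description.

```python
# Example sobject_update arguments: rename a hypothetical Invoice__c
# object. At least one property is required.
sobject_update_args = {
    "sObject": "Invoice__c",
    "properties": {
        "label": "Customer Invoice",
        "pluralLabel": "Customer Invoices",
    },
}
```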
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=false, destructiveHint=false, idempotentHint=true, and openWorldHint=true, indicating this is a non-destructive, idempotent mutation that may accept unknown properties. The description adds value by specifying the types of properties that can be updated (e.g., label, sharing model) and implying it's for custom objects, which gives context beyond annotations. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action ('Update properties of a custom Salesforce object') and provides specific examples without unnecessary elaboration. Every word contributes to clarity, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (mutation of Salesforce objects with 3 parameters, nested objects, no output schema), the description is reasonably complete. It specifies the resource type and example properties, and annotations cover safety and idempotency. However, it doesn't address potential side effects, permission requirements, or response format, leaving some gaps for a mutation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for all parameters (sObject, sf_user, properties). The description adds minimal semantic value beyond the schema, mentioning example properties like 'label' but not detailing format or constraints. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't significantly enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update properties') and target resource ('custom Salesforce object'), with specific examples of properties that can be updated (label, plural label, description, sharing model, deployment status). It distinguishes from sibling tools like 'sobject_create' and 'sobject_describe' by focusing on property updates rather than creation or description. However, it doesn't explicitly differentiate from 'metadata_update' which might handle similar updates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'custom Salesforce object' and listing example properties, suggesting it's for modifying existing objects. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'metadata_update' or 'sobject_field_update', nor does it mention prerequisites or exclusions. The usage is clear but lacks comparative guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
soql_queryARead-onlyIdempotentInspect
Run a Salesforce SOQL query to return a list of sObject records, with automatic masking of encrypted fields
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | The maximum number of records to return. The default is 200. Note: LIMIT is automatically omitted for aggregate queries without GROUP BY. | |
| fields | Yes | List of fields to retrieve. May include relationship fields and aggregates | |
| groupBy | No | GROUP BY clause for aggregate queries. Required when using aggregate functions with grouping, optional otherwise | |
| orderBy | No | (optional) ORDER BY clause. May include fields from related objects | |
| sObject | Yes | The name of the Salesforce object to query | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| pageSize | No | (optional) Maximum number of records to return per page when the response requires pagination. If omitted, page size is calculated automatically. | |
| whereClause | Yes | WHERE clause. May include conditions on related objects. Do NOT include APEX snippets or variables: use only literal values | |
| havingClause | No | HAVING clause to filter grouped results by aggregate conditions (e.g. 'COUNT(Id) > 5'). Requires groupBy to be set |
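Two hypothetical argument maps illustrating the parameter interactions above: a plain filtered query, and an aggregate query where `havingClause` requires `groupBy` to be set. Per the schema, `whereClause` must use literal values only, never Apex snippets or variables.

```python
# Simple soql_query arguments: relationship fields are allowed in
# fields and orderBy; whereClause uses literals only.
query_args = {
    "sObject": "Opportunity",
    "fields": ["Name", "Amount", "Account.Name"],
    "whereClause": "StageName = 'Closed Won' AND Amount > 10000",
    "orderBy": "Amount DESC",
    "limit": 50,
}

# Aggregate variant: havingClause filters grouped results and is only
# valid when groupBy is set.
agg_args = {
    "sObject": "Opportunity",
    "fields": ["StageName", "COUNT(Id)"],
    "whereClause": "CloseDate = THIS_YEAR",
    "groupBy": "StageName",
    "havingClause": "COUNT(Id) > 5",
}
```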
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, idempotentHint=true, and destructiveHint=false, establishing safety. The description adds valuable behavioral context beyond annotations: 'automatic masking of encrypted fields' reveals a security/transformation behavior, and 'return a list of sObject records' clarifies the output format. This compensates well for the lack of output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence with zero wasted words. It front-loads the core purpose and includes a critical behavioral detail (field masking). Every element serves a clear purpose in helping an agent understand the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (9 parameters, no output schema) and rich annotations, the description provides good contextual coverage. It clarifies the output format ('list of sObject records') and a key behavioral trait (field masking), which helps compensate for missing output schema. However, it could better address usage scenarios or limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema thoroughly documents all 9 parameters. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation but doesn't provide extra semantic clarification.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Run a Salesforce SOQL query'), the resource ('sObject records'), and a key distinguishing feature ('automatic masking of encrypted fields'). It uses precise technical terminology that differentiates it from sibling tools like tooling_api_query or report_run.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (querying Salesforce data) but doesn't explicitly state when to use this tool versus alternatives like tooling_api_query or sobject_describe. It mentions automatic field masking, which provides some guidance about when this tool might be preferred, but lacks explicit comparison or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tooling_api_describeARead-onlyIdempotentInspect
List all objects available through the Salesforce Tooling API and their properties. Use tooling_api_query with FieldDefinition to get field details
| Name | Required | Description | Default |
|---|---|---|---|
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| verbose | Yes | If false or omitted (the default), return only the names of the objects. If true, return additional properties for each object. | |
| pageSize | No | (optional) Maximum number of records to return per page when the response requires pagination. If omitted, page size is calculated automatically. |
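The description points to a two-step flow: list the available objects here, then fetch field details with tooling_api_query. A sketch of both argument payloads, with all filter values invented for illustration:

```python
# Step 1: hypothetical tooling_api_describe arguments -- names only (the default).
describe_args = {"verbose": False}

# Step 2: hypothetical tooling_api_query arguments to get field details for
# one of the listed objects, as the description suggests.
field_query_args = {
    "sObject": "FieldDefinition",
    "fields": ["QualifiedApiName", "DataType", "Label"],
    "whereClause": "EntityDefinition.QualifiedApiName = 'ApexClass'",
}
print(field_query_args["sObject"])
```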
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true, covering safety and idempotency. The description adds valuable context by mentioning pagination ('response requires pagination') and specifying that verbose mode returns additional properties, which are behavioral traits not captured in annotations. No contradictions exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by a specific usage guideline in the second. Both sentences earn their place by providing essential information without redundancy, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (listing API objects), rich annotations (covering safety and behavior), and 100% schema coverage, the description is largely complete. It explains the tool's purpose, usage context, and behavioral aspects like pagination and verbose mode. However, without an output schema, it could benefit from more detail on return formats, but annotations and schema compensate well.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters well-documented in the schema. The description does not add any parameter-specific information beyond what the schema provides, such as explaining the format of returned properties or default behaviors for missing parameters. Thus, it meets the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List all objects') and resource ('available through the Salesforce Tooling API and their properties'), distinguishing it from sibling tools like tooling_api_query (which queries specific objects) and tooling_api_search (which searches). The description explicitly differentiates by explaining that for field details, one should use tooling_api_query instead.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: 'Use tooling_api_query with FieldDefinition to get field details.' This clearly indicates that this tool is for listing objects and their properties, while field details require a different tool, helping the agent choose correctly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tooling_api_dmlADestructiveInspect
Create, update, delete, or upsert Salesforce Tooling API records, such as Apex classes, triggers, and custom metadata
| Name | Required | Description | Default |
|---|---|---|---|
| record | No | The record to process, with all relevant fields. For an 'update' operation the record object must include an `Id` property. For 'create' or 'upsert' the record may **not** have an `Id`. Do not use this property for the 'delete' operation | |
| sObject | Yes | API name of the Tooling API object | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| recordId | No | Only used for the 'delete' operation: the ID of the record to delete | |
| operation | Yes | Type of DML operation to perform. Always ask for explicit user approval before executing any of these operations, and do not proceed without it | |
| externalIdField | No | External ID field name. Required for upsert operations, ignored in other cases |
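The `record`, `recordId`, and `externalIdField` rules differ per operation. The sketch below encodes those rules from the parameter table as client-side checks; the IDs and class bodies are invented placeholders:

```python
# Hypothetical payloads for three DML operations against the Tooling API.
create_args = {
    "operation": "create",
    "sObject": "ApexClass",
    "record": {"Name": "HelloWorld", "Body": "public class HelloWorld {}"},
}
update_args = {
    "operation": "update",
    "sObject": "ApexClass",
    "record": {"Id": "01pPLACEHOLDER0", "Body": "public class HelloWorld { /* v2 */ }"},
}
delete_args = {
    "operation": "delete",
    "sObject": "ApexClass",
    "recordId": "01pPLACEHOLDER0",
}

# Mirror the rules from the parameter table as quick pre-flight checks.
assert "Id" in update_args["record"]        # update requires record.Id
assert "Id" not in create_args["record"]    # create must not carry an Id
assert "record" not in delete_args          # delete uses recordId only
print("payloads consistent with parameter rules")
```

Remember that every one of these operations requires explicit user approval before execution, per the `operation` parameter guidance.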
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide key behavioral hints (destructiveHint: true, readOnlyHint: false, etc.), but the description adds valuable context by specifying the types of records handled (e.g., Apex classes, triggers, custom metadata) and the tool's scope (Salesforce Tooling API). It does not contradict annotations, as 'destructiveHint: true' aligns with the described DML operations, and it supplements annotations with practical examples.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core functionality ('Create, update, delete, or upsert') and provides relevant examples without unnecessary details. Every word contributes to understanding the tool's purpose, making it appropriately sized and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (DML operations with 6 parameters, destructive behavior) and rich annotations, the description is reasonably complete. It covers the tool's scope and examples, though without an output schema, it does not explain return values. For a tool with full schema coverage and annotations, it provides adequate context, but could benefit from more usage guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema thoroughly documents all 6 parameters, including their purposes and usage rules (e.g., 'record' requirements per operation). The description does not add significant parameter semantics beyond what the schema provides, such as explaining 'sObject' values or 'externalIdField' details, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Create, update, delete, or upsert') and resources ('Salesforce Tooling API records'), including examples ('Apex classes, triggers, and custom metadata'). It distinguishes itself from siblings like 'tooling_api_describe', 'tooling_api_query', and 'tooling_api_search' by focusing on DML operations rather than querying or describing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for DML operations on Tooling API objects but does not explicitly state when to use this tool versus alternatives like 'sobject_dml' or 'metadata_create/update/delete'. It lacks clear exclusions or prerequisites, though the input schema's 'operation' parameter includes a guideline to 'Always ask for explicit user approval before executing any of these operations'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tooling_api_queryARead-onlyIdempotentInspect
Run a SOQL query against the Salesforce Tooling API to retrieve metadata objects like Apex classes, triggers, custom fields, and field definitions
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | The maximum number of records to return. The default is 200. Note: LIMIT is automatically omitted for aggregate queries without GROUP BY. | |
| fields | Yes | List of fields to retrieve. May include relationship fields and aggregates | |
| groupBy | No | GROUP BY clause for aggregate queries. Required when using aggregate functions with grouping, optional otherwise | |
| orderBy | No | (optional) ORDER BY clause | |
| sObject | Yes | The name of the Tooling API object to query | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| pageSize | No | (optional) Maximum number of records to return per page when the response requires pagination. If omitted, page size is calculated automatically. | |
| whereClause | Yes | WHERE clause. May include conditions on related objects |
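As with the standard SOQL tool, these parameters compose a query string; here it runs against Tooling API objects. A hypothetical payload for finding active Apex triggers on an object (the filter values are invented):

```python
# Hypothetical tooling_api_query arguments.
args = {
    "sObject": "ApexTrigger",
    "fields": ["Name", "Status", "TableEnumOrId"],
    "whereClause": "TableEnumOrId = 'Account' AND Status = 'Active'",
    "orderBy": "Name",
    "limit": 25,
}

# Sketch of the Tooling API SOQL the server would run.
soql = (
    f"SELECT {', '.join(args['fields'])} FROM {args['sObject']} "
    f"WHERE {args['whereClause']} ORDER BY {args['orderBy']} LIMIT {args['limit']}"
)
print(soql)
```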
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare this as read-only, non-destructive, idempotent, and open-world. The description adds useful context about the types of metadata objects retrievable (Apex classes, triggers, etc.), but doesn't provide additional behavioral details like rate limits, authentication requirements, or pagination behavior. With annotations covering the safety profile, this earns a baseline 3.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without any wasted words. Every element serves a purpose: verb, target API, query language, and example metadata types.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (8 parameters, SOQL queries against Tooling API) and lack of output schema, the description provides good context about what types of metadata can be retrieved. However, it doesn't mention response format, pagination behavior, or error handling, which would be helpful for a query tool with no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters are well-documented in the schema itself. The description doesn't add any parameter-specific information beyond what's in the schema. According to scoring rules, when schema coverage is high (>80%), the baseline is 3 even without parameter details in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Run a SOQL query'), target resource ('Salesforce Tooling API'), and scope ('retrieve metadata objects like Apex classes, triggers, custom fields, and field definitions'). It distinguishes this tool from sibling tools like 'soql_query' (likely for standard objects) and 'tooling_api_search' (different query mechanism).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by specifying this is for 'metadata objects' via the Tooling API, which implicitly distinguishes it from standard object queries. However, it doesn't explicitly state when NOT to use this tool or name specific alternatives like 'soql_query' for standard objects or 'tooling_api_search' for search operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tooling_api_searchARead-onlyIdempotentInspect
Run a SOSL search query against the Salesforce Tooling API to find matching metadata objects like Apex classes, triggers, and flows
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | The SOSL query to execute against the Tooling API. Please take into account all available documentation on object types, field names, and limitations for Tooling API SOSL queries | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| pageSize | No | (optional) Maximum number of records to return per page when the response requires pagination. If omitted, page size is calculated automatically. |
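Unlike the SOQL tools, this tool takes a complete SOSL string rather than separate clause parameters. A hypothetical payload (the search term and returned class are invented; the `FIND {term} RETURNING object(fields)` shape is standard SOSL syntax):

```python
# Hypothetical tooling_api_search arguments.
args = {
    "query": "FIND {AccountService} IN ALL FIELDS RETURNING ApexClass(Id, Name)",
    "pageSize": 50,
}
print(args["query"])
```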
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds value by specifying the query type (SOSL) and target (metadata objects), which aren't covered by annotations. It doesn't disclose rate limits or auth needs beyond the optional 'sf_user' parameter, but with annotations, the bar is lower, and it adds useful context without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action and purpose without any wasted words. It's appropriately sized for the tool's complexity, making it easy to understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (searching metadata via SOSL), annotations cover safety and idempotency, and schema coverage is 100%. No output schema exists, but the description specifies the target (metadata objects), which helps infer return values. It's mostly complete, though adding more on response format or limitations could improve it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters ('query', 'sf_user', 'pageSize') with clear descriptions. The description doesn't add meaning beyond the schema, such as SOSL syntax examples or pagination details. Baseline is 3 when schema does the heavy lifting, and the description doesn't compensate further.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Run a SOSL search query'), target resource ('Salesforce Tooling API'), and scope ('to find matching metadata objects like Apex classes, triggers, and flows'). It distinguishes this tool from sibling tools like 'soql_query' and 'tooling_api_query' by specifying it's for SOSL searches against the Tooling API for metadata objects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'metadata objects like Apex classes, triggers, and flows,' which suggests when to use this tool (for searching metadata via SOSL). However, it doesn't explicitly state when to use it versus alternatives like 'soql_query' or 'tooling_api_query,' nor does it provide exclusions or prerequisites. The guidance is present but not explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
user_createAInspect
Create a new Salesforce user. You can clone an existing user by providing the template parameter, or create a new user from scratch by providing the profile and other parameters
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | The email of the user | |
| profile | No | The name or ID of the profile to use for the new user. Not required if template is provided | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| lastName | Yes | The last name of the user | |
| template | No | The name, username, email or ID of an existing user to use as a template for the new user | |
| username | Yes | The username of the user. Must be globally unique across all Salesforce organizations | |
| firstName | Yes | The first name of the user | |
| properties | No | An optional map of additional properties to set on the new user |
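The two creation paths from the description translate into two payload shapes. A sketch with placeholder usernames and emails (all values are invented; `TimeZoneSidKey` is shown only as an example of an additional property):

```python
# Path 1: clone an existing user via `template` -- no profile needed.
from_template = {
    "username": "jdoe@example.com.sandbox",   # must be globally unique
    "email": "jdoe@example.com",
    "firstName": "Jane",
    "lastName": "Doe",
    "template": "existing.user@example.com",
}

# Path 2: create from scratch -- `profile` is required when no template.
from_scratch = {
    "username": "jdoe2@example.com.sandbox",
    "email": "jdoe2@example.com",
    "firstName": "John",
    "lastName": "Doe",
    "profile": "Standard User",
    "properties": {"TimeZoneSidKey": "Europe/Amsterdam"},
}

# Each payload relies on exactly one of template/profile.
assert ("template" in from_template) != ("profile" in from_template)
assert ("template" in from_scratch) != ("profile" in from_scratch)
print("ok")
```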
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a write operation (readOnlyHint: false) and not idempotent (idempotentHint: false). The description adds useful context about the two creation methods but doesn't disclose additional behavioral traits like permission requirements, rate limits, or what happens on duplicate username/email conflicts.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste - the first states the purpose, the second explains the two usage patterns. Perfectly front-loaded and appropriately sized for the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a user creation tool with comprehensive parameter documentation (100% schema coverage) and annotations covering key behavioral aspects, the description provides adequate context. However, without an output schema, it doesn't describe what the tool returns (e.g., user ID, success confirmation), which would be helpful for a creation operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all 8 parameters thoroughly. The description adds marginal value by explaining the relationship between 'template' and 'profile' parameters, but doesn't provide additional semantic context beyond what's in the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Create') and resource ('new Salesforce user'), and distinguishes between two creation methods (cloning vs from scratch). It's specific and immediately tells what the tool does.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use each approach (clone with template vs create from scratch with profile), but doesn't explicitly mention when NOT to use this tool or name specific alternatives among the sibling tools like user_update or user_describe.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
user_describeBRead-onlyIdempotentInspect
Return complete metadata for a Salesforce user
| Name | Required | Description | Default |
|---|---|---|---|
| user | Yes | The name, username, email or ID of the user | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true, covering safety and idempotency. The description adds no behavioral context beyond this, such as rate limits, authentication needs, or what 'complete metadata' entails. It doesn't contradict annotations, but adds minimal value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and wastes no space, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the annotations cover safety and idempotency, and the schema fully describes parameters, the description is minimally adequate. However, with no output schema and no details on what 'complete metadata' includes or potential errors, there are gaps in helping the agent understand the full context of use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are fully documented in the schema. The description doesn't add any semantic details beyond what the schema provides, such as examples or usage notes. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Return') and resource ('complete metadata for a Salesforce user'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'profile_describe' or 'sobject_describe', which might also return metadata about different Salesforce entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'profile_describe' for profile metadata or 'sobject_describe' for object metadata, leaving the agent without context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
user_updateAIdempotentInspect
Update a Salesforce user, including activating, deactivating, freezing, unfreezing, resetting passwords, or modifying user properties
| Name | Required | Description | Default |
|---|---|---|---|
| user | Yes | The id, username, full name or email of the user to update | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| operation | Yes | The type of operation to perform. Can be one of: - `deactivate`: To deactivate a user - `activate`: To activate a user - `reset_password`: To reset the user's password. A password-reset email will be sent to the user's email address - `freeze`: To freeze a user - `unfreeze`: To unfreeze a user - `unlock_password`: To unlock the password for a user - `update`: To modify user properties supplied via `properties` | |
| properties | No | The properties to update when choosing the `update` operation |
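Most operations need only a user identifier; `properties` comes into play only for `update`. A sketch with a placeholder user (the property names shown are illustrative):

```python
# Hypothetical user_update payloads for two operation types.
freeze_args = {"user": "jdoe@example.com", "operation": "freeze"}

update_args = {
    "user": "jdoe@example.com",
    "operation": "update",
    "properties": {"Title": "Sales Manager"},
}

# `properties` is only meaningful when operation is `update`.
assert "properties" not in freeze_args
assert update_args["operation"] == "update" and "properties" in update_args
print("ok")
```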
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide hints (readOnlyHint=false, destructiveHint=false, idempotentHint=true, openWorldHint=true), but the description adds valuable context by listing specific operations (e.g., reset_password sends an email, update modifies properties). It does not contradict annotations and enhances understanding of behavioral traits beyond the structured hints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and lists operations clearly. It avoids redundancy and wastes no words, though it could be slightly more structured by separating operation types for readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (multiple operations, nested objects) and lack of output schema, the description is adequate but has gaps. It covers what the tool does but does not explain return values, error handling, or dependencies. Annotations help, but more context on outcomes would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal semantics by mentioning 'modifying user properties' for the 'update' operation, but does not provide additional details beyond what the schema specifies. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Update') and resource ('Salesforce user'), and specifies the scope of operations including activating, deactivating, freezing, unfreezing, resetting passwords, and modifying properties. It distinguishes itself from sibling tools like 'user_create' and 'user_describe' by focusing on updates rather than creation or description.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for updating Salesforce users but does not explicitly state when to use this tool versus alternatives. It mentions operations like 'update' for modifying properties, but lacks guidance on prerequisites (e.g., permissions), exclusions, or comparisons with other tools like 'user_create' or 'sobject_update'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
value_set_createBInspect
Create a new global value set in Salesforce
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | The name of the value set | |
| sorted | No | Set to true if the values should be sorted | |
| values | Yes | The values for the value set | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| namespace | No | The namespace, if applicable | |
| description | No | The description (optional). Max 255 characters | |
| masterLabel | No | The label (optional). Will default to the name if not specified |
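As a hedged illustration of the parameter table above, the sketch below builds a `value_set_create` call. The argument names come from the table; the envelope follows the standard MCP `tools/call` shape, and all concrete values (the value-set name, labels, and entries) are hypothetical. The schema may also expect richer objects in `values` (e.g. label/fullName pairs); plain strings are used here only for illustration.

```python
import json

# Hypothetical arguments for value_set_create; only "name" and "values"
# are required per the parameter table above.
arguments = {
    "name": "Order_Status",
    "masterLabel": "Order Status",                   # optional; defaults to the name
    "description": "Lifecycle stages for an order",  # optional, max 255 characters
    "sorted": False,
    "values": ["Draft", "Submitted", "Fulfilled", "Cancelled"],
}

# Standard MCP tools/call request wrapping those arguments.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "value_set_create", "arguments": arguments},
}

print(json.dumps(request, indent=2))
```

Omitting `sf_user` routes the call to the current default Salesforce connection, as the table notes.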
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare this is a write operation (readOnlyHint: false) and not destructive (destructiveHint: false). The description adds that it creates something 'new' and specifies 'global' scope, which provides useful context beyond annotations. However, it doesn't mention permissions needed, rate limits, or other behavioral constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that gets straight to the point. There is no wasted language or unnecessary elaboration: it states exactly what the tool does in minimal words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with comprehensive schema documentation but no output schema, the description is adequate but minimal. It identifies the resource being created but doesn't explain what happens after creation, what permissions are required, or how this differs from similar tools. The annotations help but don't fully compensate for the description's brevity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents all 7 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation without providing extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create') and resource ('new global value set in Salesforce'), making the purpose immediately understandable. It doesn't differentiate from sibling tools like 'value_set_update', but it's specific enough to understand what this tool does.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'value_set_update' and 'metadata_create' available, there's no indication of when this specific value set creation tool is appropriate versus other creation or update operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
value_set_update (B) · Idempotent
Update the values in a standard or global value set in Salesforce
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | The name or ID of the value set | |
| type | No | Indicate whether this is a global or standard value set. Not needed if the ID of the value set is provided | |
| values | Yes | The values for the value set | |
| sf_user | No | (optional) Salesforce username to identify the connection to use. Omit this to use the current default connection. | |
| namespace | No | The namespace for a global value set, if applicable |
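To make the `name`/`type` interaction in the table above concrete, the sketch below shows two hypothetical argument sets for `value_set_update`: one addressed by name (where `type` disambiguates standard vs. global value sets) and one addressed by ID (where `type` can be omitted). The ID and value lists are invented for illustration.

```python
import json

# Variant 1: identify the value set by name, so "type" is needed.
by_name = {
    "name": "Order_Status",
    "type": "global",  # distinguishes a global value set from a standard one
    "values": ["Draft", "Submitted", "Fulfilled", "Cancelled", "Returned"],
}

# Variant 2: identify the value set by ID, so "type" is unnecessary.
by_id = {
    "name": "0Nt000000000001AAA",  # hypothetical value-set ID
    "values": ["Draft", "Submitted", "Fulfilled", "Cancelled", "Returned"],
}

request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "value_set_update", "arguments": by_name},
}
print(json.dumps(request, indent=2))
```

Whether the supplied `values` replace or merge with the existing entries is not stated in the description, a gap the quality assessment below also flags.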
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a non-destructive, idempotent, open-world mutation tool (readOnlyHint=false, destructiveHint=false, idempotentHint=true, openWorldHint=true). The description adds minimal behavioral context beyond this: it doesn't explain what 'updating values' entails operationally, whether it replaces or merges values, or any Salesforce-specific constraints. However, it doesn't contradict the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action and target, making it immediately scannable and understandable without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 5 parameters (including complex nested objects in 'values'), no output schema, and multiple sibling tools, the description is inadequate. It doesn't explain what constitutes a successful update, error conditions, Salesforce permissions required, or how this differs from value_set_create. The annotations help but don't compensate for these gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all 5 parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema: it doesn't explain the relationship between 'name' and 'type' parameters, the structure of the 'values' array, or provide examples of valid value sets.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update') and target resource ('values in a standard or global value set in Salesforce'), making the purpose immediately understandable. However, it doesn't differentiate this tool from its sibling 'value_set_create' or other metadata update tools, which would be needed for a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'value_set_create' or 'metadata_update'. There's no mention of prerequisites, constraints, or typical use cases, leaving the agent with insufficient context for appropriate tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
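The claim file can be generated with a short script. In this sketch the path and `$schema` URL come from the instructions above; the email is a placeholder that must be replaced with the address on your Glama account.

```python
import json
from pathlib import Path

# Build the claim document described above.
claim = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],  # replace with your Glama email
}

# Write it to /.well-known/glama.json (relative here; deploy at the domain root).
out_dir = Path(".well-known")
out_dir.mkdir(parents=True, exist_ok=True)
(out_dir / "glama.json").write_text(json.dumps(claim, indent=2) + "\n")
```

The file must be served over your server's domain so Glama can fetch and verify it.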
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.