
Server Details

Salesforce MCP Pack

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-salesforce
GitHub Stars: 0

Tool Descriptions: B

Average 3.7/5 across all 13 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 3/5

Salesforce tools are well-separated by verb_noun pattern (e.g., sf_create_record vs sf_delete_record), but pipeworx tools like ask_pipeworx and discover_tools have overlapping purposes (both involve querying tools, but ask_pipeworx is more general). Memory tools (forget, recall, remember) are distinct but share a 'memory' theme.

Naming Consistency: 4/5

Salesforce tools follow a consistent 'sf_verb_noun' pattern (e.g., sf_create_record, sf_query). Pipeworx and memory tools use different conventions (ask_pipeworx, discover_tools, remember/recall/forget). Overall mostly consistent but with two distinct naming styles.

Tool Count: 5/5

13 tools is appropriate for a server combining a CRM (Salesforce) with a knowledge/query interface (Pipeworx) and memory. Each tool serves a clear purpose, and the count is well-scoped for the domain.

Completeness: 4/5

Salesforce tools cover create, read, update, delete, describe, query, search, and list objects, i.e. comprehensive CRUD. Pipeworx adds discovery and natural-language query. Bulk operations and more advanced Salesforce features may be missing, but core workflows are complete.

Available Tools

13 tools
ask_pipeworx (A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema):
- question (required): Your question or request in natural language
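As a concrete illustration, a call to this tool might be framed as follows. This is a sketch assuming the standard MCP JSON-RPC `tools/call` shape; the question value is taken from the description's own examples.

```python
import json

# Sketch of an MCP "tools/call" request for ask_pipeworx. Only the single
# required "question" argument is supplied.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
print(json.dumps(request, indent=2))
```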
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description transparently explains the tool's behavior: it picks the right tool, fills arguments, and returns the result. Since no annotations are provided, the description carries the full burden and does so well. However, it could mention potential limitations, such as latency or dependency on other tools, for a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, with three sentences that each add value. It front-loads the core purpose and includes examples for clarity. No unnecessary words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one required parameter, no output schema), the description is sufficiently complete. It covers purpose, usage, and behavior. However, it does not mention the return format or potential errors, which would make it fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage for the single parameter 'question', so the baseline is 3. The description adds value by explaining that the question should be in natural language and providing examples, which goes beyond the schema's generic description. This earns a 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: it accepts a plain English question and returns an answer from the best available data source. It distinguishes itself from sibling tools by emphasizing natural language input and automatic tool selection, making its purpose specific and unique.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear guidance on when to use this tool: when you want to ask a question in plain English without needing to browse tools or learn schemas. It includes examples of appropriate queries, but does not explicitly mention when not to use it or suggest alternatives, which would have earned a 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema):
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
- limit (optional): Maximum number of tools to return (default 20, max 50)
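A minimal sketch of building arguments for this tool, clamping `limit` to the documented maximum of 50. The helper function is our own illustration, not part of the server.

```python
# Hypothetical helper that builds discover_tools arguments and enforces
# the documented bounds: default 20, max 50.
def make_discover_args(query: str, limit: int = 20) -> dict:
    return {"query": query, "limit": min(limit, 50)}

args = make_discover_args("find trade data between countries", limit=99)
print(args["limit"])  # clamped to the documented max of 50
```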
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description states that it 'Returns the most relevant tools with names and descriptions,' which adds context beyond the schema. However, no annotations are provided, so the description carries full burden. It could mention that it searches by semantic similarity or that results are ordered by relevance, but it's still clear about the behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, each providing distinct value: what it does, what it returns, and when to use it. No fluff or redundancy. It is front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, no output schema, no nested objects), the description is complete. It covers purpose, return value, and usage context. No output schema exists, so return format isn't expected. The description does not need to elaborate further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage for both parameters ('query' and 'limit'). The description does not need to re-explain them, but it adds context by giving example queries ('analyze housing market trends') and default/max limits. This provides additional meaning beyond the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Search', 'Returns') and a clear resource ('Pipeworx tool catalog'). It explicitly states the tool's purpose: finding relevant tools by describing needs. This distinguishes it from sibling tools like 'ask_pipeworx' (which is for general questions) and tool-specific tools like 'sf_create_record'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This tells the agent when to use the tool (first, when many tools are available) and implies it's not for other tasks. No alternatives are listed, but the 'FIRST' directive is strong enough to guide usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (A)

Delete a stored memory by key.

Parameters (JSON Schema):
- key (required): Memory key to delete
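The argument shape is a single required key; the key value below is hypothetical. Since the description does not say whether deletion is permanent or what happens on a missing key, a caller should treat it as destructive.

```python
# forget takes one required "key". Deletion semantics (permanence,
# behavior on missing keys) are undocumented, so assume it is destructive.
forget_args = {"key": "subject_property"}  # hypothetical key
```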
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It correctly indicates a destructive operation ('Delete'), but lacks details on whether deletion is permanent, reversible, or what happens if the key doesn't exist. It adds some value beyond the schema but is minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that front-loads the action and resource. It is appropriately sized for a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity (one parameter, no output schema), the description is adequate but could mention whether deletion is idempotent or errors on missing keys. It is minimally complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the description does not need to add much. It mentions 'by key' which aligns with the schema's description. The description adds no extra meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete'), the target resource ('stored memory'), and the parameter ('by key'). It distinguishes itself from sibling tools like 'remember' (store) and 'recall' (retrieve).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when a memory needs to be removed, but provides no guidance on when not to use it or alternatives. Sibling tools 'remember' and 'recall' are obvious alternatives, but they are not mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema):
- key (optional): Memory key to retrieve (omit to list all keys)
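The two documented modes can be sketched as two argument shapes; the key value is hypothetical.

```python
# recall's two modes, per the description: fetch one memory by key, or
# omit the key entirely to list all stored keys.
recall_one = {"name": "recall", "arguments": {"key": "target_ticker"}}
recall_all = {"name": "recall", "arguments": {}}  # no key: list all keys
```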
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It clearly states that omitting the key lists all memories, which is a key behavioral trait. It does not state whether the operation is read-only or has other side effects, but given the tool's nature, this is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the action, and no unnecessary words. Every sentence provides essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool is simple (1 optional param, no output schema, no nested objects), the description is complete enough. It explains both modes of operation. However, it could mention that the retrieved memory is returned in some format, but without an output schema, the description adequately covers the behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the only parameter 'key' is described in the schema). The description adds value by explaining the behavior when key is omitted, which goes beyond the schema's 'omit to list all keys' note. However, the schema already covers the basic semantics, so baseline is 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a memory by key or lists all memories when key is omitted. It distinguishes itself by describing the behavior with and without the key parameter, which differentiates it from sibling tools like 'forget' or 'remember'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description says to use this tool to retrieve context saved earlier, which implies when to use it, but does not explicitly state when not to use it or provide alternatives among siblings. No guidance on when to use 'list all' versus 'retrieve by key' is given beyond the schema.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema):
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text: findings, addresses, preferences, notes)
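A sketch of the argument shape, with the key drawn from the schema's own examples and a hypothetical value.

```python
# remember stores one key-value pair in session memory. The key mirrors
# a schema example; the value is hypothetical.
remember_args = {
    "key": "user_preference",
    "value": "prefers quarterly summaries",
}
```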
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries the burden. It discloses that data persists per session, with different retention for authenticated (persistent) vs anonymous (24 hours). It does not mention overwrite behavior, size limits, or any destructive actions, but the description is truthful and adds useful context beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences: first states action, second gives usage guidance, third discloses persistence behavior. No wasted words, information is front-loaded, and each sentence serves a distinct purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple key-value store (no output schema, no nested objects), the description covers purpose, usage, and retention. It lacks details on overwriting or retrieving, but those are covered by siblings 'recall' and 'forget'. For a straightforward tool, this is complete enough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with good descriptions for both 'key' (usage examples) and 'value' (type and examples). The description adds a general purpose but does not explain semantics beyond what the schema provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action: 'Store a key-value pair in your session memory.' It identifies the resource as 'session memory' and specifies the use case: saving findings, preferences, or context. This distinguishes it from siblings like 'recall' (retrieve) and 'forget' (delete).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use this to save intermediate findings, user preferences, or context across tool calls.' It also notes persistence differences for authenticated vs anonymous users, providing context for when the tool is appropriate. However, it does not explicitly state when NOT to use it or mention alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sf_create_record (B)

Create a new Salesforce record. Specify object type (e.g., 'Contact') and field values. Returns the new record ID.

Parameters (JSON Schema):
- object (required): SObject type (e.g., "Account", "Contact")
- fields (required): Field name/value pairs (e.g., {"Name": "Acme", "Industry": "Tech"})
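The argument shape can be sketched directly from the schema's examples:

```python
# sf_create_record arguments, built from the schema's own examples. The
# tool is documented to return the new record ID.
create_args = {
    "object": "Account",
    "fields": {"Name": "Acme", "Industry": "Tech"},
}
```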
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description must disclose behavioral traits. The description indicates a write operation, but does not mention side effects (e.g., triggers, required permissions) or return values. It is minimally acceptable but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three short sentences, no wasted words. Perfectly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description should explain what is returned. It notes that the new record ID is returned, but says nothing about failure modes. For a simple creation tool, the description is adequate but lacks detail on success/failure indicators.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both 'object' and 'fields' described adequately in the schema. The description does not add extra parameter info, but the schema already provides sufficient meaning for the agent.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Create a new Salesforce record' uses a specific verb, 'Create', and a specific resource, 'Salesforce record', clearly stating its action. It does not explicitly contrast itself with sibling tools like 'sf_update_record' or 'sf_delete_record', but the verb alone is sufficient to differentiate.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as 'sf_update_record' or 'sf_get_record'. The description does not mention prerequisites, context, or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sf_delete_record (C)

Delete a Salesforce record by ID. Specify object type and record ID. Returns success status.

Parameters (JSON Schema):
- object (required): SObject type (e.g., "Account")
- id (required): Salesforce record ID
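Because the description does not say whether deletes are reversible, a client-side confirmation guard is a reasonable precaution. The helper below is our own sketch, not part of the tool; the record ID is hypothetical.

```python
# Hypothetical guard around building sf_delete_record arguments: refuse
# to construct the destructive call without an explicit confirmation flag.
def confirmed_delete_args(object_type: str, record_id: str, confirmed: bool) -> dict:
    if not confirmed:
        raise ValueError("refusing to build delete args without confirmation")
    return {"object": object_type, "id": record_id}

args = confirmed_delete_args("Account", "001000000000001AAA", confirmed=True)
```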
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It does not indicate that deletion is irreversible, require confirmation, or specify what happens to related data. The description merely repeats the tool's purpose without adding behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three short, direct sentences that state the tool's purpose. No unnecessary words or information are included.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool performs a destructive action (delete), the description lacks important context such as irreversibility, permission requirements, or return value behavior. The presence of sibling tools also suggests a need for clearer differentiation, which is absent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already fully describes both parameters (id and object) with descriptions, so schema coverage is 100%. The description does not add further meaning beyond what the schema provides, earning a baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action 'Delete' and the resource 'Salesforce record', and the input schema confirms it requires an object type and record ID. It distinguishes itself from sibling tools like 'sf_create_record' and 'sf_update_record' by its delete action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. For example, it does not mention that 'sf_update_record' or 'sf_get_record' might be more appropriate for non-deletion tasks, nor does it warn about the irreversible nature of deletes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sf_describe (A)

Get schema details for a Salesforce object (e.g., 'Account'). Returns field names, types, relationships, and metadata. Use before querying to understand available fields.

Parameters (JSON Schema):
- object (required): SObject type (e.g., "Account")
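The argument shape is a single object type; per the description, the intended flow is to describe first, then query the discovered fields.

```python
# sf_describe takes only the object type; the description recommends
# calling it before querying to learn the available fields.
describe_args = {"object": "Account"}
```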
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It describes what the tool returns ('fields, relationships, metadata') but does not disclose behavior like read-only nature, authentication needs, or whether it makes API calls. However, the read-only intent is clear from 'Describe'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three short sentences, no wasted words. Front-loaded with verb and resource, followed by specifics. Highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a simple input schema (1 param, no output schema) and the context of sibling tools, the description adequately explains the tool's purpose. It could mention that the output is a schema description, but the term 'Describe' implies this. No major gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter 'object' having description 'SObject type (e.g., "Account")'. The description adds 'fields, relationships, metadata' as output context but doesn't add new meaning to the parameter itself. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Describe' and the resource 'Salesforce SObject schema' along with specific elements like 'fields, relationships, metadata'. This distinguishes it from siblings like sf_list_objects (which lists objects) and sf_query (which queries records).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when needing schema information, but lacks explicit when-to-use vs alternatives. Given siblings like sf_list_objects and sf_query, it would benefit from mentioning that this is for schema discovery, not data querying.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sf_get_record (B)

Fetch a single Salesforce record by ID. Specify object type (e.g., 'Account', 'Contact', 'Opportunity') and record ID. Returns all fields.

Parameters (JSON Schema):
- object (required): SObject type (e.g., "Account", "Contact", "Opportunity")
- id (required): Salesforce record ID
- fields (optional): Comma-separated field names
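One subtlety worth illustrating: per the schema, `fields` is a comma-separated string, not a list. The record ID below is hypothetical.

```python
# sf_get_record arguments; note that "fields" is a comma-separated
# string rather than a list. The ID value is hypothetical.
get_args = {
    "object": "Contact",
    "id": "003000000000001AAA",
    "fields": "FirstName,LastName,Email",
}
```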
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description must fully convey behavior. It does not disclose whether this is a read-only operation (though implied by 'Get'), any rate limits, or what happens if the record is not found. It lacks details on error responses or field format expectations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three clear, short sentences that convey the core purpose without unnecessary words. Every word is essential.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and simple parameters, the description is adequate but not thorough. It explains the primary action but omits details like return format (e.g., full record or specified fields) and error handling. For a simple get-by-ID tool, this is minimally complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (all parameters described), so baseline is 3. The description does not add semantic value beyond the schema descriptions; it merely repeats 'object type' and 'ID'. No examples or formatting guidance are provided for the optional 'fields' parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (Get), resource (a single Salesforce record), and identification method (by object type and ID). It distinguishes from siblings like sf_query, sf_search, and sf_list_objects, which return multiple records.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives like sf_query or sf_search. For example, sf_get_record is for retrieving a specific record by ID, while sf_query is for custom SOQL queries. The description does not mention prerequisites (e.g., knowing the record ID) or when to use optional fields.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
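The tool-selection guidance the review asks for can be sketched concretely. Below is a hypothetical Python sketch (the `build_*` helpers, the record ID, and the list-of-strings shape for `fields` are illustrative assumptions, not part of this server's API); it shows the argument shapes an MCP client might send for sf_get_record versus sf_query:

```python
# Hypothetical helpers that build MCP tool-call arguments; a real
# client would pass these dicts over the MCP transport.

def build_get_record_args(object_type, record_id, fields=None):
    """sf_get_record: fetch exactly one record when its ID is known."""
    args = {"object": object_type, "id": record_id}
    if fields is not None:
        args["fields"] = fields  # optional: restrict returned fields
    return args

def build_query_args(soql):
    """sf_query: run arbitrary SOQL when filtering or aggregating."""
    return {"query": soql}

# Known ID -> sf_get_record; criteria-based lookup -> sf_query.
get_args = build_get_record_args(
    "Account", "001xx000003DGbQAAW", fields=["Name", "Industry"]
)
query_args = build_query_args(
    "SELECT Id, Name FROM Account WHERE Industry = 'Technology' LIMIT 10"
)
```

This is the "use X instead of Y when Z" rule the rubric wants stated in the descriptions themselves.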

sf_list_objects (A)

List all SObject types available in your Salesforce org. Returns object names and labels. Use to discover queryable objects.

Parameters (JSON Schema)

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It states the tool lists all SObject types, which is a read-only operation, but does not disclose whether there are any limitations (e.g., only standard objects, or performance considerations). The behavior is straightforward but lacks additional context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three short sentences, concise and front-loaded, containing no unnecessary words. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters, no output schema, and simple behavior, the description is largely complete. It explains that the tool lists all SObject types and returns object names and labels, which is sufficient for an agent to invoke it. Minor omission: the exact return structure is unspecified, but absent an output schema the description covers the return type adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has no parameters and schema description coverage is 100%, so the schema fully defines the parameter space. The description adds no parameter-specific meaning, which is acceptable since there are no parameters to document. A baseline of 3 applies, but the description's clarity about listing 'all' types adds marginal value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists all available Salesforce SObject types in the org, using a specific verb (list) and resource (SObject types). It is distinct from sibling tools like sf_create_record or sf_query, which perform different operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly state when to use this tool versus alternatives, but since it has no parameters and is a simple listing, usage is implied as a first step before using other Salesforce tools. No exclusion criteria or context is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sf_query (B)

Query Salesforce records using SOQL. Returns matching records with all requested fields. Use sf_describe first to learn available fields for your object.

Parameters (JSON Schema)

Name | Required | Description | Default
query | Yes | SOQL query (e.g., "SELECT Id, Name FROM Account LIMIT 10") |
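Because sf_query takes a raw SOQL string, an agent composing queries from user-supplied values must escape string literals itself. A minimal sketch (`soql_quote` is a hypothetical helper, not part of this server; SOQL escapes single quotes and backslashes with a backslash):

```python
def soql_quote(value):
    """Wrap a string as a SOQL literal, escaping backslashes and
    single quotes (both are escaped with a backslash in SOQL)."""
    escaped = value.replace("\\", "\\\\").replace("'", "\\'")
    return "'" + escaped + "'"

name = "O'Reilly Media"
query = f"SELECT Id, Name FROM Account WHERE Name = {soql_quote(name)} LIMIT 10"
# query is: SELECT Id, Name FROM Account WHERE Name = 'O\'Reilly Media' LIMIT 10
```

Guidance like this in the tool description would raise its Usage Guidelines score.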
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It does not explicitly state that the operation is read-only (an agent must infer this from 'query'), nor does it mention potential errors or side effects. It also lacks detail on return format (e.g., single record vs. list) or pagination.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three short, clear sentences with no fluff. All information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single parameter and lack of output schema, the description is adequate but minimal. It does not mention how results are returned (list, single object) or any query limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description for the 'query' parameter including an example. The description adds no additional meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool executes a SOQL query against Salesforce and returns matching records. It distinguishes itself from sibling tools like sf_search (likely a different search type) and sf_describe (metadata retrieval).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives like sf_search or sf_get_record. No mention of limitations (e.g., query row limits, governor limits).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sf_update_record (C)

Update an existing Salesforce record by ID. Specify object type and field values to change. Returns success status.

Parameters (JSON Schema)

Name | Required | Description | Default
id | Yes | Salesforce record ID |
fields | Yes | Field name/value pairs to update |
object | Yes | SObject type (e.g., "Account") |
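The partial-update question the Behavior review raises can be made concrete. A sketch assuming the server follows standard Salesforce REST semantics, where PATCH overwrites only the fields supplied (`build_update_args` is a hypothetical helper, not part of this server):

```python
def build_update_args(object_type, record_id, fields):
    """Arguments for sf_update_record. Assuming Salesforce PATCH
    semantics, only the listed fields are overwritten; omitted
    fields keep their current values."""
    if not fields:
        raise ValueError("fields must contain at least one name/value pair")
    return {"object": object_type, "id": record_id, "fields": fields}

# Changes only Industry; Name, Phone, etc. are untouched.
args = build_update_args(
    "Account", "001xx000003DGbQAAW", {"Industry": "Technology"}
)
```

Stating this overwrite behavior explicitly in the description is exactly what the Behavior dimension penalizes the tool for omitting.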
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the full burden. It does not disclose that updates overwrite existing field values, whether partial updates are supported, or whether any fields are immutable. The behavior is implied but not detailed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three short sentences, efficient and front-loaded. No wasted words, though the description could be slightly more specific without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (3 required params) and lack of output schema, the description is too brief. It lacks context about partial updates, error handling, or field validation, which are important for an update operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so each parameter has a description. The tool description adds no extra meaning beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's action ('Update') and resource ('existing Salesforce record'), and notes that targeting is by ID. It distinguishes itself from sibling tools like sf_create_record and sf_delete_record, though it could be more precise about what the returned 'success status' contains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like sf_get_record (to retrieve before update) or sf_describe (to check field validity). The description does not mention prerequisites such as needing the record ID or field names.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
