Glama

sales-team

Server Details

Public Settro sales MCP tools for missed-call ROI, direct-order recovery, social ordering, and fit.

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: C

Average 2.9/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: readiness assessment, recovery options comparison, loss estimation, and fit summary. The descriptions specify unique actions and criteria, making misselection unlikely.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (e.g., assess_social_ordering_readiness, compare_direct_order_recovery_options). The naming is uniform and predictable across all four tools.

Tool Count: 3/5

With only 4 tools, the set feels thin for a sales team domain, which might involve broader activities like lead management or reporting. However, it is well-scoped for Settro-specific restaurant assessments.

Completeness: 4/5

The tools cover key assessment and analysis aspects for Settro fit, but there are minor gaps such as lack of tools for direct sales actions (e.g., contact management or proposal generation) that might be expected in a sales context.

Available Tools

4 tools
assess_social_ordering_readiness (Grade: B)

Score a restaurant's public social-ordering readiness using Settro's deterministic 100-point rubric.

Parameters (JSON Schema; no descriptions or defaults provided):
- pos_system (required)
- has_facebook_page (required)
- has_instagram_business_account (required)
- publishes_promo_content_weekly (required)
- can_reply_to_messages_within_5_minutes (required)
- wants_direct_orders_without_marketplace_commission (optional)
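For illustration, a call to this tool might carry arguments like the following Python sketch. The parameter names come from the schema above; the values (and the choice of 'Square' as a POS) are invented, and the required-key check mirrors the required column, not any documented server behavior.

```python
# Hypothetical arguments for assess_social_ordering_readiness.
# Names match the schema above; every value here is invented.
arguments = {
    "pos_system": "Square",  # assumed free-text POS name
    "has_facebook_page": True,
    "has_instagram_business_account": True,
    "publishes_promo_content_weekly": False,
    "can_reply_to_messages_within_5_minutes": False,
    # optional flag:
    "wants_direct_orders_without_marketplace_commission": True,
}

# The five required parameters, per the table above.
REQUIRED = {
    "pos_system",
    "has_facebook_page",
    "has_instagram_business_account",
    "publishes_promo_content_weekly",
    "can_reply_to_messages_within_5_minutes",
}

# A client-side sanity check before invoking the tool.
missing = REQUIRED - arguments.keys()
print(sorted(missing))  # → []
```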
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It names the scoring methodology ('deterministic 100-point rubric') but doesn't explain what the score represents, how it's calculated, whether the operation is read-only, or what permissions might be required. For a tool with 6 parameters and no annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that communicates the core purpose without unnecessary words. It's appropriately sized for a scoring tool and front-loads the essential information (action, target, methodology).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 6 parameters (5 required), 0% schema description coverage, no annotations, and no output schema, the description is insufficiently complete. It doesn't explain the scoring output format, how parameters map to the rubric, or what behavioral characteristics (like side effects or permissions) are involved in the assessment process.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage for all 6 parameters, the description provides no information about parameter meanings, relationships, or how they factor into the scoring rubric. The description mentions 'Settro's deterministic 100-point rubric' but doesn't explain how parameters like pos_system or wants_direct_orders_without_marketplace_commission affect the score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
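As a sketch of what non-zero schema coverage could look like, each property in the tool's input schema could carry a JSON Schema `description` field. The fragment below is hypothetical: the property names match the tool, but the description strings and the coverage check are invented for illustration.

```python
# A hypothetical input-schema fragment with per-parameter descriptions.
# Property names match the tool; the description text is invented.
input_schema = {
    "type": "object",
    "properties": {
        "pos_system": {
            "type": "string",
            "description": "Name of the restaurant's point-of-sale system.",
        },
        "has_facebook_page": {
            "type": "boolean",
            "description": "Whether the restaurant has a public Facebook page.",
        },
    },
    "required": ["pos_system", "has_facebook_page"],
}

# Compute description coverage the way the review seems to: the share
# of properties that carry a non-empty description.
covered = [
    name
    for name, prop in input_schema["properties"].items()
    if prop.get("description")
]
coverage = len(covered) / len(input_schema["properties"])
print(f"{coverage:.0%}")  # → 100%
```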

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Score'), target resource ('a restaurant's public social-ordering readiness'), and methodology ('using Settro's deterministic 100-point rubric'). It distinguishes this tool from siblings by focusing on readiness assessment rather than comparison, estimation, or summary generation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus its siblings (compare_direct_order_recovery_options, estimate_missed_call_loss, generate_settro_fit_summary). It doesn't mention prerequisites, alternatives, or exclusions, leaving the agent to infer usage context solely from the tool name and description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_direct_order_recovery_options (Grade: C)

Compare manual callback, marketplace redirect, and Settro direct-order recovery using public workflow criteria.

Parameters (JSON Schema; no descriptions or defaults provided):
- pos_system (required)
- primary_channel (required)
- wants_direct_orders (optional)
- needs_social_dm_ordering (optional)
- avoids_marketplace_commission (optional)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool performs a comparison but doesn't reveal any behavioral traits such as whether it's read-only or mutative, what the output format might be, if there are rate limits, or if it requires authentication. For a tool with 5 parameters and no annotation coverage, this is a significant gap in transparency.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action ('compare') and the three options. There is no wasted language, and it directly conveys the tool's purpose without unnecessary elaboration, making it appropriately concise and well-structured.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (5 parameters with 0% schema coverage, no annotations, and no output schema), the description is incomplete. It fails to explain parameter meanings, behavioral aspects, or output expectations. While it clearly states the comparison purpose, it lacks the necessary context for effective tool invocation in a multi-parameter scenario without structured support.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning none of the 5 parameters have descriptions in the schema. The tool description does not mention any parameters or explain what they mean (e.g., pos_system, primary_channel, or the boolean flags). It only references 'public workflow criteria' vaguely, which doesn't compensate for the lack of parameter documentation, leaving all parameters semantically unclear.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'compare' and specifies the three recovery options being compared (manual callback, marketplace redirect, Settro direct-order recovery), along with the criteria used (public workflow criteria). It distinguishes this comparison tool from sibling tools that assess readiness, estimate loss, or generate summaries. However, it doesn't explicitly mention what resource or domain these recovery options apply to (e.g., order recovery for businesses).

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the sibling tools (assess_social_ordering_readiness, estimate_missed_call_loss, generate_settro_fit_summary). It mentions 'public workflow criteria' but doesn't explain what contexts or scenarios warrant this comparison, nor does it specify prerequisites or exclusions for usage.

estimate_missed_call_loss (Grade: C)

Estimate public missed-call leakage for a restaurant using Settro's public calculator model.

Parameters (JSON Schema; no descriptions or defaults provided):
- pos_system (optional)
- calls_per_week (required)
- average_order_value_usd (optional)
- missed_call_rate_percent (optional)
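Settro's actual calculator model is not documented on this page. As a back-of-envelope illustration only, a leakage estimate built from these three numeric inputs might multiply missed calls by the average ticket; the formula and the default values below are assumptions, not Settro's model.

```python
def estimate_weekly_loss(calls_per_week: int,
                         average_order_value_usd: float = 25.0,
                         missed_call_rate_percent: float = 20.0) -> float:
    """Illustrative estimate only: missed calls times average ticket.

    This is NOT Settro's documented model; the defaults are invented.
    """
    # Convert the percentage into an absolute count of missed calls.
    missed_calls = calls_per_week * missed_call_rate_percent / 100
    # Assume each missed call is one lost order at the average ticket.
    return missed_calls * average_order_value_usd

# 200 calls/week, $30 average order, 15% missed → 30 missed calls.
print(estimate_weekly_loss(200, 30.0, 15.0))  # → 900.0
```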
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosure but lacks behavioral details. It mentions a 'public calculator model' but doesn't disclose whether this is a read-only estimation, whether authentication or rate limits apply, or what the output format might be, leaving key operational traits unclear.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized for a simple tool, though it could be more front-loaded with key usage details to improve structure.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (4 parameters, no output schema, and no annotations), the description is incomplete. It fails to explain parameter roles, output expectations, or behavioral context, making it inadequate for an agent to fully understand how to invoke and interpret results effectively.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate but adds no parameter information. It doesn't explain the meaning of 'calls_per_week', 'pos_system', 'average_order_value_usd', or 'missed_call_rate_percent', leaving all four parameters semantically undocumented beyond the schema's basic constraints.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('estimate') and resource ('public missed-call leakage for a restaurant'), and mentions the specific model ('Settro's public calculator model'). It distinguishes from siblings by focusing on missed-call estimation rather than social ordering or recovery options, though it doesn't explicitly contrast them.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance, only implying usage for restaurants needing to estimate missed-call leakage. It doesn't specify when to use this tool versus alternatives like 'compare_direct_order_recovery_options' or prerequisites such as data availability, leaving the agent with little context for selection.

generate_settro_fit_summary (Grade: C)

Summarize public Settro fit using POS compatibility, missed-call pain, social ordering interest, and direct-order channel needs.

Parameters (JSON Schema; no descriptions or defaults provided):
- pos_system (required)
- needs_text_ordering (required)
- missed_calls_problem (required)
- social_ordering_interest (required)
- wants_month_to_month_pricing (optional)
- needs_instagram_or_facebook_ordering (required)
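How the tool combines these signals into a fit summary is undocumented. A naive sketch, purely for illustration, might count the positive boolean signals; the values and the three-signal threshold below are invented and do not reflect Settro's actual logic.

```python
# Hypothetical boolean inputs to generate_settro_fit_summary.
# Names match the schema above; values and scoring rule are invented.
signals = {
    "needs_text_ordering": True,
    "missed_calls_problem": True,
    "social_ordering_interest": False,
    "needs_instagram_or_facebook_ordering": True,
    "wants_month_to_month_pricing": False,  # optional input
}

# Invented rule of thumb: count positive signals, call 3+ a strong fit.
positives = sum(signals.values())
verdict = "strong fit" if positives >= 3 else "weak fit"
print(positives, verdict)  # → 3 strong fit
```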
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool 'summarizes' based on input criteria, implying a read-only analysis, but doesn't clarify output format, potential side effects, or any behavioral traits like rate limits or authentication needs. For a tool with 6 parameters and no annotations, this lack of detail is a significant gap.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. It lists the key criteria clearly, making it easy to parse. Every part of the sentence contributes directly to understanding the tool's function, with zero waste.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (6 parameters, 0% schema coverage, no annotations, no output schema), the description is incomplete. It outlines what the tool does but lacks details on how it processes inputs, what the summary output looks like, or any behavioral context. For a tool with multiple parameters and sibling tools, more guidance is needed to ensure proper usage and understanding.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description mentions four criteria (POS compatibility, missed-call pain, social ordering interest, and direct-order channel needs) that map to some parameters in the schema (e.g., 'pos_system', 'missed_calls_problem', 'social_ordering_interest', 'needs_text_ordering', 'needs_instagram_or_facebook_ordering'). However, with 0% schema description coverage and 6 parameters, it doesn't fully explain all parameters (like 'wants_month_to_month_pricing') or provide syntax details. It adds some value but doesn't compensate fully for the coverage gap.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to summarize public Settro fit using four specific criteria (POS compatibility, missed-call pain, social ordering interest, and direct-order channel needs). It specifies the verb 'summarize' and the resource 'public Settro fit', making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'assess_social_ordering_readiness' or 'estimate_missed_call_loss', which might cover overlapping aspects.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It lists the criteria used for summarization but doesn't specify scenarios, prerequisites, or exclusions. With sibling tools like 'assess_social_ordering_readiness' and 'compare_direct_order_recovery_options', there's a clear need for differentiation, but the description offers no such context, leaving usage ambiguous.
