sales-team
Server Details
Public Settro sales MCP tools for missed-call ROI, direct-order recovery, social ordering, and Settro fit assessment.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
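The Streamable HTTP transport means clients exchange standard MCP JSON-RPC 2.0 messages over HTTP. As a minimal sketch of the handshake, assuming the standard MCP initialize flow (the client name, version, and protocol revision below are illustrative placeholders, not values from this server):

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "0.1.0" }
  }
}
```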
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 2.9/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose with no overlap: readiness assessment, recovery options comparison, loss estimation, and fit summary. The descriptions specify unique actions and criteria, making misselection unlikely.
All tool names follow a consistent verb_noun pattern (e.g., assess_social_ordering_readiness, compare_direct_order_recovery_options). The naming is uniform and predictable across all four tools.
With only 4 tools, the set feels thin for a sales team domain, which might involve broader activities like lead management or reporting. However, it is well-scoped for Settro-specific restaurant assessments.
The tools cover the key assessment and analysis aspects of Settro fit, but there are minor gaps, such as the lack of tools for direct sales actions (e.g., contact management or proposal generation) that might be expected in a sales context.
Available Tools
4 tools

assess_social_ordering_readiness (Grade: B)
Score a restaurant's public social-ordering readiness using Settro's deterministic 100-point rubric.
| Name | Required | Description | Default |
|---|---|---|---|
| pos_system | Yes | | |
| has_facebook_page | Yes | | |
| has_instagram_business_account | Yes | | |
| publishes_promo_content_weekly | Yes | | |
| can_reply_to_messages_within_5_minutes | Yes | | |
| wants_direct_orders_without_marketplace_commission | No | | |
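For orientation, a tools/call request for this tool might look like the following over MCP JSON-RPC. The argument values are hypothetical, and the value types are guesses, since the schema publishes no parameter descriptions:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "assess_social_ordering_readiness",
    "arguments": {
      "pos_system": "Square",
      "has_facebook_page": true,
      "has_instagram_business_account": true,
      "publishes_promo_content_weekly": false,
      "can_reply_to_messages_within_5_minutes": false,
      "wants_direct_orders_without_marketplace_commission": true
    }
  }
}
```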
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the scoring methodology ('deterministic 100-point rubric') but doesn't explain what the score represents, how it's calculated, whether it's a read-only operation, or what permissions might be required. For a tool with 6 parameters and no annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
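The MCP specification defines optional tool annotations (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) that would close part of this gap. A sketch of what this server could declare, assuming the tool really is a pure, deterministic scoring function; the hint values are assumptions, not published behavior:

```json
{
  "name": "assess_social_ordering_readiness",
  "annotations": {
    "title": "Assess Social Ordering Readiness",
    "readOnlyHint": true,
    "destructiveHint": false,
    "idempotentHint": true,
    "openWorldHint": false
  }
}
```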
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that communicates the core purpose without unnecessary words. It's appropriately sized for a scoring tool and front-loads the essential information (action, target, methodology).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 6 parameters (5 required), 0% schema description coverage, no annotations, and no output schema, the description is insufficient. It doesn't explain the scoring output format, how parameters map to the rubric, or what behavioral characteristics (like side effects or permissions) are involved in the assessment process.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for all 6 parameters, the description provides no information about parameter meanings, relationships, or how they factor into the scoring rubric. The description mentions 'Settro's deterministic 100-point rubric' but doesn't explain how parameters like pos_system or wants_direct_orders_without_marketplace_commission affect the score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Score'), target resource ('a restaurant's public social-ordering readiness'), and methodology ('using Settro's deterministic 100-point rubric'). It distinguishes this tool from siblings by focusing on readiness assessment rather than comparison, estimation, or summary generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus its siblings (compare_direct_order_recovery_options, estimate_missed_call_loss, generate_settro_fit_summary). It doesn't mention prerequisites, alternatives, or exclusions, leaving the agent to infer usage context solely from the tool name and description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_direct_order_recovery_options (Grade: C)
Compare manual callback, marketplace redirect, and Settro direct-order recovery using public workflow criteria.
| Name | Required | Description | Default |
|---|---|---|---|
| pos_system | Yes | | |
| primary_channel | Yes | | |
| wants_direct_orders | No | | |
| needs_social_dm_ordering | No | | |
| avoids_marketplace_commission | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool performs a comparison but doesn't reveal any behavioral traits such as whether it's read-only or mutative, what the output format might be, if there are rate limits, or if it requires authentication. For a tool with 5 parameters and no annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action ('compare') and the three options. There is no wasted language, and it directly conveys the tool's purpose without unnecessary elaboration, making it appropriately concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (5 parameters with 0% schema coverage, no annotations, and no output schema), the description is incomplete. It fails to explain parameter meanings, behavioral aspects, or output expectations. While it clearly states the comparison purpose, it lacks the context an agent needs to invoke a five-parameter tool correctly on the first attempt.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 5 parameters have descriptions in the schema. The tool description does not mention any parameters or explain what they mean (e.g., pos_system, primary_channel, or the boolean flags). It only references 'public workflow criteria' vaguely, which doesn't compensate for the lack of parameter documentation, leaving all parameters semantically unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
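The standard remedy is per-parameter descriptions in the tool's inputSchema. A minimal sketch of what that could look like for three of the five parameters; the example values and wording are invented for illustration:

```json
{
  "type": "object",
  "properties": {
    "pos_system": {
      "type": "string",
      "description": "The restaurant's point-of-sale system, e.g. \"Square\" or \"Toast\"."
    },
    "primary_channel": {
      "type": "string",
      "description": "Where most orders arrive today, e.g. \"phone\", \"marketplace\", or \"walk-in\"."
    },
    "avoids_marketplace_commission": {
      "type": "boolean",
      "description": "True if the restaurant wants to avoid paying marketplace commission on orders."
    }
  },
  "required": ["pos_system", "primary_channel"]
}
```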
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'compare' and specifies the three recovery options being compared (manual callback, marketplace redirect, Settro direct-order recovery), along with the criteria used (public workflow criteria). It distinguishes this comparison tool from sibling tools that assess readiness, estimate loss, or generate summaries. However, it doesn't explicitly mention what resource or domain these recovery options apply to (e.g., order recovery for businesses).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus the sibling tools (assess_social_ordering_readiness, estimate_missed_call_loss, generate_settro_fit_summary). It mentions 'public workflow criteria' but doesn't explain what contexts or scenarios warrant this comparison, nor does it specify prerequisites or exclusions for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
estimate_missed_call_loss (Grade: C)
Estimate public missed-call leakage for a restaurant using Settro's public calculator model.
| Name | Required | Description | Default |
|---|---|---|---|
| pos_system | No | | |
| calls_per_week | Yes | | |
| average_order_value_usd | No | | |
| missed_call_rate_percent | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure, but it offers little. It mentions using a 'public calculator model' but doesn't disclose whether this is a read-only estimation, requires authentication, has rate limits, or what the output format might be, leaving key operational traits unclear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized for a simple tool, though it could be more front-loaded with key usage details to improve structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (4 parameters, no output schema, and no annotations), the description is incomplete. It fails to explain parameter roles, output expectations, or behavioral context, making it inadequate for an agent to fully understand how to invoke and interpret results effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
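Because no output schema is declared, the result can only arrive as a free-form MCP content array. A plausible response shape under the standard tools/call result format; the dollar figure and wording are invented, since the server does not document its output:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Estimated weekly missed-call loss: $312"
      }
    ]
  }
}
```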
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate but adds no parameter information. It doesn't explain the meaning of 'calls_per_week', 'pos_system', 'average_order_value_usd', or 'missed_call_rate_percent', leaving all four parameters semantically undocumented beyond the schema's basic constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('estimate') and resource ('public missed-call leakage for a restaurant'), and mentions the specific model ('Settro's public calculator model'). It distinguishes from siblings by focusing on missed-call estimation rather than social ordering or recovery options, though it doesn't explicitly contrast them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance, only implying usage for restaurants needing to estimate missed-call leakage. It doesn't specify when to use this tool versus alternatives like 'compare_direct_order_recovery_options' or prerequisites such as data availability, leaving the agent with little context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_settro_fit_summary (Grade: C)
Summarize public Settro fit using POS compatibility, missed-call pain, social ordering interest, and direct-order channel needs.
| Name | Required | Description | Default |
|---|---|---|---|
| pos_system | Yes | | |
| needs_text_ordering | Yes | | |
| missed_calls_problem | Yes | | |
| social_ordering_interest | Yes | | |
| wants_month_to_month_pricing | No | | |
| needs_instagram_or_facebook_ordering | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool 'summarizes' based on input criteria, implying a read-only analysis, but doesn't clarify output format, potential side effects, or any behavioral traits like rate limits or authentication needs. For a tool with 6 parameters and no annotations, this lack of detail is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. It lists the key criteria clearly, making it easy to parse. Every part of the sentence contributes directly to understanding the tool's function, with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (6 parameters, 0% schema coverage, no annotations, no output schema), the description is incomplete. It outlines what the tool does but lacks details on how it processes inputs, what the summary output looks like, or any behavioral context. For a tool with multiple parameters and sibling tools, more guidance is needed to ensure proper usage and understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
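Recent MCP revisions (2025-06-18) let a tool declare an outputSchema so agents can validate and parse structured results. A sketch of what this tool could publish, assuming it returns a verdict, a score, and a prose summary; all field names are hypothetical:

```json
{
  "name": "generate_settro_fit_summary",
  "outputSchema": {
    "type": "object",
    "properties": {
      "fit_verdict": { "type": "string" },
      "fit_score": { "type": "number" },
      "summary": { "type": "string" }
    },
    "required": ["fit_verdict", "summary"]
  }
}
```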
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions four criteria (POS compatibility, missed-call pain, social ordering interest, and direct-order channel needs) that map to some parameters in the schema (e.g., 'pos_system', 'missed_calls_problem', 'social_ordering_interest', 'needs_text_ordering', 'needs_instagram_or_facebook_ordering'). However, with 0% schema description coverage and 6 parameters, it doesn't fully explain all parameters (like 'wants_month_to_month_pricing') or provide syntax details. It adds some value but doesn't compensate fully for the coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to summarize public Settro fit using four specific criteria (POS compatibility, missed-call pain, social ordering interest, and direct-order channel needs). It specifies the verb 'summarize' and the resource 'public Settro fit', making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'assess_social_ordering_readiness' or 'estimate_missed_call_loss', which might cover overlapping aspects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It lists the criteria used for summarization but doesn't specify scenarios, prerequisites, or exclusions. With sibling tools like 'assess_social_ordering_readiness' and 'compare_direct_order_recovery_options', there's a clear need for differentiation, but the description offers no such context, leaving usage ambiguous.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming this connector allows you to:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.