
QueueSim

Server Details

Run M/M/c queue simulations and four preset scenarios (call center, ER, coffee shop, single server).

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Tool Descriptions: A

Average 4.5/5 across 6 of 6 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool serves a clearly distinct purpose: listing vs describing scenarios, basic vs advanced educational content, and generic vs scenario-specific simulation. The descriptions explicitly guide when to use each, eliminating ambiguity.

Naming Consistency: 5/5

All tool names follow the consistent verb_noun pattern in snake_case (e.g., describe_scenario, simulate_mmc). The verbs are appropriately varied (describe, explain, list, simulate) and clearly indicate action.

Tool Count: 5/5

With 6 tools, the server is well-scoped for queue simulation education and execution. It provides scenario exploration, theoretical explanation, and both generic and preset simulation without unnecessary bloat.

Completeness: 4/5

The set covers the full workflow: discover scenarios, understand theory, run simulations. Minor gaps like scenario comparison or result export are not critical for the intended use, and custom modeling is delegated externally.

Available Tools

9 tools
compare_scenarios (A)
Read-only

Run two M/M/c configurations and return their summaries side-by-side with a delta object. Use this for clean before/after comparisons — 'what does adding 1 server do?' / 'how does the wait change if service speeds up?'. Eliminates the LLM-side pattern of calling simulate_mmc twice and computing the delta inline; one call returns both runs and the deltas already calculated. Provide scenarioA and scenarioB as MMC inputs (same shape as simulate_mmc); optionally include human labels for each so the response echoes them back.

Parameters
Name | Required | Description
labelA | No | Optional human label for scenario A (e.g. 'current', '3 agents'). Echoed back in the response.
labelB | No | Optional human label for scenario B.
scenarioA | Yes | MMC config A. Same shape as simulate_mmc inputs.
scenarioB | Yes | MMC config B.
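The "run both, diff the summaries" pattern this tool promises can be sketched with an analytic stand-in. Below, a hypothetical `mmc_summary` uses the Erlang-C formula for the steady-state wait a long M/M/c run converges to, and `compare` mirrors the side-by-side-plus-delta response shape; every name and the response layout are illustrative, not the server's actual API.

```python
from math import factorial

def mmc_summary(arrival_rate, service_rate, servers):
    """Steady-state M/M/c summary via Erlang C (illustrative stand-in
    for one simulate_mmc run)."""
    a = arrival_rate / service_rate              # offered load (Erlangs)
    rho = a / servers                            # utilization, must be < 1
    if rho >= 1:
        raise ValueError("unstable: arrivals exceed total service capacity")
    head = sum(a**k / factorial(k) for k in range(servers))
    tail = a**servers / factorial(servers)
    p_wait = tail / ((1 - rho) * head + tail)    # probability of queueing
    wq_min = p_wait / (servers * service_rate - arrival_rate) * 60
    return {"utilization": rho, "avgWaitMinutes": wq_min}

def compare(scenario_a, scenario_b):
    """Mirror compare_scenarios: both summaries plus a delta object."""
    run_a = mmc_summary(**scenario_a)
    run_b = mmc_summary(**scenario_b)
    delta = {k: run_b[k] - run_a[k] for k in run_a}
    return {"a": run_a, "b": run_b, "delta": delta}

# 'what does adding 1 server do?'
result = compare(
    {"arrival_rate": 20, "service_rate": 8, "servers": 3},
    {"arrival_rate": 20, "service_rate": 8, "servers": 4},
)
```

With these example rates, the delta shows the wait dropping and utilization falling when the fourth server is added.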
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds behavioral context beyond annotations: it states the tool returns both runs and pre-computed deltas. Annotations already indicate readOnlyHint=true, and the description aligns with that. It does not fully detail the output structure, but the key behavioral aspects are covered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph that efficiently conveys purpose, usage guidance, benefit, and parameter hints. It is front-loaded with the core action. Could be slightly more structured, but no unnecessary sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (nested objects, no output schema) and the presence of sibling tools like simulate_mmc, the description provides sufficient context: what the tool does, when to use it, and how inputs relate to a known tool. It does not detail the exact output, but the delta and summary are mentioned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the description adds semantic value by explaining that scenarioA and scenarioB have the same shape as simulate_mmc inputs and that labels are optional and echoed back. The description reinforces parameter purpose without duplicating schema details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Run two M/M/c configurations and return their summaries side-by-side with a delta object.' It uses specific verbs and identifies the resource (M/M/c configurations). It distinguishes from siblings like simulate_mmc by emphasizing the comparison feature.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides usage context with examples ('what does adding 1 server do?') and explains the benefit over calling simulate_mmc twice. It does not include explicit 'when not to use' guidance, but the provided context is sufficient for typical use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

describe_scenario (A)
Read-only, Idempotent

Return full details for one preset scenario: title, description, teaching note, peak parameters, and per-hour arrival + staffing arrays. Use this before simulate_scenario to understand the default shape and what overrides make sense.

Parameters
Name | Required | Description
name | Yes | Scenario key from list_scenarios.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds value by detailing the returned data fields, consistent with the read-only nature, and does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two succinct sentences: first lists return contents, second provides usage guidance. No redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one enum parameter, no output schema), the description fully covers purpose, return data, and usage context, making it complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear enum description. The description does not add extra meaning for the single parameter beyond what the schema provides, hence a baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states it returns full details for one preset scenario, listing specific components (title, description, teaching note, peak parameters, arrays), and distinguishes from sibling tools like simulate_scenario and list_scenarios.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises 'Use this before simulate_scenario to understand the default shape', providing clear context for when to use, though it does not exclude other sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

explain_advanced_patterns (A)
Read-only, Idempotent

Return a textbook-level description of six queueing complexity patterns beyond basic M/M/c: abandonment/reneging, priority tiers, overflow routing, skills-based routing, compound service, and server outages. Use this when the user describes real-world complexity (customers hanging up, VIP queues, specialist escalation, agent breaks, transfers) that plain M/M/c doesn't model. The tool frames each pattern conceptually and points users at ChiAha for custom modeling.

Parameters

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint and idempotentHint, so the tool is safe. Description adds that it is conceptual (not a simulation) and references ChiAha for custom modeling, providing behavioral context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences efficiently convey the tool's action, patterns covered, and usage context. Front-loaded with verb and resource, no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters and no output schema, the description fully covers what the tool does, when to use it, and its output (conceptual descriptions with pointers). No gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, schema coverage is 100%, so the description does not need to add parameter info. Baseline for no parameters is 4, as there is no missing information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool returns textbook-level descriptions of six advanced queueing patterns (e.g., abandonment, priority tiers) beyond basic M/M/c. It differentiates from siblings by specifying it covers real-world complexity not modeled by plain M/M/c.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'when the user describes real-world complexity that plain M/M/c doesn't model' and provides concrete examples. Points users to ChiAha for custom modeling, implying when not to use. Could be more explicit about alternatives, but good enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

explain_queueing_theory (A)
Read-only, Idempotent

Return a ~500-word educational explainer of M/M/c queueing theory: Little's Law, utilization, why averages mislead, how simulation relates to Erlang-C. No inputs. Use this when the user asks a conceptual 'why' or 'how does this work' question rather than asking for a number.

Parameters

No parameters
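The relationships the explainer covers are compact enough to verify numerically. A minimal sketch of Little's Law and utilization, using made-up example rates:

```python
# Little's Law, L = lambda * W: the average number in the system equals
# the arrival rate times the average time in the system (units must agree).
arrival_rate = 20.0          # lambda: customers per hour
avg_time_in_system = 0.25    # W: hours (15 minutes)
avg_in_system = arrival_rate * avg_time_in_system   # L, in customers

# Utilization for c servers, each finishing mu customers per hour:
service_rate, servers = 8.0, 3
rho = arrival_rate / (servers * service_rate)       # rho = 20 / 24

# Averages mislead near saturation: waits grow nonlinearly as rho -> 1,
# which is why the explainer pairs Little's Law with simulation/Erlang-C.
```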

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare it read-only and non-destructive. Description adds value by specifying output is ~500-word educational text and listing content topics, which enriches the behavioral understanding beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences: content summary, input absence, usage guidance. Every sentence adds value with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity and minimal schema/annotations, the description fully covers purpose, content, usage, and input expectations. No gaps remain for an agent to misuse.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist; baseline for 0-param tools is 4. Description correctly states 'No inputs,' matching schema and providing no need for additional param detail.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it returns an educational explainer, lists specific topics (Little's Law, utilization, etc.), and distinguishes from siblings like simulate_mmc which are for numerical queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use this when the user asks a conceptual why or how does this work question rather than asking for a number,' providing clear when-to-use and when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

interpret_result (A)
Read-only, Idempotent

Given an M/M/c configuration (arrivalRate, serviceRate, servers) and optionally an observed average wait, returns a queueing-theory framed interpretation: where you sit on the utilization curve, what ρ means in plain language, what one more or fewer server would qualitatively do, and which complexity factors (priority, abandonment, skills routing) might be hiding in real data the M/M/c model can't see. Use this to TEACH while answering — when the user wants context around a number, not just the number itself. Pure text computation, no simulation, no RNG — deterministic output.

Parameters
NameRequiredDescriptionDefault
serversYesServer count (c).
arrivalRateYesMean arrivals per hour (λ).
serviceRateYesMean customers one server finishes per hour (μ).
observedAvgWaitMinutesNoOptional. The avg wait the user observed (from simulate_mmc, an Erlang-C calculator, or real measurements). If omitted, the tool computes ρ from the inputs and gives a parameter-only interpretation.
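A plausible sketch of the parameter-only interpretation path: compute ρ = λ/(c·μ) and frame it qualitatively. Only the formula comes from standard queueing theory; the utilization bands, wording, and function name below are invented for illustration.

```python
def interpret(arrival_rate, service_rate, servers, observed_avg_wait_minutes=None):
    """Illustrative sketch of a deterministic interpretation: compute
    rho = lambda / (c * mu) and describe it in plain language. The
    thresholds and phrasing are example choices, not the tool's own."""
    rho = arrival_rate / (servers * service_rate)
    if rho >= 1:
        band = "overloaded: the queue grows without bound"
    elif rho > 0.9:
        band = "critical zone: small demand shifts cause large wait swings"
    elif rho > 0.7:
        band = "busy but stable: waits are sensitive to variability"
    else:
        band = "comfortable: servers are often idle"
    note = f"utilization rho = {rho:.2f} ({band})"
    if observed_avg_wait_minutes is not None:
        note += f"; observed average wait: {observed_avg_wait_minutes:.1f} min"
    return note
```

Like the tool, this is pure text computation: no simulation, no randomness, deterministic output for given inputs.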
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint, idempotentHint, destructiveHint. Description adds beyond: 'Pure text computation, deterministic output, no simulation, no RNG'. Consistently supplements annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One dense paragraph with front-loaded purpose. Some repetition in listing returns, but overall efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description thoroughly lists what is returned (utilization interpretation, plain-language ρ, server count impact, hidden complexity factors). Also covers optional parameter use-case. Complete for a teaching tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers all parameters (100% coverage). Description adds value by explaining optional parameter behavior (observedAvgWaitMinutes) and how omission leads to ρ-only interpretation. Not redundant.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool interprets M/M/c configurations with queueing-theory context. Distinguishes from siblings by emphasizing teaching over raw numbers and specifying no simulation, no RNG.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly guides to use this for teaching context around numbers. Implicitly contrasts with simulation tools (simulate_mmc) by stating 'no simulation'. Could name alternatives directly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_scenarios (A)
Read-only, Idempotent

List the four pre-built QueueSim scenarios. Returns key, title, and one-line description for each (Single Server, Coffee Shop, ER Waiting Room, Call Center). Call this when the user's problem matches one of the preset shapes — use describe_scenario for more detail and simulate_scenario to run one.

Parameters

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, destructiveHint. Description adds specific return fields and lists the four scenarios, providing context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, followed by usage guidance. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, description fully explains what is returned and lists all four scenarios. Complete for a simple list tool with no parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters in schema, so baseline is 4. Description adds value by clarifying the return structure, though not needed for param semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it lists four pre-built scenarios and specifies what is returned (key, title, description). Distinguishes from siblings like describe_scenario and simulate_scenario.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells when to use (when user's problem matches a preset shape) and provides alternatives (describe_scenario, simulate_scenario).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recommend_staffing (A)
Read-only

INVERSE of simulate_mmc — given an arrival rate, service rate, and a target average wait time, returns the SMALLEST number of servers needed to meet the target. Use this when the user asks 'how many servers do I need?' / 'what staffing keeps wait under N minutes?'. The tool runs a binary search over candidate server counts (up to maxServers, default 50), invoking the simulator for each candidate. Saves Claude from iterating simulate_mmc 3-5 times by hand. If even maxServers servers can't meet the target, the recommendation is null and the response includes the achieved wait so Claude can explain that the target is infeasible at the given load.

Parameters
Name | Required | Description
maxServers | No | Search ceiling (default 50, max 50). If even this many servers can't meet the target, the tool returns null with the achieved wait.
serviceCoV | No |
arrivalRate | Yes | Mean arrivals per hour (λ).
serviceRate | Yes | Mean customers one server can finish per hour (μ). Must be > 0.
simulationDays | No | Days to simulate per candidate (default 7). Lower = faster search; higher = less seed-to-seed variance.
arrivalDistribution | No |
serviceDistribution | No |
targetAvgWaitMinutes | Yes | Maximum average wait time you're willing to accept, in minutes. The tool returns the smallest server count that meets this target.
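The binary search the description mentions works because average wait shrinks monotonically as servers are added. A sketch under that assumption, substituting the Erlang-C formula for the stochastic simulator (the function names are hypothetical, not the server's API):

```python
from math import factorial

def avg_wait_minutes(arrival_rate, service_rate, servers):
    """Erlang-C steady-state wait, standing in for one simulator run."""
    a = arrival_rate / service_rate
    if a / servers >= 1:
        return float("inf")              # unstable: no finite average wait
    head = sum(a**k / factorial(k) for k in range(servers))
    tail = a**servers / factorial(servers)
    p_wait = tail / ((1 - a / servers) * head + tail)
    return p_wait / (servers * service_rate - arrival_rate) * 60

def recommend(arrival_rate, service_rate, target_minutes, max_servers=50):
    """Smallest c in [1, max_servers] meeting the target, else None.
    Binary search is valid because the wait decreases as c grows."""
    lo, hi, best = 1, max_servers, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if avg_wait_minutes(arrival_rate, service_rate, mid) <= target_minutes:
            best, hi = mid, mid - 1      # feasible: try fewer servers
        else:
            lo = mid + 1                 # infeasible: need more servers
    return best

# 'what staffing keeps wait under 2 minutes?' at 20/hr arrivals, 8/hr service
servers_needed = recommend(20, 8, target_minutes=2)
```

Returning `None` when even `max_servers` can't meet the target mirrors the tool's documented infeasibility behavior.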
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Describes internal binary search algorithm, simulation invocation, and handling of infeasible targets. Annotations (readOnlyHint=true) are consistent with description. Adds valuable context beyond structured fields, such as saving Claude from manual iteration.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is a single well-organized paragraph but somewhat lengthy. Front-loaded with purpose, then usage, then internal details. Efficient for the amount of information provided.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers purpose, usage, and fallback. Lacks explicit mention of return format (though no output schema exists) and stochastic nature of simulation beyond seed-to-seed variance mentioned in simulationDays parameter. Adequate for a recommendation tool with this complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Description adds overall context (binary search, default max 50 servers) but does not provide additional meaning for parameters like serviceCoV, arrivalDistribution, or serviceDistribution beyond what the input schema offers. Schema coverage is 63%, so some parameters remain undocumented in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it returns the smallest number of servers needed to meet a target wait time, given arrival rate, service rate, and target. Explicitly positions as the inverse of simulate_mmc and provides example user queries. Distinguishes itself well from siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: when user asks about required staffing or keeping wait under a threshold. Also explains fallback when target is infeasible (returns null with achieved wait). Provides clear guidance on when not to use (use simulate_mmc instead for forward simulation).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

simulate_mmc (A)
Read-only

Run a generic M/M/c queue simulation. Provide an arrival rate (λ, arrivals/hour), a service rate per server (μ, customers/hour each server can finish), and a server count (c). Optional: distribution shapes, service coefficient of variation, run length. Returns per-hour metrics and an overall summary (avg wait, queue length, offered load, throughput). This is the primary tool for 'how many servers do I need?' / 'what's my average wait?' style questions. ALSO preferred over simulate_scenario for what-if questions about scheduled scenarios (Coffee Shop, ER) when the user wants flat uniform numbers — pull the peak params from describe_scenario and run them here. That usually matches user intent better than collapsing a schedule.

Parameters
Name | Required | Description
servers | Yes | Number of parallel servers (c). Integer 1-50.
serviceCoV | No | Coefficient of variation for service time — used when serviceDistribution is 'Normal' or 'LogNormal'. Ignored for Exponential/Constant. Range 0-5.
arrivalRate | Yes | Mean arrivals per hour (λ). Any positive value up to 200.
serviceRate | Yes | Mean customers one server can finish per hour (μ). Must be > 0.
simulationDays | No | Days to simulate (default 7). Range 1-365.
arrivalDistribution | No | Shape of inter-arrival times. 'Exponential' = Poisson process (default). 'Constant' = evenly-spaced.
serviceDistribution | No | Shape of service-time distribution. 'Exponential' = classical M/M/c (default).
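The server's simulation engine is not published here, but a toy event-driven M/M/c loop conveys what such a run does: exponential inter-arrivals and service times, FIFO dispatch to the earliest-free server. A minimal sketch, with all names and defaults chosen for this example:

```python
import heapq
import random

def simulate_mmc(arrival_rate, service_rate, servers, hours=168, seed=0):
    """Toy FIFO M/M/c run (illustrative, not the server's engine):
    exponential inter-arrivals and service times, c identical servers."""
    rng = random.Random(seed)
    free_at = [0.0] * servers                 # min-heap: when each server frees up
    t, total_wait, n = 0.0, 0.0, 0
    while True:
        t += rng.expovariate(arrival_rate)    # next arrival time (hours)
        if t > hours:
            break
        start = max(t, free_at[0])            # wait for the earliest-free server
        total_wait += start - t
        heapq.heapreplace(free_at, start + rng.expovariate(service_rate))
        n += 1
    return {"customers": n, "avgWaitMinutes": total_wait / n * 60 if n else 0.0}

# lambda = 20/hr, mu = 8/hr per server, c = 3; Erlang C predicts roughly
# a 10-minute average wait at this load, so a long run should land nearby.
summary = simulate_mmc(arrival_rate=20, service_rate=8, servers=3)
```

A fixed seed makes the sample reproducible, but different seeds still vary run to run, which is why averaging over more simulated days tightens the estimate.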
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds value by detailing returned metrics: per-hour metrics and an overall summary (avg wait, queue length, offered load, throughput). No contradictions; the description enriches the behavioral context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is relatively long but efficiently front-loaded with core purpose and parameters. It lists returns and includes usage guidelines. Every sentence serves a purpose, though it could be slightly more concise without losing clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, no output schema), the description covers all essential aspects: purpose, core and optional parameters, return values, and usage guidance relative to siblings. It provides sufficient context for an agent to invoke the tool correctly, even without an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description summarizes parameters (arrival rate λ, service rate μ, server count c, optional distributions, CoV, run length) and reinforces units (per hour), but does not add significant new meaning beyond the schema's individual descriptions. It provides a helpful overview but does not exceed baseline expectations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description starts with 'Run a generic M/M/c queue simulation' and gives specific use cases ('how many servers do I need?', 'what's my average wait?'). It distinguishes from sibling simulate_scenario by stating it's preferred for flat uniform numbers. This provides clear verb+resource+scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states this is the primary tool for capacity and wait-time questions, and that it is preferred over simulate_scenario for scheduled scenarios when the user wants flat uniform numbers. It also advises to pull peak params from describe_scenario and run them here, providing clear when-to-use and when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

simulate_scenario (A)
Read-only

Run one of the four preset scenarios (single, coffee, er, callcenter) with optional overrides. Overrides apply UNIFORMLY across open hours — e.g. setting servers=5 on 'coffee' replaces the 4/6/4 staffing pattern with a flat 5 during open hours (closed hours stay at zero). Use this for (a) faithful reproduction of a scenario's defaults, or (b) uniform scaling (everywhere it was open, use these new numbers). Do NOT use this when the user wants to keep a scheduled scenario's shape but tweak just one part — there's no per-hour override here, and collapsing a 4/6/4 pattern to 5 often isn't what the user meant. For flat what-if analysis on scheduled scenarios, prefer simulate_mmc using peak params from describe_scenario.

Parameters
Name | Required | Description
name | Yes | Scenario key from list_scenarios.
overrides | No | Optional overrides applied uniformly across open hours (closed hours preserved at zero for scheduled scenarios). All fields optional.
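The uniform-override rule ("flat value in every open hour, closed hours stay zero") fits in a few lines. The staffing pattern and helper below are illustrative, not the server's code:

```python
def apply_override(staffing_by_hour, servers=None):
    """Hypothetical helper showing the documented rule: a flat override
    replaces staffing in every open hour; closed hours (0) stay closed."""
    if servers is None:
        return list(staffing_by_hour)
    return [servers if current > 0 else 0 for current in staffing_by_hour]

# A 'coffee'-style day: closed overnight, a 4/6/4 pattern while open.
coffee = [0] * 6 + [4, 4, 6, 6, 6, 4, 4, 4, 6, 6, 4, 4, 4] + [0] * 5
flat5 = apply_override(coffee, servers=5)   # open hours become 5, closed stay 0
```

This is also why the description warns against using overrides to tweak one part of a schedule: the whole 4/6/4 shape collapses to a single flat level.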
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses a key behavioral trait: overrides apply uniformly across open hours, with closed hours staying at zero. This adds context beyond annotations (readOnlyHint=true is consistent). However, it does not describe the output or side effects, though the read-only nature is annotated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph of moderate length but each sentence adds value. It is well-structured: purpose first, then behavioral detail, then usage guidance. Slightly verbose but not excessive; could be tightened.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers purpose and usage well but is missing return value information (no output schema exists). The agent might need to know what the tool returns. Given the complexity (2 params, no output schema), the description should mention output format or that it returns simulation results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents parameters. The description adds valuable meaning beyond the schema: explains the uniform application of overrides with an example ('servers=5 replaces 4/6/4 pattern with flat 5'). This helps the agent understand semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the exact action ('Run one of the four preset scenarios') with optional overrides, distinguishes from siblings by mentioning 'simulate_mmc' as an alternative for flat what-if analysis, and clearly identifies the resource (preset scenarios). The verb 'run' and noun 'scenario' are specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('faithful reproduction' or 'uniform scaling') and when not to use ('per-hour override'), with a direct alternative: 'For flat what-if analysis on scheduled scenarios, prefer simulate_mmc using peak params from describe_scenario.' This provides clear guidance for the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
