calculator
Server Details
Calculators accessible via MCP with real-time collaborative sessions and shareable URLs.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.1/5 across all 10 tools scored.
Multiple tools have overlapping or ambiguous purposes. 'calculate', 'calculate_cas', and 'calculate_cas_headless' all perform calculations with subtle distinctions that could confuse an agent. 'calculate_cas' and 'calculate_cas_headless' are explicitly described as aliases, making them functionally identical and redundant in the tool set. This overlap creates clear ambiguity in tool selection.
The naming is mostly consistent with a verb_noun pattern, such as 'list_calculators', 'create_session', and 'get_session_state'. However, there are minor deviations like 'calculate' (verb only) and 'generate_prefilled_url' (verb_adjective_noun), which slightly break the pattern. Overall, the naming is readable and follows a predictable convention with only a few inconsistencies.
With 10 tools, the count is well-scoped for a calculator server that handles both basic calculations and interactive sessions. Each tool appears to serve a distinct functional role, such as listing calculators, managing sessions, and performing computations, making the number appropriate and not excessive or insufficient for the domain.
The tool set covers core calculator functionalities well, including calculation execution, session management (create, close, get state, push actions), and utility features like URL generation. A minor gap is the lack of a tool for deleting or managing persisted session snapshots beyond 'close_session', but agents can likely work around this with existing tools for most workflows.
Available Tools
10 tools

calculate (grade C)
Run a calculation and get results + prefilled URL
| Name | Required | Description | Default |
|---|---|---|---|
| inputs | Yes | Calculator input values | |
| strict | No | If true, reject invalid or unknown input fields instead of dropping them | |
| calculator | Yes | Calculator slug | |
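A minimal sketch of how an agent might assemble the arguments for this tool, based only on the parameter table above. The calculator slug `compound-interest` and the input field names (`principal`, `rate`) are hypothetical; only the parameter names `calculator`, `inputs`, and `strict` come from the schema.

```python
import json

def build_calculate_request(calculator: str, inputs: dict, strict: bool = False) -> dict:
    """Assemble the arguments object for a `calculate` tool call."""
    args = {"calculator": calculator, "inputs": inputs}
    if strict:
        # With strict=True the server rejects invalid or unknown input
        # fields instead of silently dropping them.
        args["strict"] = True
    return {"name": "calculate", "arguments": args}

request = build_calculate_request(
    "compound-interest",                # hypothetical slug
    {"principal": 1000, "rate": 0.05},  # hypothetical input field names
    strict=True,
)
print(json.dumps(request, indent=2))
```

Because no output schema is published, nothing here models the response; the agent has to inspect the `results + prefilled URL` payload at runtime.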
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions getting 'results + prefilled URL', implying a read operation with output, but fails to detail critical aspects like whether this is a read-only or mutating action, authentication needs, rate limits, or error handling. This leaves significant gaps in understanding the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with a single sentence that front-loads the main action ('Run a calculation') and includes an additional outcome ('get results + prefilled URL'). It avoids unnecessary words, though it could be more structured by separating purposes or adding brief context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a calculation tool with 3 parameters, no annotations, and no output schema, the description is incomplete. It doesn't explain the return values (e.g., what 'results' entail), how the prefilled URL is used, or prerequisites like needing a session from 'create_session'. This leaves the agent with insufficient context for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema already documents parameters like 'calculator', 'inputs', and 'strict'. The description adds no additional meaning beyond what the schema provides, such as explaining what a 'calculator slug' is or how 'inputs' should be structured. Baseline score of 3 is appropriate as the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description, 'Run a calculation and get results + prefilled URL', provides a basic verb ('Run') and resource ('calculation') but lacks specificity about what type of calculation is performed or how it differs from sibling tools like 'calculate_cas' or 'calculate_cas_headless'. It's vague about the exact purpose beyond a generic calculation operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as 'calculate_cas' or 'list_calculators'. The description mentions getting a prefilled URL, but it doesn't explain if this is for sharing results or other contexts, leaving the agent without clear usage instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
calculate_cas (grade A)
Evaluate one or more headless CAS expressions server-side (MCP-only numeric mode). Unsupported expressions return a GUI handoff link.
| Name | Required | Description | Default |
|---|---|---|---|
| strict | No | If true, reject unknown fields and malformed expressions instead of dropping them | |
| expression | No | Single CAS expression to evaluate | |
| expressions | No | Optional ordered list of CAS expressions to evaluate in one isolated context (supports assignments across the batch) | |
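A sketch of the single-expression and batch forms this schema allows. The assignment syntax (`:=`) is hypothetical, since the description does not document the CAS grammar; what the schema does state is that a batch runs in one isolated context with assignments carried across it, so later expressions can reference earlier ones.

```python
# Batch form: ordered expressions sharing one isolated evaluation context.
batch = {
    "name": "calculate_cas",
    "arguments": {
        "expressions": [
            "a := 3",        # assignment syntax is a guess; the actual
            "b := a^2 + 1",  # CAS grammar is not documented
            "a * b",
        ],
        "strict": True,      # reject malformed expressions outright
    },
}

# Single form uses `expression` instead; the schema does not say what
# happens if both fields are supplied at once.
single = {"name": "calculate_cas", "arguments": {"expression": "2 + 2"}}
```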
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. It successfully conveys execution environment ('server-side') and failure behavior (GUI handoff links), but omits details about side effects, session state interaction, or success output format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two dense sentences with zero waste. Front-loaded with core action, parenthetical clarifies mode, second sentence covers failure path. Excellent structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a calculation tool with 3 parameters and no output schema. Covers primary function and error handling, but could strengthen completeness by describing successful output format and clarifying relationship to session management siblings given the 'isolated context' parameter in schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (strict, expression, expressions all well-documented), establishing baseline 3. Description adds 'one or more' reinforcing the single-vs-batch relationship, but doesn't compensate for the lack of output schema or clarify the mutual exclusivity pattern beyond the schema itself.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Evaluate') and resource ('headless CAS expressions'). It narrows scope with 'server-side' and 'MCP-only numeric mode', though it doesn't explicitly differentiate itself from the similarly named sibling 'calculate_cas_headless'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context ('MCP-only numeric mode') and provides failure-mode guidance ('Unsupported expressions return a GUI handoff link'), but lacks explicit comparison against siblings like 'calculate' or 'calculate_cas_headless' to guide selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
calculate_cas_headless (grade A)
Alias of calculate_cas. Headless MCP-only CAS evaluator; if unsupported features are requested, returns GUI handoff details.
| Name | Required | Description | Default |
|---|---|---|---|
| strict | No | If true, reject unknown fields and malformed expressions instead of dropping them | |
| expression | No | Single CAS expression to evaluate | |
| expressions | No | Optional ordered list of CAS expressions to evaluate in one isolated context (supports assignments across the batch) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden and succeeds well: it reveals the headless nature, MCP-only interface constraint, and critical fallback behavior (GUI handoff details for unsupported features). Lacks explicit mention of session persistence or error states beyond this handoff, but covers the essential behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence front-loads the alias relationship and core function; second provides the specific behavioral edge case. Every word earns its place—succinct and information-dense.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% schema coverage and absence of an output schema, the description adequately covers the tool's purpose, alias relationship, and fallback behavior. Would benefit from a brief note on successful return structure (e.g., returns evaluated results), but the handoff behavior documentation provides sufficient context for an agent to invoke the tool safely.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'unsupported features' which contextually relates to the 'strict' parameter's rejection behavior, but does not elaborate on parameter syntax, validation rules, or the batch evaluation semantics implied by the 'expressions' array.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: states it is an 'Alias of calculate_cas', identifies the resource (CAS expressions), and names its role ('evaluator'). The 'Headless MCP-only' clause distinguishes it from the generic calculate tool, though it cannot distinguish it from calculate_cas itself, which is equally headless by definition of the alias.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear contextual signals for usage ('Headless MCP-only' indicates when to prefer this over GUI alternatives). Documents the fallback behavior ('returns GUI handoff details' when unsupported features are requested), implicitly guiding the agent toward the GUI variant for complex operations. Could explicitly name the GUI alternative tool, but the implication is strong.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
close_session (grade C)
Close a session and optionally persist a snapshot of its final state
| Name | Required | Description | Default |
|---|---|---|---|
| session_id | Yes | Session UUID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool closes a session and can persist a snapshot, but doesn't explain what 'close' entails (e.g., whether it terminates resources, requires permissions, or has side effects), nor details on the snapshot (e.g., format, storage). This is inadequate for a mutation tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action ('Close a session') and adds optional functionality without waste. It's appropriately sized for the tool's complexity, with every word contributing to understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a mutation operation with no annotations and no output schema), the description is incomplete. It lacks details on behavioral traits (e.g., what happens during closure, snapshot specifics), usage context, and return values. This leaves significant gaps for an AI agent to understand how to invoke it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'session_id' documented as 'Session UUID'. The description adds no additional meaning beyond this, such as how to obtain the session ID or constraints on its validity. Given the high schema coverage, the baseline score of 3 is appropriate, as the schema handles the parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Close a session') and mentions an optional outcome ('persist a snapshot of its final state'), which specifies what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'create_session' or 'get_session_state', which would require more specific context about when to choose this over others.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions an optional feature ('persist a snapshot'), but doesn't clarify scenarios for using it, prerequisites like needing an active session, or exclusions. With sibling tools like 'create_session' and 'get_session_state', this lack of context is a significant gap.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_session (grade C)
Create a new interactive session for any calculator and return the session URL
| Name | Required | Description | Default |
|---|---|---|---|
| metadata | No | Optional session metadata | |
| calculator | Yes | Calculator slug | |
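A sketch of the session lifecycle these two tools imply: open with `create_session`, later tear down with `close_session`. The argument names come from the parameter tables; the slug `mortgage`, the metadata contents, and the example UUID are all invented for illustration, and neither tool documents its response shape.

```python
from typing import Optional

def create_session_args(calculator: str, metadata: Optional[dict] = None) -> dict:
    """Arguments for create_session; metadata is optional per the schema."""
    args = {"calculator": calculator}
    if metadata is not None:
        args["metadata"] = metadata
    return {"name": "create_session", "arguments": args}

def close_session_args(session_id: str) -> dict:
    """Arguments for the matching close_session call."""
    return {"name": "close_session", "arguments": {"session_id": session_id}}

open_req = create_session_args("mortgage", {"owner": "agent-7"})  # hypothetical values
close_req = close_session_args("123e4567-e89b-12d3-a456-426614174000")
```

Where the `session_id` for the close call comes from is an assumption: presumably it appears somewhere in the `create_session` response alongside the session URL, but the lack of an output schema leaves that implicit.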
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions creating a session and returning a URL but omits critical details like whether this is a mutating operation, if it requires authentication, what happens on failure, or if there are rate limits. For a creation tool with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action and outcome without unnecessary words. Every part earns its place by specifying the verb, resource, and return value, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of session creation with no annotations and no output schema, the description is incomplete. It fails to explain what the session URL is used for, how sessions interact with other tools like 'push_session_action', or what behavioral traits (e.g., mutability, error handling) are involved. This leaves gaps for an agent to understand the tool's full context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('calculator' as a slug and 'metadata' as optional). The description adds no additional meaning beyond implying the calculator parameter is required for session creation. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a new interactive session') and the resource ('for any calculator'), specifying the outcome ('return the session URL'). It distinguishes from siblings like 'close_session' or 'get_session_state' by focusing on creation rather than management or retrieval. However, it doesn't explicitly differentiate from 'generate_prefilled_url', which might overlap in purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'generate_prefilled_url' or 'calculate' tools. It lacks context on prerequisites, such as whether a calculator must be available or if sessions are needed for specific operations. This absence leaves the agent without clear usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_prefilled_url (grade B)
Generate a prefilled URL without running calculation
| Name | Required | Description | Default |
|---|---|---|---|
| inputs | Yes | Input values for URL | |
| strict | No | If true, reject invalid or unknown input fields instead of dropping them | |
| calculator | Yes | Calculator slug | |
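This tool takes the same arguments as `calculate`; per the descriptions, the only stated difference is whether the calculation actually runs. A small helper makes that choice explicit (tool names are from the listing; the `bmi` slug and its input fields are hypothetical):

```python
def build_url_or_run(run_calculation: bool, calculator: str, inputs: dict) -> dict:
    """Pick between the two same-shaped tools based on intent:
    generate_prefilled_url only builds a shareable link, while
    calculate also executes and returns results."""
    name = "calculate" if run_calculation else "generate_prefilled_url"
    return {"name": name, "arguments": {"calculator": calculator, "inputs": inputs}}

share_only = build_url_or_run(False, "bmi", {"height_cm": 180, "weight_kg": 75})
run_now = build_url_or_run(True, "bmi", {"height_cm": 180, "weight_kg": 75})
```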
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool generates a URL 'without running calculation', which hints at a read-only or non-destructive operation, but doesn't clarify authentication needs, rate limits, error handling, or what the generated URL looks like (e.g., format, expiration). For a tool with no annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded in a single sentence: 'Generate a prefilled URL without running calculation'. Every word earns its place by clarifying the action, resource, and key constraint. There's no redundancy or unnecessary elaboration, making it efficient for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, nested objects) and lack of annotations and output schema, the description is minimally adequate. It covers the basic purpose but lacks details on behavior, usage context, and output format. For a tool that generates URLs—potentially involving validation or session management—more context would be helpful, but it meets a bare minimum threshold.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters ('calculator', 'inputs', 'strict') with descriptions. The tool description doesn't add any parameter-specific details beyond what's in the schema, such as examples of 'calculator' slugs or 'inputs' structure. With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Generate a prefilled URL without running calculation'. It specifies the verb 'generate' and the resource 'prefilled URL', and distinguishes it from calculation tools by noting 'without running calculation'. However, it doesn't explicitly differentiate from sibling tools like 'create_session' or 'get_calculator_schema' that might also involve URL generation or calculator interactions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance. It implies this tool should be used when you need a URL but don't want to execute calculations, which distinguishes it from 'calculate' tools. However, it doesn't specify when to use this versus alternatives like 'create_session' or 'get_calculator_schema', nor does it mention prerequisites, constraints, or typical use cases. The guidance is too vague for effective tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_calculator_schema (grade C)
Get the input schema for a specific calculator
| Name | Required | Description | Default |
|---|---|---|---|
| calculator | Yes | Calculator slug | |
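A plausible discover-then-call flow: fetch the calculator's input schema first, then check the intended inputs against it before invoking `calculate`. The response shape is undocumented, so the JSON-Schema-style keys assumed below (`required`, `properties`) and the field names are guesses for illustration.

```python
def missing_required(schema: dict, inputs: dict) -> list:
    """Fields the schema marks as required that the inputs do not supply."""
    return [f for f in schema.get("required", []) if f not in inputs]

fake_schema = {  # stand-in for whatever get_calculator_schema returns
    "required": ["principal", "rate"],
    "properties": {"principal": {"type": "number"}, "rate": {"type": "number"}},
}

gaps = missing_required(fake_schema, {"principal": 1000})
# gaps lists the still-missing fields, so the agent can gather them
# before calling calculate instead of failing on the first attempt.
```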
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It only states what the tool does without mentioning any behavioral traits such as whether it's a read-only operation, if it requires authentication, potential rate limits, or what the output format might be. This leaves significant gaps in understanding how the tool behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no unnecessary words. It is front-loaded and efficiently conveys the core purpose without any fluff, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of retrieving a schema and the lack of annotations and output schema, the description is incomplete. It does not address what the tool returns (e.g., a JSON schema object), potential errors, or how it integrates with other tools like 'calculate'. This leaves the agent with insufficient information to use the tool effectively in context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the parameter 'calculator' documented as 'Calculator slug'. The description adds no additional meaning beyond this, as it does not explain what a 'calculator slug' is or provide examples. Given the high schema coverage, a baseline score of 3 is appropriate, as the schema already handles the parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'input schema for a specific calculator', making the purpose evident. However, it does not explicitly differentiate from sibling tools like 'list_calculators', which might list available calculators rather than retrieve a schema, leaving room for slight ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. For example, it does not specify if this should be used before invoking 'calculate' to understand required inputs or how it relates to 'list_calculators' for selecting a calculator. This lack of context makes it less helpful for an agent in choosing the right tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_session_state (grade B)
Retrieve the current field values, computation transcript, and pending message queue for an active session
| Name | Required | Description | Default |
|---|---|---|---|
| session_id | Yes | Session UUID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses what data is retrieved but lacks critical behavioral details: it doesn't specify permissions needed, rate limits, error conditions (e.g., invalid session_id), or whether the operation is idempotent. For a read operation with no annotation coverage, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads key information ('Retrieve...') with no wasted words. It directly conveys the tool's purpose and scope, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (retrieving multiple data types), lack of annotations, and no output schema, the description is minimally complete. It specifies what is retrieved but omits details on return format, error handling, and behavioral constraints. This is adequate for basic understanding but leaves gaps for reliable agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'session_id' documented as 'Session UUID'. The description adds no additional meaning beyond this, such as format examples or validation rules. With high schema coverage, the baseline score of 3 is appropriate as the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Retrieve') and the specific resources ('current field values, computation transcript, and pending message queue') for an 'active session'. It distinguishes from siblings like 'list_calculators' or 'create_session' by focusing on session state retrieval, though it doesn't explicitly contrast with close alternatives like 'get_calculator_schema'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for an 'active session' but provides no explicit guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., session must exist), exclusions, or comparisons to sibling tools like 'calculate' or 'push_session_action', leaving the agent to infer context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
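To make the invocation this review discusses concrete, here is a minimal sketch of an MCP `tools/call` request for `get_session_state`. The JSON-RPC framing follows the standard MCP protocol, not anything documented by this server, and the UUID is a placeholder, not a real session.

```python
import json

# Sketch of a JSON-RPC request invoking get_session_state over MCP.
# The session_id value is a placeholder UUID for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_session_state",
        "arguments": {"session_id": "123e4567-e89b-12d3-a456-426614174000"},
    },
}
print(json.dumps(request))
```

A description that named its return shape (field values, transcript entries, queued messages) would let an agent plan the follow-up call before seeing a response.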
list_calculators (B)
List available calculators, optionally filtered by category
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | Filter by category (e.g., 'finance', 'math') | |
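To make the optional filter concrete, this is a sketch of a `tools/call` request exercising the `category` parameter. The 'finance' value is taken from the example in the parameter description; the JSON-RPC framing is standard MCP, not server-specific.

```python
import json

# Sketch: calling list_calculators with the optional category filter.
# Omitting "category" from arguments would list all calculators.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "list_calculators",
        "arguments": {"category": "finance"},
    },
}
print(json.dumps(request))
```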
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions optional filtering, which adds some context, but fails to describe key traits such as whether this is a read-only operation, how results are returned (e.g., pagination, format), or any rate limits or permissions required. This leaves significant gaps for an agent to understand the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose and key feature (optional filtering). It is front-loaded with the main action and wastes no words, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one optional parameter, no output schema, no annotations), the description is adequate but incomplete. It covers the basic purpose and filtering option, but lacks details on behavioral aspects like return format or operational constraints, which are needed for full contextual understanding despite the simple schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the single parameter 'category' fully documented in the schema. The description adds minimal value by mentioning optional filtering by category, but doesn't provide additional semantics beyond what the schema already states, such as examples of categories or usage nuances. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('List') and resource ('available calculators'), making it easy to understand what the tool does. However, it doesn't explicitly distinguish this tool from its siblings like 'get_calculator_schema' or 'create_session', which also relate to calculators, so it misses full sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning optional filtering by category, suggesting when to use this tool for filtered lists. However, it provides no explicit guidance on when to choose this over alternatives like 'get_calculator_schema' or other sibling tools, nor does it specify any prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
push_session_action (C)
Push actions into a session's message queue (set fields, submit computation, trigger plot, etc.)
| Name | Required | Description | Default |
|---|---|---|---|
| actions | Yes | Array of SessionAction objects to push to the browser | |
| session_id | Yes | Session UUID | |
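The schema calls for an array of SessionAction objects, but their exact shape is not documented here. The sketch below guesses at field names ('type', 'field', 'value') from the action kinds the description mentions ('set fields, submit computation'); treat them as illustrative assumptions, not the real schema.

```python
import json

# Sketch: pushing two actions into a session's message queue.
# The SessionAction field names below are assumptions inferred from the
# tool description; consult the actual input schema before relying on them.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "push_session_action",
        "arguments": {
            "session_id": "123e4567-e89b-12d3-a456-426614174000",
            "actions": [
                {"type": "set_field", "field": "principal", "value": 1000},
                {"type": "submit"},
            ],
        },
    },
}
print(json.dumps(request))
```

This is exactly the gap the reviews below flag: without a documented SessionAction shape or described outcomes, an agent must guess at both the payload and what the queue does with it.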
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'push actions' but doesn't clarify if this is a read-only or mutating operation, what permissions are needed, how errors are handled, or what the response looks like. The description lacks details on side effects, rate limits, or any behavioral traits beyond the basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and provides illustrative examples. There's no wasted verbiage, and it's appropriately sized for a tool with a complex input schema. However, it could be slightly more structured by explicitly separating purpose from examples.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (multiple action types, no annotations, no output schema), the description is inadequate. It doesn't explain what happens after actions are pushed, potential side effects, error conditions, or how to interpret results. For a mutation-heavy tool with diverse actions, more context on behavior and outcomes is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('session_id' and 'actions') thoroughly. The description adds minimal value by hinting at action types ('set fields, submit computation, trigger plot, etc.'), but this is largely redundant with the schema's enum for 'type'. No additional syntax or format details are provided beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Push actions into a session's message queue' with examples like 'set fields, submit computation, trigger plot, etc.' This specifies the verb ('push'), resource ('session's message queue'), and scope (various action types). However, it doesn't explicitly differentiate from sibling tools like 'calculate' or 'get_session_state', which likely serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions examples of actions but doesn't specify prerequisites, appropriate contexts, or exclusions. Given sibling tools like 'calculate' and 'get_session_state', there's no indication of when this tool is preferred or required, leaving usage unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is marked unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.