
Server Details

Bar-first MCP server for Tabula chart authoring, PNG rendering, and editor handoff.

Status: Healthy
Transport: Streamable HTTP



Available Tools

15 tools
clear_bar_growth (Clear Bar Growth), Grade C

Remove the Bar growth complication from the current session.

Parameters (JSON Schema)
dataset: optional
sessionId: optional
sessionSnapshot: optional
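Concretely, a tools/call request for this tool might look like the sketch below. The JSON-RPC envelope is the standard MCP shape; the sessionId value is an invented placeholder, and addressing the "current session" this way is an assumption, since the schema documents none of the parameters.

```python
import json

# Hypothetical MCP tools/call request for clear_bar_growth. All three
# parameters are optional per the table above; the sessionId value is
# an invented placeholder.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "clear_bar_growth",
        "arguments": {"sessionId": "sess-123"},
    },
}

# Serialize as it would go over the wire.
wire = json.dumps(request)
```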

Output Schema (JSON Schema)
All fields optional: ok, auth, error, quota, errors, status, visual, message, approval, warnings, canRender, operation, sessionId, datasetSchema, loginRequired, controlSurface, guestSessionId, sessionSnapshot, upgradeRequired, legibilityWarnings

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the action (Remove) and scope (current session) but fails to specify what happens if no growth complication exists, whether the operation is reversible, side effects on the session state, or auth/permission requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single 9-word sentence contains no wasted words or redundancy. However, for a mutation tool with 3 complex parameters and 0% schema coverage, this brevity crosses into underspecification rather than admirable conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high complexity (nested objects, mutations, output schema present), the 9-word description is inadequate. While it doesn't need to explain return values (output schema exists), it completely omits parameter semantics, prerequisites, and behavioral details necessary for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, and the description provides zero parameter information. The tool accepts 3 complex nested parameters (dataset, sessionId, sessionSnapshot) with extensive nested objects, but the description offers no guidance on what dataset ID to provide, what sessionSnapshot structure is expected, or how these relate to the 'current session' mentioned.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb (Remove) + resource (Bar growth complication) + scope (current session). It clearly distinguishes from siblings: clear_bar_reference_line targets a different complication, while configure_bar_growth sets up growth rather than removing it.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The verb 'Remove' implies usage when a growth complication exists, providing implied contrast with 'configure' alternatives. However, there is no explicit 'when to use' guidance, prerequisites (e.g., confirming growth exists), or error handling if called without an active growth complication.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

clear_bar_reference_line (Clear Bar Reference Line), Grade C

Remove the Bar reference line complication from the current session.

Parameters (JSON Schema)
dataset: optional
sessionId: optional
sessionSnapshot: optional

Output Schema (JSON Schema)
All fields optional: ok, auth, error, quota, errors, status, visual, message, approval, warnings, canRender, operation, sessionId, datasetSchema, loginRequired, controlSurface, guestSessionId, sessionSnapshot, upgradeRequired, legibilityWarnings

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, placing full burden on description. It only discloses that the operation affects the 'current session' but fails to clarify if the operation is reversible, what happens when no reference line exists, or that it mutates session state destructively.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single 10-word sentence with zero redundancy. Extremely efficient structure, though arguably incomplete given the tool's complexity. Every word conveys essential information, but conciseness comes at the cost of necessary detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool accepts deeply nested objects (sessionSnapshot with required barSpec containing complications, mapping, filters, etc.) and has an output schema, yet description offers no orientation on parameter relationships, how to construct valid inputs from the session state, or what the operation returns.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage for three complex parameters (including the massive nested sessionSnapshot object with barSpec, complications, etc.). Description mentions 'current session' providing minimal context for sessionId/sessionSnapshot but gives no guidance on the dataset parameter or the expected structure/content of the sessionSnapshot.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the specific action (Remove) and target (Bar reference line complication), accurately using domain terminology from the schema ('complication' property). Effectively distinguishes from sibling clear_bar_growth by specifying 'reference line' versus 'growth'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like configure_bar_reference_line (which modifies instead of removing), mentions no prerequisites despite requiring a complex sessionSnapshot object, and offers no error handling guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

configure_bar_growth (Configure Bar Growth), Grade C

Configure the Bar growth complication using the public MCP-facing payload.

Parameters (JSON Schema)
growth: required
dataset: optional
sessionId: optional
sessionSnapshot: optional

Output Schema (JSON Schema)
All fields optional: ok, auth, error, quota, errors, status, visual, message, approval, warnings, canRender, operation, sessionId, datasetSchema, loginRequired, controlSurface, guestSessionId, sessionSnapshot, upgradeRequired, legibilityWarnings

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description bears full responsibility for disclosing behavior. It only states 'Configure' without clarifying if this is idempotent, additive, or destructive to existing growth settings. No mention of validation rules, side effects on the bar visualization, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence and brief, but wastes words on implementation details ('public MCP-facing payload') that provide no actionable guidance to an AI agent. It is under-specified rather than appropriately concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the extreme complexity (massively nested sessionSnapshot object, 4 parameters, 0% schema coverage), the description is grossly inadequate. It does not compensate for the lack of schema descriptions or explain the required 'growth' object structure, despite having an output schema to handle return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage, yet the description adds no parameter semantics whatsoever. The phrase 'public MCP-facing payload' vaguely references the parameter structure but fails to explain what 'growth', 'sessionSnapshot', or 'dataset' represent, or how the deeply nested objects should be constructed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool configures a 'Bar growth complication' which nominally distinguishes it from sibling tools like configure_bar_reference_line. However, it fails to explain what a 'growth complication' actually does (e.g., showing growth rates, trend comparisons) or what visual effect it produces, leaving the purpose jargon-heavy and incomplete.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus alternatives like clear_bar_growth, nor any mention of prerequisites (e.g., requiring an existing bar session). The description provides zero usage context despite having numerous sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

configure_bar_reference_line (Configure Bar Reference Line), Grade C

Configure the Bar reference line complication using the public MCP-facing payload.

Parameters (JSON Schema)
dataset: optional
sessionId: optional
referenceLine: required
sessionSnapshot: optional

Output Schema (JSON Schema)
All fields optional: ok, auth, error, quota, errors, status, visual, message, approval, warnings, canRender, operation, sessionId, datasetSchema, loginRequired, controlSurface, guestSessionId, sessionSnapshot, upgradeRequired, legibilityWarnings

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and description fails to disclose behavioral traits such as whether this replaces existing reference lines, appends to them, or requires specific validation. Silent on side effects and return values.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence is brief, but wastes limited space on meta-commentary ('public MCP-facing payload') rather than actionable guidance. Front-loaded action is good, but content is unhelpful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Severely inadequate for a tool with 4 complex parameters (including deeply nested objects like sessionSnapshot with 20+ properties) and no schema descriptions. Description provides no context needed to construct valid inputs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description offers no compensation. It fails to explain the distinction between sessionId and sessionSnapshot, the purpose of the dataset object, or the structure of referenceLine.lines despite the extreme complexity of the nested schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

Identifies the resource (Bar reference line complication) and action (configure), but 'configure' is vague and the phrase 'using the public MCP-facing payload' is implementation detail that doesn't clarify purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus siblings like clear_bar_reference_line, or prerequisites such as requiring an existing session from create_bar_session.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_bar_calculated_field (Create Bar Calculated Field), Grade C

Create a calculated field inside the current Bar session dataset.

Parameters (JSON Schema)
draft: required
dataset: optional
sessionId: optional
sessionSnapshot: optional

Output Schema (JSON Schema)
All fields optional: ok, auth, error, quota, errors, status, visual, message, approval, warnings, canRender, operation, sessionId, datasetSchema, loginRequired, controlSurface, guestSessionId, sessionSnapshot, upgradeRequired, legibilityWarnings

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry full disclosure burden. While 'Create' implies state mutation, the description fails to specify whether this is idempotent, whether it modifies the session immediately, or what validation occurs on the formula.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no redundancy, but severely under-specified for the tool's complexity. Given the rich nested schema, additional sentences explaining the core 'draft' parameter structure would be necessary.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having an output schema (reducing the need to describe returns) and complex nested input structures, the description provides only the minimal functional statement. It inadequately covers the intricate draft object structure or session context requirements.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage and 4 complex parameters including nested objects (draft, dataset, sessionSnapshot), the description fails to compensate. It does not mention the critical 'draft' object containing 'name' and 'formula', nor explain the sessionId requirement.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (Create), the resource (calculated field), and the scope (inside the current Bar session dataset). This distinguishes it from sibling 'create_bar_session' by implying an existing session context rather than session creation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives, nor prerequisites such as requiring an active Bar session first. No mention of error conditions like duplicate field names.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_bar_session (Create Bar Session), Grade A

Create a new Tabula Bar chart authoring session from a dataset and optional BarSpec draft.

Parameters (JSON Schema)
barSpec: optional
dataset: required
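A minimal call sketch follows. Only the top-level keys (dataset required, barSpec optional) come from the table above; the inner shape of dataset is an illustrative assumption, since the schema publishes no field descriptions.

```python
# Hypothetical arguments for create_bar_session. "dataset" is the only
# required key per the table; its internal columnar shape here is an
# assumption, not documented by the schema.
arguments = {
    "dataset": {
        "columns": ["region", "revenue"],
        "rows": [["EMEA", 120], ["APAC", 95]],
    },
    # "barSpec" is optional and omitted: the session starts from defaults.
}

# Standard MCP tools/call envelope.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "create_bar_session", "arguments": arguments},
}
```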

Output Schema (JSON Schema)
All fields optional: ok, auth, error, quota, errors, status, visual, message, approval, warnings, canRender, operation, sessionId, datasetSchema, loginRequired, controlSurface, guestSessionId, sessionSnapshot, upgradeRequired, legibilityWarnings

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Mentions 'authoring session', implying an editing context, but lacks disclosure on persistence, validation behavior, error handling, or session lifecycle management required for a creation operation with complex nested configuration.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single 15-word sentence with zero redundancy. Front-loaded action verb, every word earns its place, appropriately sized for the tool's surface area.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Acknowledges output schema existence (no need to describe returns), but with 0% input schema coverage and extreme parameter complexity (deeply nested barSpec with visual/filters/mapping configurations), the description provides only baseline viability. Barely sufficient for an agent to infer required versus optional configuration paths.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description minimally compensates by identifying both top-level parameters ('dataset' and 'BarSpec draft'). However, given extreme nesting complexity (56+ nested properties across sort/style/filters/mapping), it fails to elucidate required structure or relationships between fields.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Create' with clear resource 'Tabula Bar chart authoring session'. It effectively distinguishes from sibling tools like update/set/clear operations by establishing itself as the initialization entry point for bar chart sessions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage context ('from a dataset') but lacks explicit guidance on when to use versus alternatives like open_bar_in_tabula or when session creation fails. No mention of prerequisites or workflow sequence among the 13 sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_bar_control_surface (Get Bar Control Surface), Grade C

Return the current Bar control surface, dataset schema, warnings, and normalized session snapshot.

Parameters (JSON Schema)
dataset: optional
sessionId: optional
sessionSnapshot: optional
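As the natural read-before-write call, a round-trip sketch is shown below, branching on the output's ok flag. The result dict is invented; only its field names come from the output schema below, and the sessionId value is a placeholder.

```python
# Hypothetical request for get_bar_control_surface (read-only inspection).
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "get_bar_control_surface",
        "arguments": {"sessionId": "sess-123"},  # placeholder value
    },
}

# Invented example result; field names match the output schema below.
result = {"ok": True, "warnings": [], "controlSurface": {}, "sessionId": "sess-123"}

if result["ok"]:
    surface = result["controlSurface"]  # inspect state before mutating the session
else:
    raise RuntimeError(result.get("message", "get_bar_control_surface failed"))
```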

Output Schema (JSON Schema)
All fields optional: ok, auth, error, quota, errors, status, visual, message, approval, warnings, canRender, operation, sessionId, datasetSchema, loginRequired, controlSurface, guestSessionId, sessionSnapshot, upgradeRequired, legibilityWarnings

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It mentions 'warnings' in the return value, which hints at validation behavior, but fails to explicitly state the read-only nature, error conditions (e.g., invalid sessionId), or whether the tool validates the input dataset against the session. It adequately describes the happy-path return payload but lacks operational safety details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence that efficiently enumerates the four return components. No wasted words, though given the complexity of the input schema (three nested objects), additional sentences explaining inputs would have been valuable rather than excessive.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has an output schema and the description lists the returned entities, satisfying output documentation needs. However, it completely omits the three complex input parameters despite 0% schema coverage. For a tool with such rich nested input objects (dataset rows/columns, barSpec configuration), this is a significant gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to compensate, yet it mentions none of the three parameters (dataset, sessionId, sessionSnapshot). While the description lists outputs, it provides no guidance on how to construct the complex nested 'dataset' or 'sessionSnapshot' inputs or what 'sessionId' identifies, leaving the agent without parameter context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool returns specific resources (control surface, dataset schema, warnings, session snapshot) using a clear verb ('Return'). It implicitly distinguishes from siblings like 'set_bar_mapping' or 'configure_bar_growth' through the 'get' prefix, though it doesn't explicitly contrast with them. The term 'control surface' is slightly jargon-heavy but clarified by the listed return components.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this versus alternatives (e.g., whether to use this before 'set_bar_filters' to check current state, or when to use 'create_bar_session' instead). Usage must be inferred entirely from the 'get' naming convention versus sibling 'set'/'configure' verbs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

open_bar_in_tabula (Open Bar In Tabula), Grade B

Build a local Tabula editor handoff URL for the current Bar session without persisting a project.

Parameters (JSON Schema)
dataset: optional
sessionId: optional
slideTitle: optional
projectName: optional
editorBaseUrl: optional
sessionSnapshot: optional
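Every parameter is optional per the table above, so a handoff call might look like the hypothetical sketch below; all argument values are invented placeholders, since the schema documents none of the fields.

```python
# Hypothetical open_bar_in_tabula call. All parameters are optional per
# the table above; every value here is an invented placeholder.
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "open_bar_in_tabula",
        "arguments": {
            "sessionId": "sess-123",
            "slideTitle": "Q3 Revenue by Region",
            "projectName": "Revenue Review",
        },
    },
}
```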

Output Schema (JSON Schema)
All fields optional: ok, auth, error, quota, visual, message, openUrl, urlHash, approval, hashParam, operation, projectId, routePath, sessionId, urlLength, slideTitle, projectName, payloadBytes, editorBaseUrl, loginRequired, guestSessionId, upgradeRequired, warningMessages, persistedProject, encodedPayloadLength, requiresAuthentication

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Adds critical behavioral detail about non-persistence and 'local' nature. However, missing: what opening the URL triggers, authentication requirements, session lifecycle impacts, or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with the verb ('Build') and free of redundancy. However, extreme brevity becomes under-specification given the parameter complexity; additional sentences would be needed for the six complex objects.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Severely incomplete for a tool with 6 nested object parameters, 0% schema coverage, and complex sessionSnapshot structure. One sentence insufficient to guide agent through required parameter construction despite having output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% with 6 complex object parameters. Description mentions 'current Bar session' giving context to sessionId/sessionSnapshot, but provides no semantics for dataset, slideTitle, projectName, or editorBaseUrl. Insufficient compensation for undocumented schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (Build...URL), resource (Tabula editor), and scope (current Bar session). Distinguishes from siblings like render_bar_png or create_bar_session by focusing on external editor handoff, though assumes familiarity with 'Tabula' as an editor.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides one key usage constraint ('without persisting a project'), implying when NOT to use it. Lacks explicit comparison to siblings or prerequisites (e.g., requires existing session), but the persistence constraint offers implicit guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

render_bar_png (Render Bar PNG), Grade B

Render the current Bar chart session to a PNG using Tabula headless export.

Parameters (JSON Schema)
width: optional
height: optional
dataset: optional
fileName: optional
sessionId: optional
sizePreset: optional
backgroundColor: optional
sessionSnapshot: optional
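A hypothetical render call and result-handling sketch follows. The argument values and the result dict are invented; only the parameter and output field names come from the schema tables, and decoding imageBase64 this way assumes the field carries base64-encoded PNG bytes, as its name and the mimeType field suggest.

```python
import base64

# Hypothetical render_bar_png call; all argument values are invented.
request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
        "name": "render_bar_png",
        "arguments": {
            "sessionId": "sess-123",  # placeholder
            "width": 1200,
            "height": 800,
            "fileName": "chart.png",
        },
    },
}

# Invented example result; field names match the output schema below.
result = {
    "ok": True,
    "mimeType": "image/png",
    "imageBase64": base64.b64encode(b"\x89PNG\r\n\x1a\n").decode("ascii"),
}

# Assumed decoding step: imageBase64 holds base64-encoded PNG bytes.
png_bytes = base64.b64decode(result["imageBase64"]) if result["ok"] else b""
```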

Output Schema (JSON Schema)
All fields optional: ok, auth, error, quota, width, height, message, approval, fileName, mimeType, warnings, operation, sessionId, imageBase64, loginRequired, guestSessionId, sessionSnapshot, upgradeRequired, legibilityWarnings

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions 'Tabula headless export' indicating the rendering mechanism, but fails to disclose side effects (file system impact, temporary vs persistent storage), authentication requirements, validation rules (e.g., checking canRender status), or error conditions. For a complex export operation with 8 undocumented parameters, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with the action verb front-loaded. No words are wasted. However, given the tool's high complexity (nested objects, 0% schema coverage), this extreme brevity contributes to underspecification rather than clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 8 parameters, complex nested objects, 0% schema coverage, no annotations, and an output schema present, the description is underweight. While the existence of an output schema excuses the lack of return value explanation, the description fails to explain parameter relationships (e.g., sessionId vs sessionSnapshot), required preconditions, or what constitutes a valid 'current session'.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It mentions none of the 8 parameters (width, height, sessionId, sessionSnapshot, etc.). While 'PNG' implies dimensions might be relevant, complex nested objects like sessionSnapshot and dataset receive no explanation. This is a significant gap given the schema complexity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (Render), resource (Bar chart session), and output format (PNG). It identifies the specific implementation ('Tabula headless export'), which distinguishes it from the configuration-oriented siblings (set_bar_*, configure_bar_*). However, it does not explicitly contrast with siblings or state that this is typically a final step after configuration.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'current Bar chart session' implies prerequisite state (an active session must exist), hinting at when to use the tool. However, it lacks explicit guidance on prerequisites (e.g., 'requires configured mapping'), exclusion conditions, or alternatives. With 15 siblings including create_bar_session, explicit sequencing guidance would be valuable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
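To make the documentation gap concrete: a first-attempt invocation has to be assembled from parameter names alone. The sketch below is hypothetical; the field names come from the schema table, but every value, unit, and format is an assumption the description never confirms.

```python
# Hypothetical render_bar_png request. Parameter names are taken from the
# schema table; all values, units, and formats are assumptions, since the
# schema ships no descriptions or defaults.
payload = {
    "sessionId": "sess-123",       # assumed opaque session handle
    "width": 1200,                 # assumed pixels; units undocumented
    "height": 800,
    "fileName": "bar-chart.png",
    "backgroundColor": "#ffffff",  # assumed CSS-style hex; unverified
}

# Fields an agent would probe on the (all-optional) output schema.
result_fields_to_check = ["ok", "imageBase64", "mimeType", "legibilityWarnings"]
```

Nothing in the description tells an agent which of these guesses are valid, which is precisely the risk the scores above flag.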

set_bar_filters (Set Bar Filters), grade C

Update Bar chart filters with the normalized public contract.

Parameters (JSON Schema): dataset (optional), filters (required), sessionId (optional), sessionSnapshot (optional); no descriptions or defaults provided

Output Schema (JSON Schema): ok, auth, error, quota, errors, status, visual, message, approval, warnings, canRender, operation, sessionId, datasetSchema, loginRequired, controlSurface, guestSessionId, sessionSnapshot, upgradeRequired, legibilityWarnings (all optional; no descriptions provided)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. 'Update' only indicates a mutation; it does not disclose idempotency, validation behavior, error handling, or whether the call replaces or merges with existing filters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single nine-word sentence achieves brevity, but 'normalized public contract' spends that space on implementation jargon rather than useful guidance. The result is under-specified rather than efficiently concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for complexity: 4 parameters with deep nesting (filters, sessionSnapshot), zero schema descriptions, and no output explanation. Description doesn't explain how filters interact with dataset/columns or session state.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% and description fails to compensate. Mentions 'normalized public contract' vaguely hinting at filter structure, but doesn't explain the required 'filters' object semantics, nor the purpose of optional params (dataset, sessionId, sessionSnapshot).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States the verb (Update) and resource (Bar chart filters), but includes obscure jargon ('normalized public contract') that doesn't clarify intent. Fails to distinguish from sibling tools like set_bar_mapping or set_bar_sort.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives, prerequisites (e.g., existing session), or expected workflow with other set_bar_* tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
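The missing 'filters' semantics can be illustrated with a guessed call. Only the four top-level parameter names come from the schema; the internal shape of the required filters value is undocumented, so the predicate structure below is purely invented.

```python
# Hypothetical set_bar_filters request. Top-level keys come from the
# schema; the shape of 'filters' is undocumented, so the predicate list
# below is invented for illustration only.
payload = {
    "sessionId": "sess-123",
    "filters": [
        {"field": "region", "op": "in", "values": ["EMEA", "APAC"]},
    ],
}
```

Note that the description also cannot tell an agent whether this call replaces or merges the session's filters; either reading is consistent with 'Update'.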

set_bar_mapping (Set Bar Mapping), grade C

Update the Bar chart mapping for category, value, and group slots.

Parameters (JSON Schema): dataset (optional), mapping (required), sessionId (optional), sessionSnapshot (optional); no descriptions or defaults provided

Output Schema (JSON Schema): ok, auth, error, quota, errors, status, visual, message, approval, warnings, canRender, operation, sessionId, datasetSchema, loginRequired, controlSurface, guestSessionId, sessionSnapshot, upgradeRequired, legibilityWarnings (all optional; no descriptions provided)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fails to disclose critical behavioral traits: whether this overwrites or merges with existing mappings, validation rules for the mapping constraints, or whether the operation triggers side effects like automatic re-rendering. The term 'Update' implies mutation but lacks specifics about state management.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no redundancy, but given the high complexity (4 parameters with deep nesting, 0% schema coverage), it is inappropriately terse. The critical information density is too low for the tool's sophistication.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with complex nested parameters, no annotations, and significant state mutation, the description is incomplete. It does not explain parameter relationships (e.g., whether sessionId or sessionSnapshot takes precedence), valid mapping configurations, or error conditions. The output schema existence reduces the need for return value documentation, but input guidance remains severely lacking.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the description mentions the three mapping slots (category, value, group), it completely fails to explain the other three parameters (dataset, sessionId, sessionSnapshot) or the complex nested structure required. With 0% schema description coverage, the description must compensate by explaining that sessionSnapshot requires a full session state object and that mapping values are arrays of field tokens, but it does not.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the action (Update) and target (Bar chart mapping) and specifies the three slot types affected (category, value, group). However, it assumes understanding of 'mapping' in a data visualization context without distinguishing why one would use this versus other configuration tools like set_bar_filters or set_bar_sort.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives, prerequisites (e.g., requiring an active session), or sequencing (whether render_bar_png must be called afterward). The description operates in isolation without workflow context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
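A sketch shows how little the description pins down. The three slot names (category, value, group) do appear in it, and the Parameters critique above suggests slot values are arrays of field tokens; the field names themselves are invented.

```python
# Hypothetical set_bar_mapping request. Slot names (category, value,
# group) come from the tool description; treating each slot as an array
# of field tokens is an assumption, and the field names are invented.
payload = {
    "sessionId": "sess-123",
    "mapping": {
        "category": ["region"],
        "value": ["revenue"],
        "group": ["year"],
    },
}
```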

set_bar_orientation (Set Bar Orientation), grade C

Update the Bar chart orientation using the public Bar contract.

Parameters (JSON Schema): dataset (optional), sessionId (optional), orientation (required), sessionSnapshot (optional); no descriptions or defaults provided

Output Schema (JSON Schema): ok, auth, error, quota, errors, status, visual, message, approval, warnings, canRender, operation, sessionId, datasetSchema, loginRequired, controlSurface, guestSessionId, sessionSnapshot, upgradeRequired, legibilityWarnings (all optional; no descriptions provided)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosure. It indicates mutation ('Update') but fails to explain side effects, idempotency, error conditions (e.g., invalid session), or what the output schema represents. The 'public Bar contract' phrase is opaque implementation detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence is appropriately brief but wastes space on unhelpful implementation jargon ('public Bar contract'). Front-loaded with the action but omits critical usage context that would earn its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for a complex stateful tool with 4 parameters, nested structures, and an output schema. Missing: required session state explanation, parameter relationships (sessionId vs sessionSnapshot), valid orientation values clarification, and behavioral outcomes. Description understates the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% with 4 complex parameters including nested objects (dataset, sessionSnapshot). The description adds zero parameter guidance—no explanation of sessionId format, what constitutes a valid sessionSnapshot, or how dataset relates to the existing session state. Fails to compensate for lack of schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a clear verb ('Update') and resource ('Bar chart orientation') that distinguishes it from siblings like set_bar_filters or set_bar_series_layout. However, it borders on tautology with the tool name and includes jargon ('public Bar contract') that doesn't clarify purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives, nor prerequisites like requiring an active session or valid sessionSnapshot. No mention of how it relates to sibling set_bar_* tools or whether orientation changes trigger re-rendering.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
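Even the simplest call here rests on a guess, since neither the description nor the visible schema surfaces the allowed orientation values. The value below is assumed.

```python
# Hypothetical set_bar_orientation request. The documentation does not
# enumerate orientation values; "horizontal" (vs., presumably,
# "vertical") is an assumption.
payload = {"sessionId": "sess-123", "orientation": "horizontal"}
```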

set_bar_series_layout (Set Bar Series Layout), grade C

Update the Bar chart series layout using the public Bar contract.

Parameters (JSON Schema): dataset (optional), sessionId (optional), seriesLayout (required), sessionSnapshot (optional); no descriptions or defaults provided

Output Schema (JSON Schema): ok, auth, error, quota, errors, status, visual, message, approval, warnings, canRender, operation, sessionId, datasetSchema, loginRequired, controlSurface, guestSessionId, sessionSnapshot, upgradeRequired, legibilityWarnings (all optional; no descriptions provided)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. 'Update' implies mutation but gives no details on side effects, idempotency, or what happens when switching between layouts (e.g., grouped to stacked_100). It also does not explain that this setting controls how multiple data series render visually.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence is not wasteful in itself, but it is severely undersized for the tool's complexity, and 'using the public Bar contract' is meaningless filler. Front-loading is moot at this level of brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for a tool with complex nested schemas and 4 parameters. While output schema exists (reducing description burden for returns), the input side lacks critical explanations for the bar chart configuration domain and its relationship to sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage with complex nested objects (dataset, sessionSnapshot). Description adds zero parameter guidance—does not explain sessionId vs sessionSnapshot, what seriesLayout values mean visually, or that dataset is optional context versus required input.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States it updates 'Bar chart series layout' which identifies the resource and action, but includes jargon ('public Bar contract') that adds no value. Fails to explain what 'series layout' means in practice (grouping vs stacking) despite clear enum values in schema.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus siblings like set_bar_orientation or set_bar_mapping. Does not mention prerequisites (e.g., existing session) or that this mutates chart state.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
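The enum values named in the critiques above ('grouped', 'stacked_100') at least suggest what a call would look like. The sketch below uses one of them; the session id is invented.

```python
# Hypothetical set_bar_series_layout request. "stacked_100" is one of
# the layout values named in the critique above; the session id is
# invented, and other valid layouts (e.g., "grouped") are per the same
# critique.
payload = {"sessionId": "sess-123", "seriesLayout": "stacked_100"}
```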

set_bar_sort (Set Bar Sort), grade C

Update Bar chart sort settings with the normalized public contract.

Parameters (JSON Schema): sort (required), dataset (optional), sessionId (optional), sessionSnapshot (optional); no descriptions or defaults provided

Output Schema (JSON Schema): ok, auth, error, quota, errors, status, visual, message, approval, warnings, canRender, operation, sessionId, datasetSchema, loginRequired, controlSurface, guestSessionId, sessionSnapshot, upgradeRequired, legibilityWarnings (all optional; no descriptions provided)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. 'Update' implies mutation but lacks disclosure of side effects, validation behavior, or whether changes are persistent. Fails to mention the existence of an output schema or what constitutes a successful operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence format is efficient, but half the content ('with the normalized public contract') adds no actionable value for an AI agent. Front-loaded with the action but carries conceptual cruft.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool has 4 parameters with deep nesting and an output schema. The description is grossly inadequate for this complexity, failing to explain parameter relationships (e.g., how sort interacts with dataset schema) or session management requirements.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to compensate for complex nested parameters (sort, dataset, sessionSnapshot). The description mentions 'sort settings' but provides no semantic guidance for the required 'sort' object properties (by, order, metricAgg, axisSlotId, etc.) or the other three parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

States 'Update Bar chart sort settings' which identifies the verb and target resource, but the phrase 'with the normalized public contract' is opaque implementation jargon that obscures rather than clarifies. Does not differentiate from sibling 'set_' tools (e.g., set_bar_filters, set_bar_mapping) beyond mentioning 'sort'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like set_bar_mapping or update_bar_style. No mention of prerequisites (e.g., requiring an active session via sessionId) or typical usage patterns.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
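The sort object's property names (by, order, metricAgg, axisSlotId) surface only in the schema, never in the description. A hypothetical call, with every value an invented guess at plausible semantics:

```python
# Hypothetical set_bar_sort request. Property names (by, order,
# metricAgg, axisSlotId) come from the schema critique above; all values
# are invented guesses, since none of them are documented.
payload = {
    "sessionId": "sess-123",
    "sort": {
        "by": "metric",         # assumed sort-key selector
        "order": "desc",        # assumed "asc" | "desc"
        "metricAgg": "sum",     # assumed aggregation for metric sorts
        "axisSlotId": "value",  # assumed reference to a mapping slot
    },
}
```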

update_bar_style (Update Bar Style), grade C

Apply a normalized Bar style patch to the current session.

Parameters (JSON Schema): style (required), dataset (optional), sessionId (optional), sessionSnapshot (optional); no descriptions or defaults provided

Output Schema (JSON Schema): ok, auth, error, quota, errors, status, visual, message, approval, warnings, canRender, operation, sessionId, datasetSchema, loginRequired, controlSurface, guestSessionId, sessionSnapshot, upgradeRequired, legibilityWarnings (all optional; no descriptions provided)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. The word 'patch' implies a partial update behavior, but the description fails to clarify whether this merges with existing styles, replaces them entirely, or requires specific session states. No mention of validation failures or atomicity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single 9-word sentence is efficiently structured without redundancy, but its extreme brevity constitutes under-specification for a tool of this complexity rather than purposeful conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (per context signals), the description need not explain return values. However, for a tool accepting complex nested objects (sessionSnapshot, dataset) with 50+ style properties, the description is woefully insufficient to guide correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage across 4 complex parameters (including deeply nested style objects with 50+ properties, and large objects like sessionSnapshot), the description completely fails to compensate. It mentions 'style patch' implying the style parameter, but provides no semantic guidance for dataset, sessionId, or sessionSnapshot structures.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a specific action ('Apply... patch') targeting 'Bar style' in the 'current session', distinguishing it from siblings like set_bar_mapping or set_bar_orientation which handle data configuration rather than visual styling. However, the term 'normalized' is jargon without context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like set_bar_orientation or configure_bar_growth, nor any mention of prerequisites (e.g., requiring an existing session) or side effects.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
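With a 50+ property style object and zero field descriptions, any first call is a shot in the dark. The keys below are invented for illustration and may not exist in the real contract:

```python
# Hypothetical update_bar_style request. 'style' is a patch object with
# 50+ undocumented properties; these keys are invented and may not exist
# in the real contract. Whether the patch merges with or replaces the
# current style is also undisclosed.
payload = {
    "sessionId": "sess-123",
    "style": {
        "barCornerRadius": 4,
        "palette": ["#1f77b4", "#ff7f0e"],
    },
}
```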
