
Toofi Dental Planning MCP

Server Details

Agent-native dental planning MCP for plan drafts, presentations, and price estimates.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.6/5 across 24 of 24 tools scored. Lowest: 2.7/5.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct operation or resource (e.g., checkout, discovery, generation, querying demo data, billing). No two tools have overlapping purposes; even similar-looking tools like get_demo_plan vs get_plan are clearly differentiated by scope (demo vs real).

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using lowercase and underscores. Verbs like create, discover, generate, get, import, list, lookup, preview, start are applied uniformly, making the surface predictable.

Tool Count: 4/5

With 24 tools, the count is above the ideal 3-15 range but still reasonable given the breadth of dental planning workflows (billing, demo data, planning, presentation, audit, etc.). The tools are well-organized and cover distinct sub-domains.

Completeness: 3/5

The tool set covers creation (generate), reading (get/list), and some workflow initiation, but lacks explicit update or delete operations for patients, plans, or other resources. This gap may force agents to rely on generation re-runs instead of incremental updates, limiting lifecycle management.

Available Tools

32 tools
claim_agent_checkout_key: Claim paid agent API key (A)
Read-only · Idempotent

After Stripe Checkout is paid, exchange the purchase id and one-time claim token for the Toofi agent API key used as X-Toofi-Agent-Key or agent_key.

Parameters (JSON Schema)
agent_id (optional): Calling agent identifier.
clinic_id (optional): Clinic identifier for mandate-scoped production execution.
intent_id (optional): Root agent intent id.
mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
request_id (optional): Idempotency and correlation id echoed in Toofi responses.
claim_token (required): One-time claim token returned by create_agent_checkout_session.
purchase_id (required): Purchase id returned by create_agent_checkout_session.
principal_id (optional): Human or clinic principal on whose behalf the agent acts.

Output Schema (JSON Schema)
ok, mode, status, endpoint, timestamp (all optional; no field descriptions provided)
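Under the standard MCP JSON-RPC framing, a `tools/call` request for this tool can be sketched as below. This is a minimal illustration: the helper name `build_tool_call` and every argument value are placeholders, not real Toofi identifiers.

```python
import json

def build_tool_call(name: str, arguments: dict, call_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 `tools/call` request as defined by the MCP spec."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# claim_agent_checkout_key requires purchase_id and claim_token; the
# correlation fields are optional. All values below are placeholders.
request = build_tool_call(
    "claim_agent_checkout_key",
    {
        "purchase_id": "purch_123",         # returned by create_agent_checkout_session
        "claim_token": "tok_one_time_456",  # one-time token, consumed on success
        "request_id": "req-001",            # echoed back in Toofi responses
    },
)

print(json.dumps(request, indent=2))
```

Sending this payload over the server's Streamable HTTP transport is left out, since the exact endpoint URL is not shown on this page.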
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint, openWorldHint, idempotentHint, and destructiveHint, covering core behavioral properties. The description adds context about the key's usage but does not disclose additional traits such as token consumption, authentication requirements, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence of 23 words conveys the entire purpose and usage context. It is front-loaded with the trigger condition and provides the key output. Every word contributes value, with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description does not need to explain return values. It covers the essential workflow steps. However, it omits error conditions or post-claim behavior, which would be helpful for a complete understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description repeats the parameter names purchase_id and claim_token but adds no new meaning beyond what the schema provides. No parameter-specific elaboration is given.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: exchanging purchase_id and claim_token for an API key after Stripe Checkout is paid. It specifies the key's usage (X-Toofi-Agent-Key or agent_key), distinguishing it from the sibling tool create_agent_checkout_session that creates the session.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly indicates when to use this tool ('After Stripe Checkout is paid'), providing clear context. It does not list when-not-to-use or alternatives, but the sibling tool create_agent_checkout_session is implied as the antecedent step.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_agent_checkout_session: Create agent checkout session (C)
Read-only · Idempotent

Create a real Stripe Checkout session for buying Toofi internal credits. Returns a claim token so the agent can retrieve its key after payment.

Parameters (JSON Schema)
email (optional): Alias for owner_email.
units (optional)
agent_id (optional): Calling agent identifier.
quote_id (optional): Quote id from get_agent_billing_quote.
clinic_id (optional): Clinic identifier for mandate-scoped production execution.
intent_id (optional): Root agent intent id.
tool_name (optional): Tool name if quote_id is not supplied.
mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
product_id (optional): Optional Toofi credit-pack product id. Defaults to the public agent credit pack.
request_id (optional): Idempotency and correlation id echoed in Toofi responses.
return_url (optional): Agent/client return URL after checkout.
owner_email (optional): Email used for receipt and backup delivery after Stripe payment.
principal_id (optional): Human or clinic principal on whose behalf the agent acts.

Output Schema (JSON Schema)
ok, mode, status, endpoint, timestamp (all optional; no field descriptions provided)
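The quote_id/tool_name relationship implied by the schema (tool_name is used when no quote_id is supplied) can be captured in a small argument-builder sketch. The helper name `build_checkout_session_args` and all values are hypothetical, and the either/or rule is an assumption drawn from the tool_name parameter's description, not a documented server-side constraint.

```python
import json

def build_checkout_session_args(quote_id=None, tool_name=None, units=None,
                                owner_email=None, return_url=None) -> dict:
    """Assemble arguments for create_agent_checkout_session.

    Every field is optional in the schema; this sketch assumes the agent
    should pass either a quote_id (from get_agent_billing_quote) or a
    tool_name as a fallback when no quote exists.
    """
    if quote_id is None and tool_name is None:
        raise ValueError("supply either quote_id or tool_name")
    args = {
        "quote_id": quote_id,
        "tool_name": tool_name,
        "units": units,
        "owner_email": owner_email,
        "return_url": return_url,
    }
    # Drop unset fields so the request stays minimal.
    return {k: v for k, v in args.items() if v is not None}

args = build_checkout_session_args(
    quote_id="quote_abc",                   # placeholder quote id
    owner_email="agent-owner@example.com",  # receipt + backup key delivery
    return_url="https://example.com/after-checkout",
)
print(json.dumps(args, indent=2))
```

The returned claim token and purchase id would then feed claim_agent_checkout_key after the Stripe payment completes.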
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description claims the tool creates a checkout contract and unlocks paid workflows, implying a state-mutating operation. However, annotations set 'readOnlyHint': true and 'destructiveHint': false, contradicting the described behavior. This is a serious inconsistency that misleads the agent about the tool's side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, front-loaded with the core action and purpose. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having an output schema, the description omits critical context such as the requirement for a quote_id, the checkout flow (e.g., redirect to Stripe), idempotency, and the meaning of 'unlocking paid workflows.' The annotation contradiction also leaves the agent uncertain about whether the tool is safe to use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 90% of parameters with descriptions, so the description adds no additional parameter meaning beyond 'for buying internal credits and unlocking paid MCP workflows.' This meets the baseline expectation given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the action as 'create a checkout contract' for buying internal credits and unlocking paid workflows. It distinguishes from sibling billing tools (e.g., get_agent_billing_quote) by implying a write operation. However, the annotation contradiction ('readOnlyHint': true) undermines confidence in the stated purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like get_agent_billing_quote. The description does not mention prerequisites (e.g., requiring a quote_id) or scenarios where this tool or others should be preferred.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_capabilities: Discover Toofi capabilities (A)
Read-only · Idempotent

Discover Toofi agent-native dental planning capabilities, links, safety boundary, and billing rails.

Parameters (JSON Schema)
agent_id (optional): Calling agent identifier.
clinic_id (optional): Clinic identifier for mandate-scoped production execution.
intent_id (optional): Root agent intent id.
mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
request_id (optional): Idempotency and correlation id echoed in Toofi responses.
principal_id (optional): Human or clinic principal on whose behalf the agent acts.

Output Schema (JSON Schema)
ok, mode, status, endpoint, timestamp (all optional; no field descriptions provided)
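Since every parameter of this tool is optional, a minimal first call needs only a locally generated request_id for correlation. A sketch of the raw MCP tools/call message, assuming the standard JSON-RPC 2.0 framing (the request_id value is generated on the client, not supplied by the server):

```python
import json
import uuid

# discover_capabilities takes only optional correlation/identity fields,
# so a bare call with a fresh request_id is a reasonable first request.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "discover_capabilities",
        "arguments": {"request_id": str(uuid.uuid4())},
    },
}
print(json.dumps(request))
```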
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and non-destructive. Description adds context about what is discovered (safety boundary, billing rails) beyond annotations, without contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence covering key aspects, no unnecessary words. Could be slightly more structured but is concise enough.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given rich schema descriptions, annotations, and presence of output schema, the description is complete enough for understanding tool purpose. Could mention output nature but not required.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers 100% of parameters with descriptions. Description does not add any parameter-specific meaning, so baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool discovers specific Toofi capabilities, links, safety boundary, and billing rails, distinguishing it from sibling tools that perform specific actions like creating, generating, or getting data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or alternatives guidance is given. The tool's name and description imply it should be called first to understand available capabilities, but this is never stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

example_en: Example PDF result - English (A)
Read-only · Idempotent

Return the cached English Toofi example PDF artifact. Same as example-en for human docs; MCP-safe tool name uses underscore.

Parameters (JSON Schema)
agent_id (optional): Calling agent identifier.
clinic_id (optional): Clinic identifier for mandate-scoped production execution.
intent_id (optional): Root agent intent id.
mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
request_id (optional): Idempotency and correlation id echoed in Toofi responses.
principal_id (optional): Human or clinic principal on whose behalf the agent acts.

Output Schema (JSON Schema)
ok, mode, status, endpoint, timestamp (all optional; no field descriptions provided)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true. The description adds that the tool returns a static PDF artifact, providing content context beyond annotations without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, no wasted words. Structure is effective for conveying core information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a static example tool with existing output schema and annotations, the description covers key aspects. Could mention that it's for testing/demo purposes but otherwise sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with all parameters described. The description adds no additional meaning about parameters, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns a static English PDF artifact, specifying the verb 'Return' and the resource 'static English Toofi example PDF artifact'. It distinguishes from sibling language-specific tools by identifying 'English'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions the naming convention but does not explicitly state when to use this tool over other example tools or provide exclusions. Usage context is implied by the resource name but lacks direct guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

example_pl: Example PDF result - Polish (A)
Read-only · Idempotent

Return the cached Polish Toofi example PDF artifact.

Parameters (JSON Schema)
agent_id (optional): Calling agent identifier.
clinic_id (optional): Clinic identifier for mandate-scoped production execution.
intent_id (optional): Root agent intent id.
mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
request_id (optional): Idempotency and correlation id echoed in Toofi responses.
principal_id (optional): Human or clinic principal on whose behalf the agent acts.

Output Schema (JSON Schema)
ok, mode, status, endpoint, timestamp (all optional; no field descriptions provided)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, so the safety profile is clear. The description adds that it returns a static PDF, but no additional behavioral traits (e.g., file size, caching).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, clear, front-loaded sentence with no waste. Perfectly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple static artifact return with existing output schema and annotations, the description is mostly complete. Could mention it's a demo/test artifact.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers all 6 parameters with descriptions (100% coverage), so the description adds no parameter-specific meaning. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Return' and the specific resource 'static Polish Toofi example PDF artifact', distinguishing it from sibling examples like example_en, example_ru, etc., by language.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., other language examples or the general get_example_result). The description lacks context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

example_ru: Example PDF result - Russian (A)
Read-only · Idempotent

Return the cached Russian Toofi example PDF artifact. Same as example-ru for human docs; MCP-safe tool name uses underscore.

Parameters (JSON Schema)
agent_id (optional): Calling agent identifier.
clinic_id (optional): Clinic identifier for mandate-scoped production execution.
intent_id (optional): Root agent intent id.
mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
request_id (optional): Idempotency and correlation id echoed in Toofi responses.
principal_id (optional): Human or clinic principal on whose behalf the agent acts.

Output Schema (JSON Schema)
ok, mode, status, endpoint, timestamp (all optional; no field descriptions provided)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnly=true, idempotent=true, destructive=false. The description confirms it returns a static artifact, which adds marginal context but does not disclose any additional behavioral traits beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences. The first sentence immediately states the core purpose, and the second provides sibling differentiation. Every sentence adds value with no extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of a detailed input schema, full annotations, and an output schema, the description is sufficiently complete. It clearly indicates the static nature of the artifact, though it could explicitly mention the PDF format, which is implied by the tool name.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 6 parameters have descriptions in the input schema (100% coverage). The description does not add any parameter-specific meaning beyond what the schema already provides, meeting the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns a static Russian Toofi example PDF artifact. It distinguishes from sibling example tools by indicating it is the Russian version and notes the naming convention (underscore for MCP-safe).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the other example tools (example_en, example_pl, etc.). The description does not mention any specific context or conditions for its use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

example_sk: Example PDF result - Slovak (A)
Read-only · Idempotent

Return the cached Slovak Toofi example PDF artifact.

Parameters (JSON Schema)
agent_id (optional): Calling agent identifier.
clinic_id (optional): Clinic identifier for mandate-scoped production execution.
intent_id (optional): Root agent intent id.
mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
request_id (optional): Idempotency and correlation id echoed in Toofi responses.
principal_id (optional): Human or clinic principal on whose behalf the agent acts.

Output Schema (JSON Schema)
ok, mode, status, endpoint, timestamp (all optional; no field descriptions provided)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds 'static' and 'PDF artifact,' reinforcing non-destructive behavior and output type, which is helpful beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with clear verb-object structure, no redundant information, and front-loaded with the action and key detail (Slovak).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a static demo tool with full annotations and output schema, the description is adequate. It could optionally mention demo purpose, but not required given the tool name and sibling context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with all parameters described. The description adds no parameter-level information beyond what the schema provides, meeting baseline expectations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns a 'static Slovak Toofi example PDF artifact,' specifying both the resource type (PDF artifact) and locale (Slovak), which distinguishes it from sibling tools like example_en, example_pl, etc.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus other language examples or get_example_result. The context is implied by the 'Slovak' identifier, but alternatives are not mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

example_ua: Example PDF result - Ukrainian alias (A)
Read-only · Idempotent

Return the cached Ukrainian Toofi example PDF artifact using the ua alias requested by agent integrators.

Parameters (JSON Schema)
agent_id (optional): Calling agent identifier.
clinic_id (optional): Clinic identifier for mandate-scoped production execution.
intent_id (optional): Root agent intent id.
mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
request_id (optional): Idempotency and correlation id echoed in Toofi responses.
principal_id (optional): Human or clinic principal on whose behalf the agent acts.

Output Schema (JSON Schema)
ok, mode, status, endpoint, timestamp (all optional; no field descriptions provided)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true. Description adds that the artifact is 'static' and a 'PDF', confirming non-destructive, idempotent behavior. Adds value beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no wasted words. Clearly front-loaded with the action and result. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple static artifact return, the description is complete. It specifies the language variant and output type. Output schema exists to detail the return value, so no need for further explanation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description does not add any parameter-specific information, but also does not repeat schema details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it returns a static Ukrainian Toofi example PDF artifact using the 'ua' alias. This verb+resource combination distinguishes it from sibling example_* tools that return other language versions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage for Ukrainian alias, but does not explicitly state when to use this tool over alternatives like example_uk. Lacks explicit when-not or alternative guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

example_uk: Example PDF result - Ukrainian (A)
Read-only · Idempotent

Return the cached Ukrainian Toofi example PDF artifact.

Parameters (JSON Schema)
agent_id (optional): Calling agent identifier.
clinic_id (optional): Clinic identifier for mandate-scoped production execution.
intent_id (optional): Root agent intent id.
mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
request_id (optional): Idempotency and correlation id echoed in Toofi responses.
principal_id (optional): Human or clinic principal on whose behalf the agent acts.

Output Schema (JSON Schema)
ok, mode, status, endpoint, timestamp (all optional; no field descriptions provided)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true and destructiveHint=false. Description adds that the artifact is 'static', confirming no side effects, but does not disclose return format (e.g., PDF binary, URL) beyond stating it is an 'artifact'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no extraneous words, immediately conveys the tool's action and output. Title and description together are front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple static artifact retrieval with an output schema present, the description is largely sufficient. Minor gap: does not specify that this is a demo/example tool, but context from sibling names mitigates this.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with all 6 parameters described; description adds no parameter-level information, which is acceptable under high coverage. Baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb 'Return' and resource 'static Ukrainian Toofi example PDF artifact', distinguishing it from sibling example tools for other locales.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, such as the other language example tools or the non-static example tools. Usage context is only implied by the locale in the title.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_dental_treatment_plan_pdf: Generate dental treatment-plan PDF (Grade: A)
Annotations: Idempotent

Start the real Toofi headless treatment-plan pipeline from structured clinical findings: create a service-user runtime plan, invoke AI plan generation, and return an operation id for polling toward a patient-facing PDF output.

Parameters (JSON Schema)
- patient (optional)
- agent_id (optional): Calling agent identifier.
- agent_key (optional): Toofi agent API key. May also be supplied as X-Toofi-Agent-Key header.
- ai_preset (optional)
- clinic_id (optional): Clinic identifier for mandate-scoped production execution.
- intent_id (optional): Root agent intent id.
- mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
- request_id (optional): Idempotency and correlation id echoed in Toofi responses.
- presentation (optional)
- principal_id (optional): Human or clinic principal on whose behalf the agent acts.
- session_token (optional): Alias for runtime_session_token.
- clinical_input (required)
- toofi_agent_key (optional): Alias for agent_key.
- runtime_session_token (optional): Toofi runtime actor token for service-user scoped execution. May also be supplied as x-toofi-session-token header.

Output Schema (JSON Schema)
- ok, mode, status, endpoint, timestamp (all optional, no descriptions provided)
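The tool description above says the call returns an operation id for polling toward the PDF. A polling loop under those semantics might look like the sketch below; `call_tool` is a hypothetical stand-in for an MCP client call, and the `get_operation` tool name, `status` values, and field names are assumptions, since the source does not specify the polling endpoint.

```python
import time

def poll_operation(call_tool, operation_id, interval_s=2.0, timeout_s=120.0):
    """Poll a long-running Toofi operation until it leaves the pending state.

    `call_tool` is a hypothetical MCP-client callable; the 'get_operation'
    tool name, 'operation_id' argument, and status values are illustrative
    assumptions, not confirmed by the server's schema.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = call_tool("get_operation", {"operation_id": operation_id})
        if result.get("status") not in ("pending", "running"):
            return result  # finished (done, failed, etc.)
        time.sleep(interval_s)
    raise TimeoutError(f"operation {operation_id} did not finish in {timeout_s}s")
```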
Behavior: 4/5

Description reveals that the tool creates a runtime plan, invokes AI generation, and returns an operation ID for polling: important async behavior not captured by annotations (which only indicate idempotency and non-destructiveness). It adds context about the pipeline stages and output format, though it does not describe potential side effects like cost or persistence.

Conciseness: 5/5

A single sentence that immediately communicates the action, resource, process, and output. No redundant words; front-loaded with the key action 'Start'. Every part of the sentence earns its place.

Completeness: 4/5

Despite having 12 parameters and nested objects, the description, combined with annotations and an output schema (mentioned in context signals), covers the essential workflow: input, process, output (operation ID). It does not explain prerequisites or failure modes, but the async polling pattern is made clear.

Parameters: 3/5

Schema description coverage is 67%, and the description provides no additional parameter explanations beyond what the schema already includes. It does not clarify relationships or common use cases for the many nested objects (e.g., clinical_input, patient). The description adds marginal value over the schema.

Purpose: 5/5

Description uses the specific verb 'Start' and clearly names the resource 'Toofi headless treatment-plan pipeline' and output 'operation id for polling toward a patient-facing PDF'. It distinguishes itself from siblings like generate_treatment_plan_draft by emphasizing the full pipeline and PDF output.

Usage Guidelines: 3/5

Description states the tool is for initiating the full pipeline from structured clinical findings, but does not explicitly contrast with siblings like generate_treatment_plan_draft or generate_patient_presentation. No when-not-to-use or alternative guidance is provided, leaving the agent to infer from purpose alone.

generate_patient_presentation: Generate patient presentation (Grade: A)
Annotations: Idempotent

Generate a patient-facing Toofi presentation from a treatment plan with signed audit and C2PA-provenanced output surface.

Parameters (JSON Schema)
- plan_id (optional): Plan id for patient presentation generation.
- agent_id (optional): Calling agent identifier.
- plan_ref (optional): Plan reference for patient presentation generation.
- clinic_id (optional): Clinic identifier for mandate-scoped production execution.
- intent_id (optional): Root agent intent id.
- mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
- request_id (optional): Idempotency and correlation id echoed in Toofi responses.
- patient_ref (optional): Patient reference for patient presentation generation.
- principal_id (optional): Human or clinic principal on whose behalf the agent acts.

Output Schema (JSON Schema)
- ok, mode, status, endpoint, timestamp (all optional, no descriptions provided)
Behavior: 4/5

The description adds behavioral context beyond annotations by mentioning 'signed audit and C2PA-provenanced output surface', indicating provenance and integrity requirements. Annotations already provide idempotentHint=true and destructiveHint=false, so the description complements rather than repeats, offering specific output characteristics.

Conciseness: 5/5

The description is a single sentence that front-loads the purpose and key constraints. Every word adds value, with no redundancy. It is appropriately concise for the tool's complexity.

Completeness: 3/5

Given the tool has nine parameters and many siblings, the description is minimal. It does not explicitly state prerequisites (e.g., the plan must be signed) or workflow context. However, the presence of an output schema reduces the need to describe return values, so it is adequate but could be more thorough.

Parameters: 4/5

Although schema description coverage is 100%, the description enriches meaning by specifying that the treatment plan must have a signed audit and that the output is C2PA-provenanced. This adds semantic value beyond individual parameter descriptions, guiding the agent on required plan properties.

Purpose: 5/5

The description uses a specific verb ('Generate') and resource ('patient-facing Toofi presentation'), and includes key constraints ('signed audit and C2PA-provenanced output surface'). This clearly distinguishes it from sibling tools like 'generate_price_estimate' or 'generate_treatment_plan_draft', which focus on different outputs.

Usage Guidelines: 3/5

The description does not explicitly state when to use this tool versus alternatives. While it implies it is used for generating presentations from treatment plans, it lacks guidance on prerequisites or exclusions. With nine parameters and many siblings, some usage context would improve agent selection.

generate_price_estimate: Generate price estimate (Grade: A)
Annotations: Idempotent

Generate a Toofi price estimate from treatment-plan and clinic pricing inputs.

Parameters (JSON Schema)
- plan_id (optional): Plan id for price estimate generation.
- agent_id (optional): Calling agent identifier.
- plan_ref (optional): Plan reference for price estimate generation.
- clinic_id (optional): Clinic identifier for mandate-scoped production execution.
- intent_id (optional): Root agent intent id.
- mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
- request_id (optional): Idempotency and correlation id echoed in Toofi responses.
- patient_ref (optional): Patient reference for price estimate generation.
- principal_id (optional): Human or clinic principal on whose behalf the agent acts.

Output Schema (JSON Schema)
- ok, mode, status, endpoint, timestamp (all optional, no descriptions provided)
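A request-builder sketch for this tool, assuming (the schema does not state it) that at least one of plan_id / plan_ref must identify the plan, and using request_id as the idempotency and correlation key the schema describes. The helper itself is illustrative, not part of the Toofi API surface.

```python
import uuid

def build_price_estimate_request(plan_id=None, plan_ref=None, patient_ref=None,
                                 clinic_id=None, request_id=None):
    """Build an argument dict for generate_price_estimate.

    Assumes at least one of plan_id / plan_ref is required, which the
    schema implies but does not state. A fresh request_id is generated
    when none is supplied, since it serves as the idempotency key.
    """
    if plan_id is None and plan_ref is None:
        raise ValueError("supply plan_id or plan_ref")
    args = {"request_id": request_id or str(uuid.uuid4())}
    if plan_id:
        args["plan_id"] = plan_id
    if plan_ref:
        args["plan_ref"] = plan_ref
    if patient_ref:
        args["patient_ref"] = patient_ref
    if clinic_id:
        args["clinic_id"] = clinic_id
    return args
```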
Behavior: 3/5

Annotations already provide key hints (idempotentHint, non-readOnly, non-destructive). Description adds context about inputs but no further behavioral details like side effects or authorization needs. Adequate but not outstanding.

Conciseness: 5/5

Single sentence, 12 words, no fluff. Front-loaded with action and object. Highly efficient.

Completeness: 4/5

Given 9 parameters (all optional) and an output schema, the description captures the essence. Might benefit from mentioning that at least one of plan_id/plan_ref is needed, but overall sufficient for an agent to understand the tool's role.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents each parameter. The description mentions 'treatment-plan and clinic pricing inputs', which maps partially to schema properties but adds no new semantic nuance beyond the schema.

Purpose: 5/5

Description clearly states the action ('Generate'), the resource ('Toofi price estimate'), and the inputs ('from treatment-plan and clinic pricing inputs'). It distinguishes from siblings like 'generate_treatment_plan_draft' (plan vs. estimate) and 'get_agent_billing_quote' (different quote type).

Usage Guidelines: 3/5

No explicit when-to-use or when-not-to-use guidance. The description implies usage for generating estimates when plan and clinic pricing are available, but it doesn't mention alternatives or prerequisites. Minimal guidance.

generate_treatment_plan_draft: Generate treatment-plan draft (Grade: A)
Annotations: Idempotent

Generate a structured no-memory Toofi treatment-plan draft with visits, estimate, presentation outline, billing metadata, audit receipt shape, and dentist approval boundary.

Parameters (JSON Schema)
- query (optional): Natural-language clinical request for a treatment-plan draft.
- patient (optional)
- agent_id (optional): Calling agent identifier.
- clinic_id (optional): Clinic identifier for mandate-scoped production execution.
- intent_id (optional): Root agent intent id.
- mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
- procedures (optional)
- request_id (optional): Idempotency and correlation id echoed in Toofi responses.
- patient_ref (optional): Agent-scoped patient reference.
- principal_id (optional): Human or clinic principal on whose behalf the agent acts.
- chief_complaint (optional): Patient chief complaint, no PHI required in public demo mode.
- clinical_request (optional): Structured or natural-language clinical request.
- clinical_findings (optional)

Output Schema (JSON Schema)
- ok, mode, status, endpoint, timestamp (all optional, no descriptions provided)
Behavior: 4/5

Annotations already indicate idempotent and non-destructive behavior. The description adds value by stating 'no-memory' (stateless) and listing output components, including 'dentist approval boundary', which hints at an approval flow. No contradictions with annotations.

Conciseness: 5/5

A single sentence that immediately conveys the action and key outputs. No redundant phrases; every word contributes to understanding. Excellent front-loading of verb and resource.

Completeness: 4/5

For a complex tool with 13 parameters and nested objects, the description covers the output scope well. Combined with annotations and output schema, the essential context is present. Could be improved by mentioning prerequisite inputs (e.g., patient data), but that is not required for basic usage.

Parameters: 3/5

Schema description coverage is 77%, so most parameters are documented in the schema. The description does not add parameter-level meaning beyond listing output components. A baseline of 3 is appropriate as the schema carries the primary burden.

Purpose: 5/5

The description clearly states the verb 'generate' and the resource 'treatment-plan draft', listing specific components (visits, estimate, presentation outline, billing metadata, audit receipt shape, dentist approval boundary). This distinguishes it from sibling tools like 'get_plan' (read) or 'generate_price_estimate' (narrower scope).

Usage Guidelines: 3/5

The description implies use for creating a new treatment-plan draft but provides no explicit guidance on when to use this tool versus alternatives (e.g., 'preview_plan_draft_schema'), nor does it state when not to use it. Context like requiring patient and clinical data is only implicit from parameter names.

get_agent_billing_quote: Get agent billing quote (Grade: A)
Annotations: Read-only, Idempotent

Get a deterministic Toofi internal-credit quote for an agent tool call, including TTL, payment rails, Stripe checkout command, and x402-ready contract fields.

Parameters (JSON Schema)
- units (optional)
- agent_id (optional): Calling agent identifier.
- currency (optional, default TOOFI_CREDIT)
- clinic_id (optional): Clinic identifier for mandate-scoped production execution.
- intent_id (optional): Root agent intent id.
- tool_name (optional): MCP tool or Toofi command being priced.
- mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
- request_id (optional): Idempotency and correlation id echoed in Toofi responses.
- principal_id (optional): Human or clinic principal on whose behalf the agent acts.

Output Schema (JSON Schema)
- ok, mode, status, endpoint, timestamp (all optional, no descriptions provided)
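Since the quote carries a TTL, a client should check freshness before acting on the price. A minimal sketch follows; the source only says the quote includes a TTL, so the `issued_at` (epoch seconds) and `ttl_s` field names are assumptions for illustration.

```python
import time

def quote_is_fresh(quote, now=None):
    """Return True while a billing quote is within its TTL.

    `issued_at` (epoch seconds) and `ttl_s` are assumed field names;
    the tool description names a TTL but not its representation.
    """
    now = time.time() if now is None else now
    return now < quote["issued_at"] + quote["ttl_s"]
```

Passing `now` explicitly keeps the check testable and lets callers pin a clock source.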
Behavior: 4/5

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds 'deterministic' and lists return fields (TTL, payment rails, etc.), providing useful behavioral context beyond annotations.

Conciseness: 5/5

Single sentence of 22 words, front-loads purpose with no redundant information. Every word earns its place.

Completeness: 4/5

The tool has an output schema, so return values are likely documented there. The description mentions included fields and is sufficient for a read-only query tool. It lacks an explanation of prerequisites, but annotations cover the safety profile.

Parameters: 3/5

Schema coverage is 78%, so most parameters have descriptions in the schema. The description adds no extra parameter information. Given high coverage, baseline 3 is appropriate.

Purpose: 4/5

The description clearly states the verb 'Get' and the resource 'Toofi internal-credit quote for an agent tool call', listing included fields like TTL and payment rails. It is specific but does not explicitly differentiate from sibling tools.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives like create_agent_checkout_session or get_agent_credit_balance. The description simply states what it does without context for usage.

get_agent_credit_balance: Get agent credit balance (Grade: A)
Annotations: Read-only, Idempotent

Get Toofi internal credit balance for an agent or clinic mandate.

Parameters (JSON Schema)
- agent_id (optional): Calling agent identifier.
- agent_key (optional): Toofi agent API key. May also be supplied as X-Toofi-Agent-Key header.
- clinic_id (optional): Clinic identifier for mandate-scoped production execution.
- intent_id (optional): Root agent intent id.
- mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
- request_id (optional): Idempotency and correlation id echoed in Toofi responses.
- principal_id (optional): Human or clinic principal on whose behalf the agent acts.
- session_token (optional): Alias for runtime_session_token.
- toofi_agent_key (optional): Alias for agent_key.
- runtime_session_token (optional): Toofi runtime actor token for service-user scoped execution. May also be supplied as x-toofi-session-token header.

Output Schema (JSON Schema)
- ok, mode, status, endpoint, timestamp (all optional, no descriptions provided)
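The parameter descriptions note that agent_key and runtime_session_token may alternatively travel as the X-Toofi-Agent-Key and x-toofi-session-token headers. A small helper for the header form is sketched below; the header names come from the schema, while the helper itself is an illustrative convenience, not part of the Toofi API surface.

```python
def toofi_auth_headers(agent_key=None, runtime_session_token=None):
    """Build the header form of the two auth inputs named in the schema.

    X-Toofi-Agent-Key and x-toofi-session-token are taken from the
    parameter descriptions; this helper is illustrative only.
    """
    headers = {}
    if agent_key:
        headers["X-Toofi-Agent-Key"] = agent_key
    if runtime_session_token:
        headers["x-toofi-session-token"] = runtime_session_token
    return headers
```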
Behavior: 3/5

Annotations already indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false. Description adds no behavioral traits beyond mentioning the data source ('Toofi internal credit balance'), which is consistent with annotations.

Conciseness: 5/5

One sentence, 11 words, no redundancy. Purpose is front-loaded and efficiently communicated.

Completeness: 3/5

With 6 parameters, no required fields, and an output schema, the description is sufficient for the tool's read-only function but omits usage context relative to similar billing tools. Annotations supply the safety profile but not completeness of use cases.

Parameters: 3/5

Input schema has 100% coverage with descriptions for all 6 parameters. The description does not add meaning beyond the schema, but the baseline is 3 due to high coverage.

Purpose: 5/5

Description clearly states 'Get Toofi internal credit balance' with scope 'for an agent or clinic mandate', using a specific verb and resource. It distinguishes itself from siblings which focus on patients, plans, or pricing.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives like get_agent_billing_quote. The description lacks context for preferred usage scenarios or exclusions.

get_demo_patient: Get demo patient (Grade: A)
Annotations: Read-only, Idempotent

Get one public no-PHI Toofi demo patient.

Parameters (JSON Schema)
- agent_id (optional): Calling agent identifier.
- clinic_id (optional): Clinic identifier for mandate-scoped production execution.
- intent_id (optional): Root agent intent id.
- mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
- patient_id (optional): Toofi demo patient_id. Defaults to the primary demo record.
- request_id (optional): Idempotency and correlation id echoed in Toofi responses.
- principal_id (optional): Human or clinic principal on whose behalf the agent acts.

Output Schema (JSON Schema)
- ok, mode, status, endpoint, timestamp (all optional, no descriptions provided)
Behavior: 4/5

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds behavioral context by stating 'public no-PHI', confirming the tool deals with safe, non-sensitive data. This goes beyond the annotations.

Conciseness: 5/5

A single sentence that is concise and front-loaded with key information. No wasted words.

Completeness: 4/5

Given the presence of an output schema (not shown) and comprehensive annotations, the description is adequate. It succinctly identifies the tool's purpose and nature, though it could briefly mention the return type or scope.

Parameters: 3/5

All 7 parameters have descriptions in the schema (100% coverage). The description adds minimal extra meaning, only noting that patient_id 'Defaults to the primary demo record.' Baseline 3 is appropriate.

Purpose: 5/5

The description 'Get one public no-PHI Toofi demo patient.' clearly states the verb (Get), resource (demo patient), and key attributes (public, no-PHI). It distinguishes from siblings like 'get_patient' (real PHI) and 'list_demo_patients' (list).

Usage Guidelines: 3/5

The description does not explicitly specify when to use this tool versus alternatives like 'get_patient' or 'list_demo_patients'. The context of sibling tools implies usage for demo purposes, but no direct guidance is provided.

get_demo_plan: Get demo plan (Grade: B)
Annotations: Read-only, Idempotent

Get one public no-PHI Toofi demo treatment plan with estimate and dentist approval boundary.

Parameters (JSON Schema)
- plan_id (optional): Toofi demo plan_id. Defaults to the primary demo record.
- agent_id (optional): Calling agent identifier.
- clinic_id (optional): Clinic identifier for mandate-scoped production execution.
- intent_id (optional): Root agent intent id.
- mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
- request_id (optional): Idempotency and correlation id echoed in Toofi responses.
- principal_id (optional): Human or clinic principal on whose behalf the agent acts.

Output Schema (JSON Schema)
- ok, mode, status, endpoint, timestamp (all optional, no descriptions provided)
Behavior: 3/5

Annotations already declare readOnlyHint, openWorldHint, and destructiveHint, so the description adds minimal value by noting the plan is public and no-PHI. It does not contradict annotations.

Conciseness: 4/5

The description is a single clear sentence with no redundancy, though it could be slightly more structured.

Completeness: 3/5

Given the comprehensive annotations, output schema, and 100% parameter coverage, the description is adequate but lacks usage context and more detail on return value composition.

Parameters: 3/5

Schema description coverage is 100%, with each parameter documented in the schema. The tool description does not add additional meaning beyond the schema, meeting the baseline for high coverage.

Purpose: 5/5

The description clearly states the tool retrieves a public, no-PHI demo treatment plan with estimate and dentist approval boundary, distinguishing it from sibling tools like get_demo_patient and list_demo_plans.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like get_demo_patient or list_demo_plans, nor does it specify any prerequisites or limitations.

get_demo_presentation: Get demo presentation (Grade: A)
Annotations: Read-only, Idempotent

Get one public no-PHI Toofi demo patient presentation shape with provenance surface fields.

Parameters (JSON Schema)
- plan_id (optional): Toofi demo plan_id. Defaults to the primary demo record.
- agent_id (optional): Calling agent identifier.
- clinic_id (optional): Clinic identifier for mandate-scoped production execution.
- intent_id (optional): Root agent intent id.
- mandate_id (optional): Clinic or agent mandate id. Optional in public demo mode.
- request_id (optional): Idempotency and correlation id echoed in Toofi responses.
- principal_id (optional): Human or clinic principal on whose behalf the agent acts.

Output Schema

ParametersJSON Schema
NameRequiredDescription
okNo
modeNo
statusNo
endpointNo
timestampNo
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds useful context about the data: it is a demo, public, contains no PHI, and includes provenance surface fields. This goes beyond the annotations to inform the agent about data sensitivity and content.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 12 words, front-loading the verb and object with precise modifiers. Every word contributes meaning with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists, the description does not need to explain return values. However, it lacks guidance on how the many optional parameters should be used in context. The description is adequate for a simple read tool with full schema coverage but could better connect parameters to usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with each parameter having a clear description. The tool description does not add meaning beyond the schema, which is acceptable as the schema is sufficient. Baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and precisely defines the resource as a 'public no-PHI Toofi demo patient presentation shape with provenance surface fields'. It distinguishes from sibling tools like generate_patient_presentation and get_demo_patient by specifying it's a read operation on a demo presentation with no PHI and provenance fields.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage within the demo domain (public, no-PHI) but provides no explicit guidance on when to use this tool versus alternatives like generate_patient_presentation or get_demo_patient. It does not state when not to use it or mention prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
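To make the parameter surface above concrete, here is a hypothetical Python sketch of how an agent might assemble arguments for get_demo_presentation. The `demo_presentation_args` helper and its defaulting behavior are illustrative assumptions, not part of the Toofi API; only the parameter names and the documented plan_id fallback come from the schema.

```python
import uuid

def demo_presentation_args(plan_id=None, agent_id=None):
    """Build a minimal argument dict for get_demo_presentation.

    Every parameter is optional in public demo mode; per the schema,
    omitting plan_id falls back to the primary demo record. A fresh
    request_id is attached because Toofi echoes it back in responses,
    which helps with idempotency and correlation.
    """
    args = {"request_id": str(uuid.uuid4())}
    if plan_id is not None:
        args["plan_id"] = plan_id
    if agent_id is not None:
        args["agent_id"] = agent_id
    return args
```

Attaching a request_id on every call is a defensive habit rather than a requirement; the schema only says the id is echoed back.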

get_dental_plan_operation_status · Get dental plan operation status · Grade: A
Read-only · Idempotent

Poll a Toofi headless treatment-plan operation until AI planning is completed and ready for presentation/PDF delivery.

Parameters (JSON Schema)

Name | Required | Description
plan_id | Yes | Toofi plan id returned by generate_dental_treatment_plan_pdf.
agent_id | No | Calling agent identifier.
agent_key | No | Toofi agent API key. May also be supplied as X-Toofi-Agent-Key header.
clinic_id | No | Clinic identifier for mandate-scoped production execution.
intent_id | No | Root agent intent id.
mandate_id | No | Clinic or agent mandate id. Optional in public demo mode.
patient_id | No | Patient id returned by generate_dental_treatment_plan_pdf.
request_id | No | Idempotency and correlation id echoed in Toofi responses.
include_pdf | No | When true, completed operations return a signed presentation PDF URL.
operation_id | Yes | Operation id returned by generate_dental_treatment_plan_pdf.
presentation | No
principal_id | No | Human or clinic principal on whose behalf the agent acts.
session_token | No | Alias for runtime_session_token.
toofi_agent_key | No | Alias for agent_key.
runtime_session_token | No | Toofi runtime actor token for service-user scoped execution. May also be supplied as x-toofi-session-token header.

Output Schema

Name | Required
ok | No
mode | No
status | No
endpoint | No
timestamp | No

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark it as read-only, idempotent, and non-destructive. The description adds that it polls until completion, which is a key behavioral trait. However, it does not mention timeout or error scenarios, but overall it does not contradict annotations and adds value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that is concise, front-loaded with the purpose, and contains no extraneous words. It efficiently communicates the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (12 parameters, output schema, annotations), the description is adequate but minimal. It covers the core purpose but does not explain return values (though output schema exists) or failure conditions. Still, it is reasonably complete for a polling tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% parameter description coverage, so the description does not need to add parameter details. It does not provide additional meaning beyond the schema, which is acceptable but not exceptional. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool polls a specific operation until AI planning is completed, using a specific verb 'Poll' and resource 'Toofi headless treatment-plan operation'. It effectively distinguishes itself from sibling tools like get_plan or get_status by focusing on waiting for completion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use after starting an operation (e.g., via generate_dental_treatment_plan_pdf) but does not explicitly state when to use this tool versus alternatives like get_status or get_plan. No exclusion criteria or alternative suggestions are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
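Since the description says "poll until completed" but does not spell out timeout or error scenarios, a client typically supplies that loop itself. Here is a hedged Python sketch of such a poll-until-done loop; `poll_operation` is a hypothetical helper, and the terminal status names "completed" and "failed" are assumptions, since the output schema only promises a "status" field.

```python
import time

def poll_operation(fetch_status, operation_id, plan_id,
                   interval_s=2.0, timeout_s=120.0, sleep=time.sleep):
    """Poll until a terminal status or a timeout.

    fetch_status is any callable taking (operation_id, plan_id) and
    returning a dict with a "status" key -- for example, a thin wrapper
    around a get_dental_plan_operation_status tool call.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        result = fetch_status(operation_id, plan_id)
        # Assumed terminal states; adjust to whatever the server reports.
        if result.get("status") in ("completed", "failed"):
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(
                f"operation {operation_id} still pending after {timeout_s}s")
        sleep(interval_s)
```

Injecting `sleep` keeps the loop testable and lets callers back off more aggressively than the default two-second interval.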

get_example_result · Get example PDF result · Grade: A
Read-only · Idempotent

Return a stable public PDF artifact that shows what Toofi produces: a patient-facing treatment-plan presentation for an example case where tooth 11 and root 11 are missing. No generation is run at call time.

Parameters (JSON Schema)

Name | Required | Description
locale | No | Alias for language.
agent_id | No | Calling agent identifier.
language | No | Example PDF language: en, pl, ru, sk, uk, or ua.
clinic_id | No | Clinic identifier for mandate-scoped production execution.
intent_id | No | Root agent intent id.
mandate_id | No | Clinic or agent mandate id. Optional in public demo mode.
request_id | No | Idempotency and correlation id echoed in Toofi responses.
principal_id | No | Human or clinic principal on whose behalf the agent acts.

Output Schema

Name | Required
ok | No
mode | No
status | No
endpoint | No
timestamp | No

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint, openWorldHint, idempotentHint, and destructiveHint=false. The description adds that no generation occurs at call time, reinforcing the read-only, idempotent nature. It does not contradict annotations and provides additional context about stability and publicity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no superfluous words. It front-loads the core purpose and adds a crucial behavioral note. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the output schema exists and annotations cover behavioral context, the description does not explain the role of the many optional parameters or whether they influence the returned PDF (e.g., language). This leaves a gap for an agent to understand parameter usage fully.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds no parameter-specific information beyond what the schema already provides; it does not clarify how parameters affect the output or whether they are required.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns a stable public PDF artifact for an example case, explicitly differentiating from real generation. The phrase 'No generation is run at call time' further clarifies the static nature, distinguishing it from sibling tools like generate_dental_treatment_plan_pdf.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for demonstration or preview purposes, and the sibling context reinforces this. However, it does not explicitly state when to avoid this tool or provide direct alternatives, leaving some ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
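The language/locale alias pair in this schema is exactly the kind of parameter interaction the critique above flags. A client-side sketch of one reasonable resolution policy, in Python: the `resolve_language` helper is hypothetical, the schema documents locale only as an alias for language, and both the "language wins when both are given" rule and the "en" default are assumptions.

```python
# Enumerated in the language parameter's schema description.
SUPPORTED_LANGUAGES = {"en", "pl", "ru", "sk", "uk", "ua"}

def resolve_language(language=None, locale=None, default="en"):
    """Pick the effective language argument for get_example_result.

    Prefers language over its locale alias when both are supplied,
    then falls back to a default -- both choices are assumptions,
    not documented server behavior.
    """
    chosen = language if language is not None else locale
    if chosen is None:
        chosen = default
    if chosen not in SUPPORTED_LANGUAGES:
        raise ValueError(f"unsupported example PDF language: {chosen!r}")
    return chosen
```

Validating against the enumerated set before calling the tool turns a vague server-side failure into an immediate, explainable client error.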

get_patient · Get clinic patient · Grade: B
Read-only · Idempotent

Get one clinic patient through Toofi agent-native mandate rails.

Parameters (JSON Schema)

Name | Required | Description
plan_id | No | Plan id for patient retrieval.
agent_id | No | Calling agent identifier.
plan_ref | No | Plan reference for patient retrieval.
clinic_id | No | Clinic identifier for mandate-scoped production execution.
intent_id | No | Root agent intent id.
mandate_id | No | Clinic or agent mandate id. Optional in public demo mode.
request_id | No | Idempotency and correlation id echoed in Toofi responses.
patient_ref | No | Patient reference for patient retrieval.
principal_id | No | Human or clinic principal on whose behalf the agent acts.

Output Schema

Name | Required
ok | No
mode | No
status | No
endpoint | No
timestamp | No

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, and non-destructive behavior. The description adds only the vague concept of 'mandate rails', which hints at authorization but lacks concrete details. Overall, behavioral transparency is adequate given the strong annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, making it concise. The main action 'Get one clinic patient' is front-loaded. However, the appended phrase 'through Toofi agent-native mandate rails' could be clearer or restructured for better understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 9 optional parameters and an output schema, the description is minimally adequate but lacks context about the mandate mechanism and when this retrieval is appropriate compared to sibling tools. It does not explain the significance of the parameters introduced by the mandate rails.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% parameter description coverage, so each parameter is already documented. The description does not add parameter semantics. Baseline of 3 is appropriate since the schema handles the detailed parameter meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get one clinic patient', which is a specific verb and resource. However, it does not explicitly differentiate from sibling tools like list_patients or get_demo_patient, and the phrase 'through Toofi agent-native mandate rails' adds jargon without clarifying uniqueness.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as list_patients or get_demo_patient. It does not mention required context, preconditions, or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_plan · Get treatment plan · Grade: B
Read-only · Idempotent

Get one Toofi treatment plan through agent-native mandate rails.

Parameters (JSON Schema)

Name | Required | Description
plan_id | No | Plan id for plan retrieval.
agent_id | No | Calling agent identifier.
plan_ref | No | Plan reference for plan retrieval.
clinic_id | No | Clinic identifier for mandate-scoped production execution.
intent_id | No | Root agent intent id.
mandate_id | No | Clinic or agent mandate id. Optional in public demo mode.
request_id | No | Idempotency and correlation id echoed in Toofi responses.
patient_ref | No | Patient reference for plan retrieval.
principal_id | No | Human or clinic principal on whose behalf the agent acts.

Output Schema

Name | Required
ok | No
mode | No
status | No
endpoint | No
timestamp | No

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and idempotentHint=true. The description adds the concept of 'mandate rails' but does not explain its implications beyond what annotations convey. It provides marginal additional behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that is concise and front-loaded with the verb and resource. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 9 optional parameters and no clarification on their interplay (e.g., plan_id vs plan_ref), the description is insufficient for an agent to confidently invoke the tool. The presence of an output schema does not compensate for the lack of usage context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, so baseline is 3. The description does not add any parameter-specific meaning beyond the schema descriptions, offering no clarification on parameter relationships or required inputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get one Toofi treatment plan' specifying the verb and resource. However, it fails to differentiate from sibling tools like get_demo_plan or list_plans, leaving ambiguity about which tool to use for plan retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., get_demo_plan for demo plans, list_plans for multiple plans). The description lacks any usage context or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
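The unclarified plan_id vs plan_ref interplay called out above is something a client has to resolve with its own policy. One hedged Python sketch: the `plan_lookup_args` helper is hypothetical, and both the precedence rule (canonical id over external reference) and the at-least-one-selector requirement are assumptions, not documented behavior.

```python
def plan_lookup_args(plan_id=None, plan_ref=None, patient_ref=None):
    """Build selector arguments for a get_plan call.

    The schema exposes plan_id, plan_ref, and patient_ref as optional
    and never states how they interact. This sketch assumes plan_id (a
    canonical Toofi id) takes precedence over plan_ref (an external
    reference), and that at least one selector must be present.
    """
    if plan_id is None and plan_ref is None and patient_ref is None:
        raise ValueError(
            "supply at least one of plan_id, plan_ref, patient_ref")
    args = {}
    if plan_id is not None:
        args["plan_id"] = plan_id
    elif plan_ref is not None:
        args["plan_ref"] = plan_ref
    if patient_ref is not None:
        args["patient_ref"] = patient_ref
    return args
```

Sending only the winning selector, rather than all of them, avoids relying on unspecified server-side precedence.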

get_status · Get workflow status · Grade: B
Read-only · Idempotent

Get Toofi agent workflow status.

Parameters (JSON Schema)

Name | Required | Description
plan_id | No | Plan id for workflow status.
agent_id | No | Calling agent identifier.
plan_ref | No | Plan reference for workflow status.
clinic_id | No | Clinic identifier for mandate-scoped production execution.
intent_id | No | Root agent intent id.
mandate_id | No | Clinic or agent mandate id. Optional in public demo mode.
request_id | No | Idempotency and correlation id echoed in Toofi responses.
patient_ref | No | Patient reference for workflow status.
principal_id | No | Human or clinic principal on whose behalf the agent acts.

Output Schema

Name | Required
ok | No
mode | No
status | No
endpoint | No
timestamp | No

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly, idempotent, and not destructive. The description adds minimal behavioral context beyond restating the tool's purpose, but does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, concise and front-loaded with the action verb. Every word is necessary and there is no waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high schema coverage, presence of output schema, and comprehensive annotations, the minimal description is sufficient for an agent to understand the tool's basic function. It does not explain the meaning of 'workflow status', but the output schema likely covers that.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the description does not need to elaborate on parameters. The description adds no parameter information, which is acceptable given the schema completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'Toofi agent workflow status', making the tool's function unambiguous. However, it does not explicitly differentiate from sibling 'get' tools like get_plan or get_patient, though the resource specificity helps.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No usage guidance is provided; the description offers no information on when to use this tool versus other get tools or alternative approaches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

import_price_csv · Import clinic price CSV · Grade: A
Idempotent

Import and map a clinic price CSV into Toofi pricing configuration.

Parameters (JSON Schema)

Name | Required | Description
dry_run | No
agent_id | No | Calling agent identifier.
csv_text | No | CSV text payload.
file_url | No | URL to a CSV file.
clinic_id | No | Clinic identifier for mandate-scoped production execution.
intent_id | No | Root agent intent id.
mandate_id | No | Clinic or agent mandate id. Optional in public demo mode.
request_id | No | Idempotency and correlation id echoed in Toofi responses.
principal_id | No | Human or clinic principal on whose behalf the agent acts.

Output Schema

Name | Required
ok | No
mode | No
status | No
endpoint | No
timestamp | No

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate idempotent, non-destructive, and open-world behavior. The description adds little beyond the action phrase; it does not disclose error handling, overwrite semantics, or mapping process details. With annotations present, the bar is lower, but the description remains minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with zero wasted words. Every element ('Import and map', 'clinic price CSV', 'Toofi pricing configuration') adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While an output schema exists and schema coverage is high, the description does not address important contextual aspects for an import tool, such as data validation, error handling, or how mapping works. It is adequate for a parameter-rich tool but lacks completeness in explaining the overall operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is high (89%), so the schema already documents parameters well. The tool description does not add any additional meaning or context for the parameters beyond what is in the schema. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies a concrete action ('Import and map') and a clear resource ('clinic price CSV into Toofi pricing configuration'), distinguishing it from sibling tools that deal with patients, plans, or billing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is used for importing price CSV data, but does not provide explicit guidance on when to use it versus alternatives like generate_price_estimate or list_plans. No exclusions or when-not scenarios are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
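The csv_text/file_url pair and the undescribed dry_run flag illustrate the gaps noted above. A hypothetical Python sketch of a client-side payload builder: `price_import_payload` is not part of the Toofi API, and both the exactly-one-of rule for csv_text/file_url and the dry_run-by-default policy are assumptions layered on top of the schema.

```python
def price_import_payload(clinic_id, csv_text=None, file_url=None,
                         dry_run=True, request_id=None):
    """Build a request payload for import_price_csv.

    The schema marks both csv_text and file_url optional without stating
    their relationship; requiring exactly one is an assumption. So is
    defaulting dry_run to True (preview the mapping before committing).
    """
    if (csv_text is None) == (file_url is None):
        raise ValueError("provide exactly one of csv_text or file_url")
    payload = {"clinic_id": clinic_id, "dry_run": dry_run}
    if csv_text is not None:
        payload["csv_text"] = csv_text
    else:
        payload["file_url"] = file_url
    if request_id is not None:
        payload["request_id"] = request_id
    return payload
```

A dry-run-first default is a conservative stance for an import that may overwrite pricing configuration; the actual overwrite semantics are undocumented.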

list_audit_receipts · List signed audit receipts · Grade: B
Read-only · Idempotent

List signed Toofi agent invocation receipts and provenance records.

Parameters (JSON Schema)

Name | Required | Description
plan_id | No | Plan id for audit receipt listing.
agent_id | No | Calling agent identifier.
plan_ref | No | Plan reference for audit receipt listing.
clinic_id | No | Clinic identifier for mandate-scoped production execution.
intent_id | No | Root agent intent id.
mandate_id | No | Clinic or agent mandate id. Optional in public demo mode.
request_id | No | Idempotency and correlation id echoed in Toofi responses.
patient_ref | No | Patient reference for audit receipt listing.
principal_id | No | Human or clinic principal on whose behalf the agent acts.

Output Schema

Name | Required
ok | No
mode | No
status | No
endpoint | No
timestamp | No

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description adds no further behavioral context such as pagination, filtering behavior, or response structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with verb, no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 9 optional parameters and an output schema, the description does not explain how parameters filter results or typical use cases, leaving the agent underinformed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage, so the description adds no extra meaning beyond parameter names and brief descriptions already present.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists 'signed Toofi agent invocation receipts and provenance records,' which is specific and distinct from sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives, no mention of prerequisites or context where it should be avoided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_demo_patients · List demo patients · Grade: A
Read-only · Idempotent

List public no-PHI Toofi demo patients so agents can inspect patient response structure.

Parameters (JSON Schema)

Name | Required | Description
agent_id | No | Calling agent identifier.
clinic_id | No | Clinic identifier for mandate-scoped production execution.
intent_id | No | Root agent intent id.
mandate_id | No | Clinic or agent mandate id. Optional in public demo mode.
request_id | No | Idempotency and correlation id echoed in Toofi responses.
principal_id | No | Human or clinic principal on whose behalf the agent acts.

Output Schema

Name | Required
ok | No
mode | No
status | No
endpoint | No
timestamp | No

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, so the description adds the context that this lists public no-PHI demo patients. There is no contradiction. The description does not add significant behavioral detail beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that immediately communicates the tool's purpose. No unnecessary words, perfect conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, comprehensive annotations, and full parameter coverage, the description is complete enough. It captures the essential purpose and context without missing important information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with all parameters described. The description does not add any parameter-specific meaning beyond the schema, so a baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List', the resource 'public no-PHI Toofi demo patients', and the purpose 'inspect patient response structure'. It is clearly distinguished from sibling tools like 'list_patients' (real patients) and 'get_demo_patient' (single patient).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for demo/exploration but does not explicitly state when to use this tool versus alternatives like 'list_patients' or 'get_demo_patient'. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_demo_plans: List demo plans (Grade B)
Read-only · Idempotent

List public no-PHI Toofi demo treatment plans.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| agent_id | No | Calling agent identifier. | |
| clinic_id | No | Clinic identifier for mandate-scoped production execution. | |
| intent_id | No | Root agent intent id. | |
| mandate_id | No | Clinic or agent mandate id. Optional in public demo mode. | |
| request_id | No | Idempotency and correlation id echoed in Toofi responses. | |
| principal_id | No | Human or clinic principal on whose behalf the agent acts. | |

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| ok | No | |
| mode | No | |
| status | No | |
| endpoint | No | |
| timestamp | No | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false. The description adds 'public no-PHI' context but no additional behavioral traits such as pagination or scope limitations beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with no wasted words. It is concise but could be slightly expanded for completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and rich annotations, the description is adequate but lacks guidance on filtering or sibling differentiation, making it minimally complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already describes all parameters. The description adds no extra meaning beyond the schema, meeting the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists public no-PHI Toofi demo treatment plans, specifying the resource and scope. Its demo, no-PHI focus distinguishes it from siblings like list_plans.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives such as list_plans or get_demo_plan. The description does not provide usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_patients: List clinic patients (Grade B)
Read-only · Idempotent

List clinic patients through Toofi agent-native mandate rails.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| plan_id | No | Plan id for patient listing. | |
| agent_id | No | Calling agent identifier. | |
| plan_ref | No | Plan reference for patient listing. | |
| clinic_id | No | Clinic identifier for mandate-scoped production execution. | |
| intent_id | No | Root agent intent id. | |
| mandate_id | No | Clinic or agent mandate id. Optional in public demo mode. | |
| request_id | No | Idempotency and correlation id echoed in Toofi responses. | |
| patient_ref | No | Patient reference for patient listing. | |
| principal_id | No | Human or clinic principal on whose behalf the agent acts. | |

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| ok | No | |
| mode | No | |
| status | No | |
| endpoint | No | |
| timestamp | No | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, destructiveHint=false, providing a strong behavioral baseline. The description adds 'through Toofi agent-native mandate rails', hinting at authorization context, but does not disclose specifics like rate limits or response pagination.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence front-loads the action and resource with no wasted words, though it could be somewhat more informative without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having an output schema and rich annotations, the description is insufficient for a tool with 9 parameters and no required fields. It fails to explain the mandate concept, parameter interplay, or how to construct a valid request.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage; all 9 parameters are documented. The description adds no additional meaning beyond what the schema provides. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'List' and resource 'clinic patients'. It is distinguished from siblings like 'get_patient' (single retrieval) and 'list_demo_patients' (demo scope), making the tool's role clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'get_patient' or 'list_demo_patients'. The description does not outline prerequisites or typical usage scenarios, leaving the agent without decision support.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_plans: List treatment plans (Grade B)
Read-only · Idempotent

List Toofi treatment plans through agent-native mandate rails.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| plan_id | No | Plan id for plan listing. | |
| agent_id | No | Calling agent identifier. | |
| plan_ref | No | Plan reference for plan listing. | |
| clinic_id | No | Clinic identifier for mandate-scoped production execution. | |
| intent_id | No | Root agent intent id. | |
| mandate_id | No | Clinic or agent mandate id. Optional in public demo mode. | |
| request_id | No | Idempotency and correlation id echoed in Toofi responses. | |
| patient_ref | No | Patient reference for plan listing. | |
| principal_id | No | Human or clinic principal on whose behalf the agent acts. | |

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| ok | No | |
| mode | No | |
| status | No | |
| endpoint | No | |
| timestamp | No | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and non-destructive behavior. The description adds the 'mandate rails' context, but does not elaborate on behavior such as pagination, result limits, or what happens when no parameters are provided. It does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, very concise. However, the phrase 'through agent-native mandate rails' is jargon and may confuse agents, slightly reducing clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 9 optional parameters and an output schema, the description does not explain how parameters interact, filtering behavior, or the role of 'mandate' in listing. More context would help agents use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with parameter descriptions. The description does not add extra meaning or clarify relationships between parameters (e.g., plan_id vs plan_ref) beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'list' and the resource 'treatment plans', and distinguishes from siblings by mentioning 'Toofi' and 'agent-native mandate rails'. However, it could be more specific about the scope (e.g., patient-specific vs. clinic-wide plans) to align with sibling list tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool instead of alternatives like 'get_plan' or 'list_demo_plans'. No prerequisites or context for using the mandate rails mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
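The reviews above note that the relationship between plan_id and plan_ref is never spelled out. One defensive pattern, sketched here under the assumption that the two are alternative selectors for the same plan (an assumption, not documented server behavior), is to normalize the filter client-side and attach a fresh request_id, which Toofi echoes back for correlation:

```python
import uuid

def build_list_plans_request(plan_id=None, plan_ref=None, clinic_id=None):
    """Build a list_plans argument dict.

    Assumes plan_id and plan_ref are alternative selectors for the same
    plan (an assumption; the server docs do not say). request_id is
    echoed back in Toofi responses, so a fresh UUID lets the caller
    correlate each response with its originating call.
    """
    if plan_id is not None and plan_ref is not None:
        raise ValueError("pass plan_id or plan_ref, not both")
    args = {"request_id": str(uuid.uuid4())}
    if plan_id is not None:
        args["plan_id"] = plan_id
    if plan_ref is not None:
        args["plan_ref"] = plan_ref
    if clinic_id is not None:
        args["clinic_id"] = clinic_id
    return args
```

Rejecting conflicting selectors client-side avoids depending on whichever precedence the server silently applies.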

lookup_dental_procedures: Look up dental procedures (Grade A)
Read-only · Idempotent

Map a natural-language dental procedure query to structured Toofi procedure catalog entries and pricing anchors. No PHI.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| q | No | Alias for query. | |
| text | No | Alias for query. | |
| limit | No | Maximum number of procedure matches. | |
| query | No | Natural-language dental procedure query, for example "implant and crown for lower molar". | |
| agent_id | No | Calling agent identifier. | |
| language | No | Preferred language hint. | |
| clinic_id | No | Clinic identifier for mandate-scoped production execution. | |
| intent_id | No | Root agent intent id. | |
| mandate_id | No | Clinic or agent mandate id. Optional in public demo mode. | |
| request_id | No | Idempotency and correlation id echoed in Toofi responses. | |
| principal_id | No | Human or clinic principal on whose behalf the agent acts. | |

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| ok | No | |
| mode | No | |
| status | No | |
| endpoint | No | |
| timestamp | No | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds 'No PHI' as extra transparency, but lacks details on other behavioral aspects like rate limits or query ambiguity handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with two short sentences that convey the core purpose and a key constraint. Every word adds value with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and full parameter coverage, the description adequately explains the tool's core function. It could mention typical usage scenarios or edge cases, but overall it is sufficiently complete for a lookup tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds no additional meaning beyond the schema; it does not elaborate on how parameters like 'query' or 'limit' affect behavior.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool maps natural-language dental procedure queries to structured catalog entries and pricing anchors. It specifies input type and output, but does not explicitly differentiate from sibling tools like generate_price_estimate.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for natural-language procedure lookups with the 'No PHI' constraint, but provides no explicit guidance on when to use this tool versus alternatives or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
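The input schema declares q and text as aliases for query without saying how conflicts resolve. A small client-side normalizer keeps call sites from sending conflicting values; the precedence query > q > text used here is an assumption, not documented server behavior:

```python
def normalize_lookup_args(q=None, text=None, query=None, limit=None):
    """Collapse the q/text/query aliases into a single query argument.

    The schema marks q and text as aliases for query; the precedence
    query > q > text applied here is an assumption, not documented
    behavior. limit is passed through when positive.
    """
    resolved = next((v for v in (query, q, text) if v is not None), None)
    if resolved is None:
        raise ValueError("one of query, q, or text is required")
    args = {"query": resolved}
    if limit is not None:
        if limit < 1:
            raise ValueError("limit must be a positive integer")
        args["limit"] = limit
    return args

normalize_lookup_args(q="implant and crown for lower molar", limit=5)
```

Sending only the canonical query field sidesteps whatever alias resolution the server performs.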

preview_plan_draft_schema: Preview plan draft schema (Grade A)
Read-only · Idempotent

Preview the structured Toofi treatment-plan draft response schema for agent-native clinical workflows.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| agent_id | No | Calling agent identifier. | |
| clinic_id | No | Clinic identifier for mandate-scoped production execution. | |
| intent_id | No | Root agent intent id. | |
| mandate_id | No | Clinic or agent mandate id. Optional in public demo mode. | |
| procedures | No | | |
| request_id | No | Idempotency and correlation id echoed in Toofi responses. | |
| principal_id | No | Human or clinic principal on whose behalf the agent acts. | |

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| ok | No | |
| mode | No | |
| status | No | |
| endpoint | No | |
| timestamp | No | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, which fully covers the behavioral profile. The description does not add additional behavioral context (e.g., side effects, state changes) nor contradict the annotations, but also does not expand on them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence of 13 words efficiently conveys the tool's purpose. It is front-loaded with the core action and resource, with no filler or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (previewing a schema) and the presence of an output schema and comprehensive annotations, the description is adequately complete. It could briefly mention that no data is modified, but annotations already cover that.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 86%, so most parameters are already explained in the input schema. The description does not add any parameter-specific context, so it meets but does not exceed the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies a clear action ('preview') and resource ('structured Toofi treatment-plan draft response schema'), and implies a distinct use case from siblings like generate_treatment_plan_draft or get_plan. The verb+resource combination is unique and informative.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for previewing schema structure, but lacks explicit guidance on when to use it versus alternatives, or when not to use it. No exclusions or sibling comparisons are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
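preview_plan_draft_schema returns a response schema rather than data, so its natural use is pre-flight validation of draft payloads. A sketch of that pattern, using a minimal JSON-Schema-style check (only the required and properties keywords are honored here, and the draft schema shown is a hypothetical stand-in for what the tool actually returns):

```python
def check_required(schema: dict, instance: dict) -> list:
    """Return the required properties missing from instance.

    Minimal JSON-Schema-style check: honors only 'required' and nested
    'properties'. The draft_schema below is a hypothetical stand-in for
    a preview_plan_draft_schema response, not the real schema.
    """
    missing = [k for k in schema.get("required", []) if k not in instance]
    for key, sub in schema.get("properties", {}).items():
        if isinstance(instance.get(key), dict) and isinstance(sub, dict):
            missing += [f"{key}.{m}" for m in check_required(sub, instance[key])]
    return missing

draft_schema = {  # hypothetical stand-in
    "required": ["plan", "procedures"],
    "properties": {"plan": {"required": ["id"], "properties": {}}},
}
```

Validating a draft against the previewed schema before passing it downstream lets an agent fail fast instead of discovering shape mismatches mid-workflow.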

start_pano_markup: Start panoramic X-ray markup (Grade C)
Idempotent

Start Toofi panoramic X-ray markup workflow under agent-native clinical planning rails.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| plan_id | No | Plan id for panoramic X-ray markup. | |
| agent_id | No | Calling agent identifier. | |
| plan_ref | No | Plan reference for panoramic X-ray markup. | |
| clinic_id | No | Clinic identifier for mandate-scoped production execution. | |
| intent_id | No | Root agent intent id. | |
| mandate_id | No | Clinic or agent mandate id. Optional in public demo mode. | |
| request_id | No | Idempotency and correlation id echoed in Toofi responses. | |
| patient_ref | No | Patient reference for panoramic X-ray markup. | |
| principal_id | No | Human or clinic principal on whose behalf the agent acts. | |

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| ok | No | |
| mode | No | |
| status | No | |
| endpoint | No | |
| timestamp | No | |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate non-readonly, idempotent, open-world, and non-destructive behavior. The description adds minimal behavioral context beyond 'starts a workflow,' which is vague. It does not explain what the workflow entails (e.g., whether it creates a record, triggers processing, or requires follow-up steps).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence is concise, but it lacks structure and important details. The jargon-heavy phrase 'agent-native clinical planning rails' reduces clarity, making the sentence less effective than it would be in plain language.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having an output schema and many parameters, the description does not explain the return value, the workflow lifecycle, or how this tool fits into the broader clinical planning process. More context is needed for proper agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all 9 parameters, so baseline is 3. The description adds no extra semantics beyond the schema itself; parameter descriptions are adequate but not enriched by the tool description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a specific verb ('start') and resource ('panoramic X-ray markup workflow'), which clearly indicates the tool's action and domain. However, the phrase 'under agent-native clinical planning rails' is jargon that may obscure the purpose for some agents. It does not explicitly distinguish from sibling tools like generate_treatment_plan_draft, which could be the next step after markup.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as generate_treatment_plan_draft or create_agent_checkout_session. No preconditions or context for invocation are provided, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
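start_pano_markup is annotated idempotent but not read-only, and request_id is described as an idempotency and correlation id. That combination suggests a retry wrapper that reuses the same request_id across attempts; whether the server actually deduplicates on request_id is an assumption drawn from that parameter description, and call_tool here is a hypothetical transport callable:

```python
import uuid

def start_with_retry(call_tool, plan_id, attempts=3):
    """Retry start_pano_markup with a stable request_id.

    Reusing one request_id across retries leans on the documented
    "idempotency and correlation id" role of that field; whether the
    server deduplicates on it is an assumption. call_tool is any
    callable taking (tool_name, arguments) and returning a dict.
    """
    request_id = str(uuid.uuid4())  # fixed for all attempts
    last_error = None
    for _ in range(attempts):
        try:
            return call_tool("start_pano_markup",
                             {"plan_id": plan_id, "request_id": request_id})
        except ConnectionError as exc:  # transient transport failure
            last_error = exc
    raise last_error
```

If the server honors request_id-based deduplication, a retry after a dropped response cannot start the markup workflow twice.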

Resources