Chia Health MCP Server
Server Details
Licensed US telehealth — GLP-1 medications, intake, consents, Stripe ACP. HIPAA-compliant, 30 tools.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: chia-health/chia-mcp
- GitHub Stars: 5
- Server Listing: Chia Health MCP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
34 tools

auth.check_payment - Check Payment Status (Read-only)
Check if the patient has completed payment and upgrade the token to full scope. Call this after sharing the checkout payment link with the patient.
Poll every 10-15 seconds. When payment is detected, the token is automatically upgraded to full scope, unlocking portal tools (care plan, refills, messaging, weight logs, etc.).
Requires the guest token from auth.verify_otp as bearer_token.
| Name | Required | Description | Default |
|---|---|---|---|
| bearer_token | No | Guest token from auth.verify_otp |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
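The polling workflow described above can be sketched as a small loop. This is a minimal sketch, assuming a hypothetical `call_tool` helper for your MCP client and an assumed `paid` flag in the response; the actual output schema only documents a generic `result`:

```python
import time

def poll_payment(call_tool, guest_token, interval=12, max_attempts=20):
    """Poll auth.check_payment every 10-15 seconds until payment is detected.

    `call_tool` is a stand-in for whatever MCP invocation your platform
    provides; the `paid` response key is illustrative, not from the schema.
    """
    for _ in range(max_attempts):
        result = call_tool("auth.check_payment", {"bearer_token": guest_token})
        if result.get("paid"):
            # Token has been upgraded to full scope; portal tools are unlocked.
            return result
        time.sleep(interval)
    raise TimeoutError("payment not detected within polling window")
```

The interval default sits in the middle of the recommended 10-15 second window; an agent would typically share the checkout payment link first, then enter this loop.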
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description contradicts the readOnlyHint: true annotation by stating the tool 'upgrade[s] the token to full scope,' which is a state-modifying mutation. A read-only tool should not perform token escalation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences covering purpose, polling behavior, and parameter requirements. No redundant information; every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive coverage of the payment verification workflow including polling strategy and downstream effects (unlocking portal tools). However, the contradiction regarding the mutation aspect creates confusion about the tool's actual behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds valuable context by specifying the bearer_token comes from auth.verify_otp, helping the agent understand the authentication flow dependency.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool checks payment completion and upgrades the token to full scope. It effectively distinguishes this from sibling checkout tools (which create payment links) and portal tools (which are unlocked after this step).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent guidance: explicitly states to call 'after sharing the checkout payment link,' specifies polling frequency (10-15 seconds), and identifies the prerequisite bearer token source (auth.verify_otp).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
auth.resend_otp - Resend OTP Code (Destructive)
Resend the verification code to the patient's email. Use this if the original code expired (5-minute window) or was not received.
Requires the session_id from auth.start — no email needed.
| Name | Required | Description | Default |
|---|---|---|---|
| session_id | Yes | Session ID from auth.start |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical behavioral context beyond annotations: the 5-minute expiry window and the fact that no email parameter is needed (counter-intuitive for a resend operation). While annotations indicate destructive/non-idempotent status, the description could have clarified what gets destroyed (previous codes invalidated), preventing a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: action (sentence 1), conditions (sentence 2), requirements (sentence 3). Front-loaded with the specific verb 'Resend' and maintains tight focus on functional semantics without redundant elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a single-parameter utility tool. Prerequisites (session_id source), timing constraints (5-minute window), error conditions (expiry/non-receipt), and output schema existence are all addressed. No gaps remain that would hinder correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage, the description adds value by clarifying that 'no email needed'—addressing the likely assumption that resending requires the recipient address. It also reinforces that session_id comes specifically from auth.start, adding workflow context beyond the schema's basic type description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the action ('Resend the verification code'), the target ('patient's email'), and distinguishes this from siblings like auth.start (which initiates) and auth.verify_otp (which validates). Specific verb + resource combination makes the intent unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use conditions ('if the original code expired (5-minute window) or was not received') and prerequisites ('Requires the session_id from auth.start'). This clearly signals when NOT to use the tool (e.g., for new authentication flows) and establishes dependency on a specific sibling tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
auth.start - Start Session (Destructive)
Start a patient session by providing their contact information. Sends a 6-digit verification code to the patient's email.
Returns a session_id (NOT a token). The session_id is used with auth.verify_otp to prove email ownership and get a bearer token.
The code is in the email subject line: 'Chia Health: Your code is XXXXXX'. If you have access to the patient's email (e.g. Gmail MCP), search for this subject.
No authentication required. Call this when the patient is ready to proceed with their medical intake — after browsing medications and checking eligibility.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Patient's email address | |
| phone | Yes | Patient's phone number (US format) | |
| last_name | No | Patient's last name | |
| first_name | Yes | Patient's first name | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=false and destructiveHint=true; the description adds valuable behavioral context not in annotations, specifically that it 'Sends' an email with a specific subject line format ('Chia Health: Your code is XXXXXX') and clarifies the return semantics (session_id is 'NOT a token'). It explains side effects but does not explicitly address idempotency or retry behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four tightly structured paragraphs logically ordered by: action→return value→email retrieval details→prerequisites/timing. Every sentence adds distinct value (e.g., specific email subject format, clarification that session_id ≠ token, workflow positioning). No redundant or filler content despite the length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the existence of an output schema, the description proactively clarifies critical return value semantics (session_id vs token) and documents the complete two-step authentication flow. It addresses the medical intake context, side effects (email sending), and integration with sibling tools, making it fully complete for an auth initiation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (email, phone, first_name, last_name all documented). The description refers to 'contact information' as a conceptual grouping but does not add specific semantic constraints, formats, or usage guidance beyond what the schema already provides. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states the tool 'Start[s] a patient session' and 'Sends a 6-digit verification code,' providing a specific verb, resource, and action. It clearly distinguishes this from sibling auth.verify_otp by stating the session_id must be used with that tool to 'get a bearer token.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow sequencing: 'Call this when the patient is ready to proceed with their medical intake — after browsing medications and checking eligibility,' directly referencing sibling tool categories (medications.*, eligibility.check). Also clarifies 'No authentication required' and explicitly names auth.verify_otp as the required next step.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
auth.verify_otp - Verify OTP Code (Destructive)
Verify the 6-digit code sent to the patient's email. Returns a guest-scope bearer token for intake, consent, order, and checkout tools.
Requires the session_id from auth.start — no email needed.
After checkout and payment, call auth.check_payment to upgrade the token to full scope for portal access.
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | 6-digit verification code from the email | |
| session_id | Yes | Session ID from auth.start |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
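The two-step flow described above (auth.start, then auth.verify_otp) can be sketched as one function. `call_tool` and the response keys `session_id` and `bearer_token` are illustrative assumptions; the server's output schema only documents a generic `result`:

```python
def authenticate(call_tool, email, phone, first_name, otp_code):
    """Hypothetical sketch of the session_id -> guest token exchange."""
    session = call_tool("auth.start", {
        "email": email,
        "phone": phone,
        "first_name": first_name,
    })
    # session_id is NOT a token; it must be exchanged via auth.verify_otp.
    verified = call_tool("auth.verify_otp", {
        "session_id": session["session_id"],
        "code": otp_code,
    })
    # Guest-scope token: valid for intake, consent, order, and checkout tools.
    return verified["bearer_token"]
```

In practice the agent would obtain `otp_code` from the patient (or, per the auth.start description, by searching the email subject line "Chia Health: Your code is XXXXXX").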
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare destructiveHint=true (OTP consumption) and readOnlyHint=false, which the description doesn't contradict. Description adds valuable behavioral context: token scope limitations (guest vs full), authorized tool categories, and the multi-step auth workflow not evident in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: purpose/return value, prerequisites/constraints, and workflow escalation. Front-loaded with the core action, efficiently structured for an agent parsing the auth flow.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a 2-parameter auth tool with output schema. Covers the full lifecycle (auth.start → verify_otp → checkout → auth.check_payment), explains token scope limitations, and clarifies tool authorization boundaries without needing to duplicate output schema details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (both params documented). Description adds constraint semantics ('no email needed' prevents incorrect usage) and reinforces the session_id origin from auth.start, providing workflow context beyond the schema's basic field descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb (verify) + resource (OTP code) + scope (patient's email). Distinguishes from siblings by clarifying it returns a 'guest-scope' token (vs full scope from auth.check_payment) and listing specific authorized tools (intake, consent, order, checkout).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit prerequisite ('Requires the session_id from auth.start'), explicit exclusion ('no email needed'), and clear workflow continuation ('After checkout... call auth.check_payment'). Names specific sibling tools for the complete authentication lifecycle.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
checkout.cancel - Cancel Checkout (Destructive, Idempotent)
Cancel an in-progress checkout session. Releases any held inventory and cancels the associated Stripe PaymentIntent if not yet captured. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| checkout_id | Yes | Checkout session ID to cancel | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds significant value beyond annotations: discloses specific side effects (inventory release, Stripe PaymentIntent cancellation with conditional 'if not yet captured'). Does not mention idempotentHint=true from annotations, but the specific business logic details (inventory/PaymentIntent) are more actionable than the boolean hint alone.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: (1) core action, (2) specific side effects, (3) prerequisites. Front-loaded with the primary verb. Every clause provides unique information not redundant with schema or annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a destructive operation with existing output schema. Covers cancellation scope, resource side effects, and auth requirements. Could reference idempotent behavior (per annotation) or error states (e.g., already completed), but sufficiently complete given the output schema handles return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description mentions 'authentication' (mapping to bearer_token) and 'checkout session' (mapping to checkout_id) but does not add syntax, format details, or semantic context beyond what the schema property descriptions already provide.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Cancel' + resource 'checkout session' with scope qualifier 'in-progress'. Clearly distinguishes from siblings 'checkout.complete' (finalize) and 'checkout.create' (initiate) through the action verb and state qualifier.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear state constraint ('in-progress') indicating when to use, and explicit prerequisite ('Requires authentication'). Lacks explicit comparison to alternatives (e.g., 'use checkout.complete to finalize instead'), but the 'in-progress' constraint effectively signals appropriate timing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
checkout.complete - Complete Checkout (Destructive, Idempotent)
Complete payment using Stripe ACP (Shared Payment Token). Only use this if your platform supports Stripe Agentic Commerce Protocol and can provision an SPT. If your platform does NOT support ACP, use the payment_url from checkout.create instead, then poll checkout.status. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| checkout_id | Yes | Checkout session ID to complete payment for | |
| bearer_token | No | Authentication token for the patient session | |
| shared_payment_token | Yes | Stripe ACP Shared Payment Token (SPT) provisioned by the client platform |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare idempotent/destructive/write properties; the description adds critical behavioral context including the authentication requirement ('Requires authentication') and the specific protocol constraint (ACP/SPT provisioning). Could explicitly state financial capture side effects, but 'Complete payment' combined with destructiveHint=true is clear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences that are front-loaded with the core action and each earn their place: mechanism, platform requirement, alternative workflow, and auth prerequisite. No redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex financial operation with destructive implications, the description adequately covers prerequisites (ACP support, SPT provisioning), alternatives, authentication needs, and protocol context. The presence of an output schema absolves it from needing to describe return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description maps 'Shared Payment Token' to the SPT parameter and provides protocol context (Stripe ACP), but does not add syntactic details or examples beyond what the schema already documents for checkout_id or bearer_token.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific verb ('Complete') with resource ('payment') and mechanism ('Stripe ACP Shared Payment Token'), clearly distinguishing this from sibling tools like checkout.create by explicitly contrasting the SPT approach with the payment_url alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use ('Only use this if your platform supports Stripe Agentic Commerce Protocol'), when-not-to-use ('If your platform does NOT support ACP'), and names the specific alternative workflow ('use the payment_url from checkout.create instead, then poll checkout.status').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
checkout.create - Create Checkout (Destructive, Idempotent)
Initiate a checkout session for a medication order. Returns checkout details including line items, total, and payment options.
TWO PAYMENT PATHS are supported:
Stripe ACP (preferred): If your platform supports Stripe Agentic Commerce Protocol, provision a Shared Payment Token (SPT) and call checkout.complete to pay instantly.
Payment link (fallback): If ACP/SPT is not available, present the returned payment_url to the patient. This is a Stripe-hosted checkout page where the patient can enter their card and pay directly. After sending the link, call checkout.status to poll for payment completion.
Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| order_id | Yes | Order ID to create checkout for | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
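The two payment paths above can be sketched as a single branch. `call_tool`, `platform_supports_acp`, `provision_spt`, and the response keys (`checkout_id`, `payment_url`) are hypothetical names for illustration:

```python
def pay_for_checkout(call_tool, token, order_id, platform_supports_acp,
                     provision_spt=None):
    """Sketch of the ACP-preferred / payment-link-fallback decision."""
    checkout = call_tool("checkout.create", {
        "order_id": order_id,
        "bearer_token": token,
    })
    if platform_supports_acp:
        # Preferred path: provision a Stripe Shared Payment Token (SPT)
        # and complete payment instantly via checkout.complete.
        spt = provision_spt()
        return call_tool("checkout.complete", {
            "checkout_id": checkout["checkout_id"],
            "shared_payment_token": spt,
            "bearer_token": token,
        })
    # Fallback path: hand the patient the Stripe-hosted payment link,
    # then poll checkout.status for completion (not shown here).
    return {
        "payment_url": checkout["payment_url"],
        "checkout_id": checkout["checkout_id"],
    }
```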
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare safety profile (destructive, idempotent), so the description adds workflow context: it discloses what gets returned (line items, total, payment options, payment_url) and details the two distinct payment flows and their requirements. It does not contradict the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured with the core action front-loaded, followed by bold section headers for the two payment paths, and ending with prerequisites. No filler text; every sentence guides the agent through the workflow.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and the tool's 100% parameter coverage, the description is complete. It explains the return values conceptually (line items, payment options), maps the workflow to specific sibling tools, and covers authentication requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description mentions 'Requires authentication' which maps to bearer_token, but adds no additional semantic details (e.g., token format, source) beyond what the schema already provides for order_id and bearer_token.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific verb 'Initiate' and resource 'checkout session for a medication order,' clearly defining the scope. It distinguishes itself from siblings (checkout.complete, checkout.status) by explicitly positioning this as the starting point for two distinct payment workflows.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance for two payment paths (Stripe ACP preferred vs. payment link fallback), including the exact next steps for each (call checkout.complete vs. present payment_url and poll checkout.status). Also states the prerequisite 'Requires authentication.'
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
checkout.status - Get Checkout Status (Read-only)
Check the payment status of a checkout session. Use this to poll for completion after sending the patient a payment link (the payment_url from checkout.create). When the patient pays via the link, this tool detects the payment, triggers order fulfillment, and returns the confirmation. Poll every 5-10 seconds while waiting. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| checkout_id | Yes | Checkout session ID to check payment status for | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
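The payment-link polling loop described above might look like the following. As before, `call_tool` and the `paid` response key are assumptions; the documented output schema exposes only a generic `result`:

```python
import time

def wait_for_payment(call_tool, token, checkout_id, interval=7, max_attempts=40):
    """Poll checkout.status every 5-10 seconds until payment is confirmed."""
    for _ in range(max_attempts):
        status = call_tool("checkout.status", {
            "checkout_id": checkout_id,
            "bearer_token": token,
        })
        if status.get("paid"):
            # Per the description, fulfillment is triggered server-side
            # once payment is detected; return the confirmation.
            return status
        time.sleep(interval)
    raise TimeoutError("payment not completed within polling window")
```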
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description states the tool 'triggers order fulfillment,' which implies a state-changing side effect. This directly contradicts the readOnlyHint: true annotation which indicates the tool is safe and non-modifying. This is a serious inconsistency regarding the tool's safety profile.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five tightly constructed sentences with zero waste. Front-loaded with the core purpose, followed by usage context, behavioral details, polling instructions, and auth requirements. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists and parameter schemas are complete, the description adequately covers the polling lifecycle, auth requirements, and connection to sibling tools. However, it lacks explicit mention of possible status values or error states, and the contradiction regarding side effects reduces confidence in the description's completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds workflow context connecting checkout_id to the checkout.create flow and mentions the authentication requirement, though it does not add format details or constraints beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Check the payment status') and resource ('checkout session'), and explicitly distinguishes this from checkout.create by referencing the payment_url from that tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow guidance ('poll for completion after sending the patient a payment link'), references the sibling tool checkout.create for context, and specifies polling frequency ('every 5-10 seconds').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
checkout.update - Update Checkout (Destructive, Idempotent)
Update an existing checkout session. Can modify shipping method, apply promo codes, or update customer details before payment is completed. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| updates | Yes | Updates to apply: shipping method, promo code, or customer details | |
| checkout_id | Yes | Checkout session ID to update | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
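An illustrative `updates` payload for this tool might look like the following. The field names `shipping_method` and `promo_code` are guesses based on the description ("shipping method, promo code, or customer details"), not taken from the actual schema:

```python
def build_update_call(checkout_id, token):
    """Assemble a hypothetical checkout.update invocation (tool name, args)."""
    return ("checkout.update", {
        "checkout_id": checkout_id,
        "bearer_token": token,
        "updates": {
            # Illustrative keys only; consult the server's schema for the
            # real structure of the `updates` object.
            "shipping_method": "standard",
            "promo_code": "WELCOME10",
        },
    })
```

Per the description, such an update is only valid before payment is completed; after that point, the checkout can no longer be modified.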
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare idempotent=true and destructive=true, which the description doesn't contradict. Adds valuable behavioral context not in annotations: explicit authentication requirement ('Requires authentication'), lifecycle constraint (pre-payment only), and specific domains that can be modified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each earning its place: purpose declaration, scope/timing constraints, and authentication prerequisite. Front-loaded with core action, zero redundancy, appropriate density for the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive coverage for a mutation tool with nested parameters. Addresses authentication, lifecycle state, and modifiable fields. Could explicitly acknowledge idempotency (noted in annotations) for retry safety, but sufficient given output schema exists and annotations cover safety profile.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing baseline 3. Description reinforces the 'updates' parameter semantics by listing examples (shipping method, promo codes, customer details) but doesn't add syntax constraints, validation rules, or format details beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('Update') and resource ('existing checkout session'). Explicitly distinguishes from sibling 'checkout.complete' by specifying 'before payment is completed' and from 'checkout.create' by emphasizing modification of existing sessions rather than creation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear temporal context ('before payment is completed') establishing when to use this versus completing the checkout. Lists specific modifiable fields (shipping method, promo codes, customer details) guiding appropriate usage. Lacks explicit comparison to 'checkout.create' or 'checkout.cancel' alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
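Over the Streamable HTTP transport, a checkout.update invocation travels as an MCP `tools/call` JSON-RPC 2.0 request. A minimal sketch of the request body; the argument values here are illustrative, not real IDs:

```python
import json

def build_tool_call(name, arguments, request_id=1):
    """Build an MCP tools/call JSON-RPC 2.0 request body."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

req = build_tool_call("checkout.update", {
    "checkout_id": "co_123",                 # illustrative session ID
    "updates": {"promo_code": "WELCOME10"},  # illustrative update payload
    "bearer_token": "guest-token",           # from the auth flow
})
body = json.dumps(req)
```

Because the tool is declared idempotent, retrying this same request body after a network failure should be safe.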
consent.list: Get Required Consents (Read-only)
Get the list of all consent documents a patient must accept before ordering medication. Returns consent IDs, titles, summaries, and order of presentation. Required consents include: telehealth informed consent, compounded medication treatment consent, pharmacy authorization, HIPAA notice of privacy practices, and AI-assisted intake disclosure. Each consent must be fetched individually via consent.text and confirmed by the patient before proceeding. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| intake_id | Yes | Intake ID to get required consents for | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, so the description appropriately focuses on adding workflow context rather than safety warnings. It adds valuable behavioral details: the return structure (IDs, titles, summaries, order), specific examples of consent types expected, and the authentication requirement. It does not contradict the read-only annotation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five well-structured sentences front-load the purpose and return value. The enumeration of specific consent types (telehealth, HIPAA, etc.) adds domain context but slightly increases length; however, every clause serves to clarify the scope or workflow. No redundant or tautological language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and 100% input schema coverage, the description provides complete contextual coverage: it explains the business purpose (medication ordering prerequisite), the multi-step workflow (fetch text separately), authentication needs, and what data is returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing a baseline of 3. The description mentions 'Requires authentication' which loosely references the bearer_token parameter, but does not elaborate on intake_id semantics beyond what the schema already provides. No additional parameter context is necessary given complete schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and resource ('list of all consent documents'), and clarifies the scope ('a patient must accept before ordering medication'). It effectively distinguishes from siblings like consent.text by explaining this returns metadata only, not the full text.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('before ordering medication') and provides clear workflow guidance: 'Each consent must be fetched individually via consent.text and confirmed by the patient before proceeding.' This directly names the alternative tool (consent.text) and implies when NOT to use this tool (don't use for full text or submission).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
consent.status: Get Consent Status (Read-only)
Check whether all required consents are complete for a patient intake. Returns status of each consent and whether the patient can proceed to ordering. This is a gate — order.create will reject if consents are incomplete. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| intake_id | Yes | Intake ID to check consent completion for | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While the readOnlyHint annotation confirms this is a safe read operation, the description adds valuable workflow context by identifying this as a 'gate' and explaining the rejection behavior with order.create. It also explicitly states 'Requires authentication,' clarifying security requirements beyond the parameter schema alone.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The four-sentence structure efficiently delivers the tool's purpose, return value summary, workflow constraint (gate behavior), and security requirement without redundancy. Each sentence advances understanding, with the critical usage guideline positioned prominently to guide agent behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and readOnlyHint annotations, the description appropriately focuses on high-level behavioral context rather than return value details. It successfully covers the essential workflow logic (gate behavior), authentication requirement, and relationship to sibling tools, providing sufficient context for agent decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the parameter definitions already fully document intake_id and bearer_token. The description references 'patient intake' and 'authentication' which align with the parameters but do not add substantial semantic meaning beyond the existing schema descriptions, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Check'), resource ('consents'), and scope ('for a patient intake'). It effectively distinguishes this tool from siblings like consent.list and consent.submit by focusing on status verification, and explicitly references order.create to clarify its specific role in the workflow sequence.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states 'This is a gate — order.create will reject if consents are incomplete,' providing clear guidance on when to invoke this tool (before ordering) and the critical consequence of incomplete consents. It establishes the temporal dependency between consent verification and order creation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
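The gate behavior described above can be expressed as a small client-side guard. The field names in the status payload are assumptions, since the output schema is opaque:

```python
def can_proceed_to_order(consent_status):
    """Return True only when every required consent is complete.
    Mirrors the documented gate: order.create rejects otherwise."""
    consents = consent_status.get("consents", [])
    return bool(consents) and all(c.get("complete") for c in consents)

ready = can_proceed_to_order({
    "consents": [{"id": "telehealth", "complete": True},
                 {"id": "hipaa", "complete": True}],
})
blocked = can_proceed_to_order({
    "consents": [{"id": "telehealth", "complete": True},
                 {"id": "hipaa", "complete": False}],
})
```

An empty consent list is treated as not-ready, so a malformed response fails closed rather than open.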
consent.submit: Submit Consent (Destructive, Idempotent)
Record a patient's consent confirmation for a specific consent document. The agent must have already presented the full consent text (from doctormcp_get_consent_text) to the patient and received explicit confirmation. Required parameters: intake_id, consent_id, the patient's exact confirmation text (e.g. 'I agree'), consent method ('ai_agent_conversational'), the AI platform name ('chatgpt', 'claude', 'gemini'), and a session/conversation ID for audit trail. Returns a consent record with timestamp, audit trail details, and the list of remaining consents still needed. All consent records are retained for 10 years per HIPAA requirements. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| intake_id | Yes | Intake ID this consent belongs to | |
| timestamp | No | ISO 8601 timestamp of consent confirmation | |
| consent_id | Yes | Consent document ID being confirmed | |
| bearer_token | No | Authentication token for the patient session | |
| agent_platform | No | AI platform name: 'chatgpt', 'claude', or 'gemini' | |
| consent_method | No | Method of consent collection | ai_agent_conversational |
| agent_session_id | No | Unique session/conversation ID for audit trail | |
| patient_confirmation | Yes | Patient's exact confirmation text (e.g. 'I agree') |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (destructiveHint: true, idempotentHint: true), the description adds critical compliance context: 10-year HIPAA retention policy, explicit authentication requirement, and specific return value details (timestamp, audit trail, remaining consents). It contextualizes the destructive nature as permanent legal record-keeping.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Dense but appropriately so for a high-stakes medical compliance tool. Information is front-loaded with the core action, followed by prerequisites, parameter specifics, return values, retention policy, and auth requirements. No extraneous content, though the parameter list is lengthy due to the 8-parameter complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive coverage for a complex HIPAA-regulated operation. Includes prerequisites (presentation of text), return value summary (despite presence of output schema), legal retention requirements, audit trail specifications, and authentication needs. Sufficient for an agent to handle the consent workflow correctly and compliantly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds value by grouping parameters logically into 'Required parameters' (clarifying the 3 core fields) versus audit trail fields, and providing concrete examples ('I agree', platform names) that reinforce the schema constraints, though the schema already contains these details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Record a patient's consent confirmation') and the resource (consent document). It distinguishes itself from sibling 'consent.text' by referencing 'doctormcp_get_consent_text' as a prerequisite step, clarifying this is the submission endpoint rather than the retrieval endpoint.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent prerequisite guidance: explicitly states the agent must have already presented consent text from 'doctormcp_get_consent_text' (consent.text) and received explicit confirmation. It defines the exact workflow sequence, preventing misuse by clarifying when this tool should be invoked in the patient interaction flow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
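The parameter grouping above suggests an argument builder like the following sketch. The default for `consent_method` and the ISO 8601 timestamp format mirror the schema; everything else (helper name, validation depth) is illustrative:

```python
from datetime import datetime, timezone

def build_consent_submission(intake_id, consent_id, patient_confirmation,
                             agent_platform=None, agent_session_id=None,
                             consent_method="ai_agent_conversational",
                             timestamp=None):
    """Assemble arguments for consent.submit, omitting unset optionals."""
    args = {
        "intake_id": intake_id,
        "consent_id": consent_id,
        "patient_confirmation": patient_confirmation,
        "consent_method": consent_method,
        "timestamp": timestamp or datetime.now(timezone.utc).isoformat(),
    }
    if agent_platform:
        args["agent_platform"] = agent_platform   # 'chatgpt', 'claude', 'gemini'
    if agent_session_id:
        args["agent_session_id"] = agent_session_id
    return args

args = build_consent_submission("intake_1", "consent_hipaa", "I agree",
                                agent_platform="claude",
                                agent_session_id="sess_42")
```

Capturing the patient's exact wording in `patient_confirmation` matters here, since the record is retained for the audit trail.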
consent.text: Get Consent Text (Read-only)
Fetch the full text of a specific consent document for patient review. Returns the complete consent document split into titled sections that the agent MUST present to the patient verbatim in the conversation — do not summarize or paraphrase. Includes: consent version number, effective date, section headings and body text, a confirmation prompt the patient should agree to, and withdrawal instructions. Available consent types: telehealth informed consent, compounded medication treatment consent, pharmacy authorization, HIPAA notice of privacy practices, and AI-assisted intake disclosure. The patient must explicitly confirm each consent before the agent can call consent.submit. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| consent_id | Yes | Consent document ID from doctormcp_get_required_consents | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite readOnlyHint annotation, description adds critical behavioral constraints: MUST present verbatim without summarizing, lists 5 specific consent types available, details content structure (sections, version, withdrawal instructions), and establishes the confirmation workflow. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense with zero waste. Front-loaded with core action, followed by presentation requirements, content details, available types, workflow prerequisite, and auth requirement. Every sentence provides essential context for a high-stakes healthcare tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent completeness for a complex healthcare consent tool. Despite output schema existing, description outlines return content structure and critical presentation constraints. Covers consent types, version info, withdrawal instructions, and explicit confirmation requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% so baseline is 3. Description adds workflow context that consent_id identifies a specific document from the required consents list, and emphasizes the authentication requirement generally. Slightly elevated above baseline due to workflow integration.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Fetch' with resource 'consent document' and context 'for patient review'. Clearly distinguishes from siblings by referencing consent.submit as the required follow-up action and implying consent.list (or equivalent) as the source of consent_id.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit workflow guidance: 'The patient must explicitly confirm each consent before the agent can call consent.submit'. Also notes authentication requirement and implies prerequisite call to get required consents. Clear when-to-use vs alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
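The verbatim-presentation rule can be enforced mechanically by concatenating the returned sections without touching their text. The `title`/`body` field names are assumptions about the opaque result shape:

```python
def render_consent_verbatim(sections):
    """Join titled consent sections exactly as returned, heading then body,
    so the agent presents the text verbatim rather than paraphrasing."""
    parts = []
    for section in sections:
        parts.append(section["title"])
        parts.append(section["body"])
    return "\n\n".join(parts)

text = render_consent_verbatim([
    {"title": "1. Telehealth Services",
     "body": "You consent to care delivered remotely."},
    {"title": "2. Withdrawal",
     "body": "You may withdraw consent at any time."},
])
```

Keeping the rendering a pure string join, with no summarization step, is the point: there is nowhere for paraphrasing to creep in.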
eligibility.check: Check Eligibility (Read-only)
Pre-screen a patient's basic eligibility for telehealth prescription services. Checks: age (must be 18+), state (must be where our providers are licensed), BMI (must be 20+), pregnancy status (must not be pregnant, planning pregnancy, or breastfeeding), and medical conditions (medullary thyroid carcinoma or MEN2 syndrome are disqualifying). Returns eligibility status, list of available medications, and any disqualifying reasons.
| Name | Required | Description | Default |
|---|---|---|---|
| age | Yes | Patient's age in years (must be 18+) | |
| bmi | No | Patient's Body Mass Index (must be 20+) | |
| sex | No | Patient's biological sex ('male' or 'female') | |
| state | Yes | US state abbreviation where the patient resides | |
| client_ip | No | Client IP address for rate limiting | |
| conditions | No | List of diagnosed medical conditions | |
| pregnancy_status | No | Pregnancy status: 'not_pregnant', 'pregnant', 'planning', or 'breastfeeding' |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare readOnlyHint=true (safe read) and openWorldHint=true, the description adds crucial business logic: specific disqualifying conditions (medullary thyroid carcinoma, MEN2), exact thresholds (18+ years, 20+ BMI), and return value structure (status, medications list, reasons). No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three dense sentences with zero waste. Front-loaded with purpose ('Pre-screen...'), followed by specific check criteria, then return values. Every clause provides actionable information about business logic or data requirements.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 7 parameters with full schema coverage, existing output schema, and readOnly annotation, the description is complete. It discloses all critical business constraints (pregnancy exclusions, BMI thresholds) that an agent needs to understand before invoking the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage, the description adds semantic value by mapping parameters to specific validation rules (e.g., explaining that conditions parameter checks for thyroid carcinoma/MEN2, that state checks licensing coverage). This context helps the agent understand why each parameter matters for eligibility.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Pre-screen') and clear resource ('patient's basic eligibility for telehealth prescription services'). It clearly distinguishes from siblings like auth.*, checkout.*, and medications.* by focusing specifically on eligibility validation criteria.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The term 'Pre-screen' implies this is an entry-point tool to call before other operations, but there is no explicit 'when to use' guidance contrasting it with alternatives like medications.availability or intake.* tools. No prerequisites or exclusions are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
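The documented thresholds can be mirrored client-side to avoid pointless calls. This sketch restates only the rules quoted in the description (18+, BMI 20+, pregnancy exclusions, MTC/MEN2); the state-licensing check is omitted because the licensed-state list is server-side, and none of this substitutes for the server's own check:

```python
DISQUALIFYING_CONDITIONS = {"medullary thyroid carcinoma", "men2 syndrome"}

def prescreen(age, bmi=None, pregnancy_status=None, conditions=()):
    """Return reasons the server would likely disqualify this patient;
    an empty list means the documented checks all pass locally."""
    reasons = []
    if age < 18:
        reasons.append("must be 18+")
    if bmi is not None and bmi < 20:
        reasons.append("BMI must be 20+")
    if pregnancy_status in {"pregnant", "planning", "breastfeeding"}:
        reasons.append("pregnancy status disqualifies")
    if any(c.lower() in DISQUALIFYING_CONDITIONS for c in conditions):
        reasons.append("disqualifying medical condition")
    return reasons

ok = prescreen(age=34, bmi=27.5, pregnancy_status="not_pregnant")
bad = prescreen(age=17, bmi=19.0, conditions=["MEN2 syndrome"])
```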
intake.questions: Get Intake Questions (Read-only)
Get the full medical intake questionnaire a patient needs to complete before a provider can evaluate them for a prescription. Returns two phases of questions: (1) pre_checkout — screening questions including demographics, pregnancy status, weight/BMI, lifestyle, GLP-1 history, and treatment consent; (2) post_checkout — detailed medical history including allergies, MTC/MEN2 history, diagnosed conditions, blood pressure, heart rate, current medications, and surgery history. Both phases must be completed. The questionnaire is reviewed by a licensed US healthcare provider who makes all prescribing decisions.
| Name | Required | Description | Default |
|---|---|---|---|
| client_ip | No | Client IP address for rate limiting | |
| medication | Yes | Medication name to get intake questions for |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Excellent disclosure beyond annotations: details the two-phase structure (pre_checkout vs post_checkout), lists specific medical domains covered (GLP-1 history, MTC/MEN2, etc.), notes that 'both phases must be completed,' and discloses the licensed US healthcare provider review process. Annotations only indicate read-only and external-world; description provides rich medical workflow context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, each earning its place: purpose and return value, detailed phase breakdown, completion requirement, and medical oversight. No redundancy. Front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a medical workflow tool with output schema available. Covers the critical safety/compliance aspect (licensed provider review), explains the questionnaire structure sufficiently for an agent to understand the return value without duplicating output schema details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage ('Medication name to get intake questions for', 'Client IP address for rate limiting'). Description mentions GLP-1 history and prescription evaluation, providing slight contextual motivation for the medication parameter, but does not add significant semantic detail beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' + resource 'medical intake questionnaire' + context 'before a provider can evaluate them for a prescription'. Clearly distinguishes from sibling 'intake.submit' (submission) and 'provider.questions' (provider-facing communication) by specifying this retrieves the patient intake form.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies workflow sequence ('needs to complete before a provider can evaluate') and distinguishes from submission tools, but does not explicitly state 'call this before intake.submit' or mention that 'intake.status' checks completion progress. Usage is clear from context but lacks explicit when/when-not guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
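The two-phase structure can be checked before submission. The phase keys mirror the names in the description (`pre_checkout`, `post_checkout`); the question and answer shapes are assumptions:

```python
def unanswered_questions(questions, answers):
    """Return IDs of required questions (across both phases) lacking answers.
    Both pre_checkout and post_checkout must be complete before intake.submit."""
    missing = []
    for phase in ("pre_checkout", "post_checkout"):
        for q in questions.get(phase, []):
            if q.get("required", True) and q["id"] not in answers:
                missing.append(q["id"])
    return missing

questions = {
    "pre_checkout": [{"id": "dob"}, {"id": "glp1_history"}],
    "post_checkout": [{"id": "allergies"}, {"id": "blood_pressure"}],
}
missing = unanswered_questions(questions,
                               {"dob": "1990-01-01", "allergies": "none"})
```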
intake.status: Get Intake Status (Read-only)
Check the current status of a previously submitted intake questionnaire. Returns whether the intake is under review, approved, or denied by a licensed healthcare provider. Use this to poll for provider review completion before proceeding to order placement. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| intake_id | Yes | Intake ID returned from doctormcp_submit_intake | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true; description adds valuable behavioral context including specific return states (under review, approved, denied), healthcare provider involvement, and polling pattern. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, zero waste. Front-loaded with core action ('Check the current status'), followed by return values, usage guidance, and prerequisites. Each sentence serves distinct purpose without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given output schema exists, description appropriately focuses on domain semantics (provider review states) and workflow integration rather than structural details. Covers authentication needs and polling behavior. Could mention error cases (invalid intake_id) but sufficient for complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter descriptions (intake_id references the submit function, bearer_token explains auth). Description mentions 'Requires authentication' which aligns with bearer_token but doesn't add syntax details beyond the schema. Baseline 3 appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Check' with clear resource 'intake questionnaire status' and defines scope (under review/approved/denied). The phrase 'previously submitted' effectively distinguishes it from sibling intake.submit and intake.questions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('poll for provider review completion') and workflow context ('before proceeding to order placement'). Notes authentication requirement. Could explicitly name alternative tools, but the 'previously submitted' qualifier provides clear differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
intake.submit: Submit Intake (Destructive)
Submit a completed medical intake questionnaire for provider review. All fields from intake.questions must be completed. Returns an intake ID and estimated provider review time. The intake is reviewed by a licensed US healthcare provider who makes all prescribing decisions. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| bearer_token | No | Authentication token for the patient session | |
| patient_name | Yes | Patient's full legal name | |
| patient_email | Yes | Patient's email address | |
| intake_answers | Yes | Completed intake questionnaire answers from doctormcp_get_intake_questions |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructive/non-readOnly operation; description adds critical behavioral context beyond annotations: specific return values ('intake ID and estimated provider review time'), human review process ('reviewed by a licensed US healthcare provider'), and prescribing authority context. Does not clarify destructive nature (irreversibility) but adds healthcare-specific workflow transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences with zero waste: purpose → completion requirements → return values → business process → authentication. Every sentence adds distinct value (action, validation, output, human review loop, security). Front-loaded with core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a healthcare-critical destructive operation. Covers prescribing authority, provider licensing, return values, authentication, and prerequisites. Despite having output schema (per context signals), description appropriately summarizes key return values and adds regulatory/healthcare context essential for agent decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline of 3. Description adds validation semantics ('All fields...must be completed' for intake_answers) and security context ('Requires authentication' contextualizing bearer_token). References sibling tool intake.questions to clarify intake_answers content origin.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb ('Submit'), resource ('completed medical intake questionnaire'), and scope ('for provider review'). It clearly distinguishes from sibling intake.questions by referencing it as the prerequisite source for fields, establishing the workflow sequence.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies clear workflow context ('All fields from intake.questions must be completed') and prerequisites ('Requires authentication'). However, it could be more explicit about when NOT to use, e.g. warning against duplicate submissions, which matters given idempotentHint=false in the annotations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
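The intake.submit contract above ("All fields from intake.questions must be completed") can be enforced client-side before the call. A minimal sketch, assuming answers arrive as a dict keyed by question ID; the helper and its validation rule are illustrative, not server code.

```python
def build_intake_payload(patient_name, patient_email, answers, bearer_token=None):
    """Assemble an intake.submit argument dict, rejecting incomplete answers."""
    # Reject missing or blank answers up front, per the tool's completeness rule.
    if not answers or any(v in (None, "") for v in answers.values()):
        raise ValueError("all intake answers must be completed before submission")
    payload = {
        "patient_name": patient_name,
        "patient_email": patient_email,
        "intake_answers": answers,
    }
    if bearer_token:  # optional per the parameter table
        payload["bearer_token"] = bearer_token
    return payload

payload = build_intake_payload(
    "Jane Doe", "jane@example.com", {"q1": "no", "q2": "180 lbs"}
)
```

Since the annotations mark the tool non-idempotent, a guard like this also gives the agent a natural place to deduplicate before submitting.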
medications.availability · Check Medication Availability · A · Read-only
Check if a specific medication is available for shipping to the patient's state. Some compounded medications have state-specific restrictions based on pharmacy licensing. Returns availability status and reason if unavailable.
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | US state abbreviation (e.g. 'CA', 'NY', 'TX') | |
| client_ip | No | Client IP address for rate limiting | |
| medication | Yes | Medication name to check availability for |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable context beyond annotations: explains the business logic (restrictions 'based on pharmacy licensing') and return behavior ('Returns availability status and reason if unavailable'). Aligns with readOnlyHint=true via the 'Check' verb and does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences with zero waste: first establishes core function, second explains business constraint, third describes return value. Front-loaded with the primary action. No redundant phrasing or repetition of structured schema data.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a read-only availability check: covers business purpose, restriction rationale, and output format. Given 100% schema coverage, existing output schema, and safety annotations, this is sufficient. Minor gap: could explicitly note read-only nature (though annotations cover this) or required parameter combination.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline applies. Description provides contextual framing ('patient's state' for state param) but doesn't add syntax, format constraints, or validation rules beyond the schema. Schema already documents US state abbreviation format and medication name requirements effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specific purpose: 'Check if a specific medication is available for shipping to the patient's state' provides clear verb (Check), resource (medication availability), and scope (shipping to state). Distinguishes from sibling tools like medications.list (enumeration) and medications.pricing (cost) by focusing on geographic eligibility.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context by mentioning 'compounded medications' and 'state-specific restrictions,' suggesting when to use it. However, lacks explicit guidance on when NOT to use this versus medications.details or eligibility.check, and doesn't mention prerequisite steps like needing patient consent or prior authentication.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
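The schema's two-letter state format ('CA', 'NY', 'TX') is cheap to validate before calling medications.availability. A hedged sketch; the pre-flight helper is an assumption, not part of the server.

```python
import re

# Two uppercase letters, per the schema's state examples.
STATE_RE = re.compile(r"^[A-Z]{2}$")

def availability_args(medication, state, client_ip=None):
    """Build the argument dict, normalizing and validating the state code."""
    state = state.strip().upper()
    if not STATE_RE.match(state):
        raise ValueError(f"expected a two-letter US state abbreviation, got {state!r}")
    args = {"medication": medication, "state": state}
    if client_ip:
        args["client_ip"] = client_ip  # optional, used for rate limiting
    return args

args = availability_args("semaglutide", "ca")
```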
medications.categories · List Categories · A · Read-only
List all medication categories available through the telehealth platform: Weight Loss (GLP-1 medications), Peptide Therapy (sermorelin, growth hormone peptides), Anti-Aging & Longevity (NAD+, glutathione), and other treatment categories. Each category includes a description and count of available medications.
| Name | Required | Description | Default |
|---|---|---|---|
| client_ip | No | Client IP address for rate limiting |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true (safe read operation). Description adds valuable behavioral context not in annotations: specific examples of category types (GLP-1, NAD+, etc.) and return structure details (description and count per category). openWorldHint=true aligns with 'and other treatment categories' phrasing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences efficiently combine action, scope, examples, and return structure. Examples (Weight Loss, Peptide Therapy) earn their place by clarifying domain-specific category types. Slightly dense but information-rich.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple listing tool with 100% schema coverage, existing annotations, and an output schema, the description is complete. It adequately describes the domain context (telehealth platform) and return value composition without needing to duplicate output schema details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (client_ip described as 'Client IP address for rate limiting'). Description provides no additional parameter semantics, but baseline 3 is appropriate since schema fully documents the single optional parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('List') + resource ('medication categories') + scope ('available through the telehealth platform'). Distinguishes from sibling tools like medications.list by specifying it returns categories (with examples like Weight Loss, Peptide Therapy) rather than individual medications.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context by describing what the tool returns (categories with descriptions and counts), which helps distinguish it from medications.details or medications.list. However, lacks explicit 'when to use' guidance or named alternatives for when the user needs specific medications rather than categories.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
medications.details · Get Medication Details · A · Read-only
Get detailed information about a specific medication including: all available dosage strengths and titration schedules, available forms (injectable vials, pre-filled syringes, oral dissolving tablets, sublingual drops), all active plan options with pricing for each, what's included (provider consultation, medication, shipping, ongoing support), contraindications, and common side effects. Supports queries by medication name (e.g. 'semaglutide', 'tirzepatide', 'sermorelin', 'NAD+', 'glutathione') or by category (e.g. 'weight loss', 'peptides', 'anti-aging'). Use this to look up exact plan durations and pricing.
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | Filter by category (e.g. 'weight loss', 'peptides', 'anti-aging') | |
| client_ip | No | Client IP address for rate limiting | |
| medication | Yes | Medication name (e.g. 'semaglutide', 'tirzepatide', 'sermorelin', 'NAD+', 'glutathione') |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations establish readOnly safety (readOnlyHint: true) and extensibility (openWorldHint: true). Description adds valuable behavioral context by enumerating specific returnable data (injection forms, plan inclusions, consultation details) without contradicting annotations. Does not mention rate limiting (implied by client_ip parameter) but coverage is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Dense but efficient three-sentence structure. Front-loads 'Get detailed information...' then catalogs specific attributes. The enumerated list is long but every item (e.g., 'titration schedules', 'contraindications') adds distinct value for distinguishing tool capabilities.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent completeness for a complex medication domain with rich sibling toolset. Details returned data comprehensively (clinical, logistical, financial). Presence of output schema means description need not detail return structure, yet it provides sufficient exemplars to inform usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (medication, category, client_ip fully documented). Description reinforces parameter semantics by listing example values that match schema descriptions ('semaglutide', 'weight loss'). Baseline 3 is appropriate as schema carries definition weight.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' with detailed resource enumeration (dosage strengths, titration schedules, forms, pricing, contraindications, side effects). Distinguishes from siblings like medications.list (basic listing) and medications.pricing (likely price-only) by emphasizing comprehensive clinical and commercial details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage directive: 'Use this to look up exact plan durations and pricing.' Lacks explicit 'when-not-to-use' or named sibling comparisons (e.g., medications.pricing), but the comprehensive scope description implicitly guides selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
medications.list · List Medications · A · Read-only
List all available prescription medications including GLP-1 weight loss drugs (compounded semaglutide, compounded tirzepatide), peptide therapies (sermorelin for growth hormone support, NAD+ for cellular energy and anti-aging, glutathione for antioxidant support), and other compounded wellness treatments. Returns medication names, categories, available forms (injectable, oral tablet, sublingual drops, nasal spray), and starting prices. All medications are compounded by US-licensed 503A pharmacies and require evaluation by a licensed US healthcare provider before prescribing.
| Name | Required | Description | Default |
|---|---|---|---|
| client_ip | No | Client IP address for rate limiting |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds significant regulatory context beyond annotations: notes 503A pharmacy compounding requirement and mandatory licensed provider evaluation before prescribing. Annotations cover read-only safety (readOnlyHint=true) and dynamic inventory (openWorldHint=true), while description adds compliance constraints relevant to healthcare AI agents.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences: medication scope with examples, return value specification, and regulatory constraints. Front-loaded with specific drug categories. Slightly dense first sentence but every clause adds distinguishing detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a catalog tool with existing output schema. Covers medication taxonomy, available forms, pricing indicators, and crucial healthcare compliance context (503A pharmacies, prescribing requirements). Completeness accounts for medical domain complexity without needing to duplicate structured output schema details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for the single client_ip parameter ('Client IP address for rate limiting'). Description adds no parameter-specific guidance, but with complete schema coverage, baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'List' with clear resource scope 'all available prescription medications'. Distinguishes from siblings like medications.details/pricing/availability by providing broad catalog examples (GLP-1s, peptides) and stating it returns comprehensive fields (names, categories, forms, prices).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage through comprehensive catalog description, but lacks explicit when-to-use guidance versus alternatives like medications.details (specific medication lookup) or medications.pricing (price-focused). No explicit exclusions or selection criteria provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
medications.pricing · Get Pricing · A · Read-only
Get detailed pricing for a specific medication, form, and plan duration. Returns price breakdown including medication cost, provider consultation fee, shipping, and any applicable discounts for longer plans. Plan durations vary by medication — use medications.details first to see available plan_months values. Supports semaglutide, tirzepatide, sermorelin, NAD+, glutathione and all other available medications.
| Name | Required | Description | Default |
|---|---|---|---|
| form | Yes | Medication form: 'injectable', 'tablet', or 'drops' | |
| client_ip | No | Client IP address for rate limiting | |
| medication | Yes | Medication name | |
| plan_months | Yes | Plan duration in months (1, 4, or 6) |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true; the description adds valuable output transparency by listing specific price breakdown components (medication cost, provider consultation fee, shipping, discounts). It also discloses the behavioral constraint that plan durations vary by medication.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences efficiently front-load purpose and return structure. The closing medication list grounds the open-ended medication parameter (flagged openWorldHint in annotations) with concrete valid examples, earning its place despite length, though it could be slightly more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a 4-parameter tool with output schema: covers workflow prerequisites, previews return value structure without duplicating the output schema, and provides valid medication examples. Minor gap in not mentioning the optional client_ip parameter's role.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage, the description adds critical semantic context: it clarifies that valid plan_months values are medication-dependent and must be retrieved from medications.details first, which the schema alone (listing only examples 1, 4, 6) does not communicate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with the specific action 'Get detailed pricing' and identifies the exact resource (medication pricing by form and duration). It distinguishes from siblings like medications.details by clarifying this returns financial breakdowns versus metadata.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly prescribes the prerequisite workflow: 'use medications.details first to see available plan_months values.' This clearly defines when to use this tool (after determining valid plan durations) versus the sibling tool, preventing invalid calls.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
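The prescribed two-step workflow — medications.details first, then medications.pricing with a supported plan_months value — can be sketched as follows. `details` is a stubbed stand-in for the details tool call, and the response shape (a `plan_months` list) is an assumption for illustration.

```python
def pricing_args(medication, form, plan_months, details):
    """Validate plan_months against the medication's advertised plans."""
    valid = details(medication)["plan_months"]
    if plan_months not in valid:
        raise ValueError(
            f"{medication} supports plans of {valid} months, not {plan_months}"
        )
    return {"medication": medication, "form": form, "plan_months": plan_months}

# Stubbed details response: this medication only offers 1- and 4-month plans.
stub = lambda name: {"plan_months": [1, 4]}
args = pricing_args("semaglutide", "injectable", 4, stub)
```

This is exactly the invalid-call prevention the assessment credits: the agent never requests pricing for a duration the medication does not offer.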
order.create · Create Order · A · Destructive
Create a new prescription medication order. The patient must have completed the intake questionnaire and consent before ordering. Required: medication name, selected form (injectable, tablet, drops), plan duration (1, 4, or 6 months), shipping address. The order is reviewed by a licensed US healthcare provider who makes the final prescribing decision. If approved, the medication is compounded at a US-licensed 503A pharmacy and shipped directly to the patient. Returns order ID, estimated provider review time, and expected delivery window. Payment is processed via Stripe Agentic Commerce Protocol (ACP). Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| form | Yes | Medication form: 'injectable', 'tablet', or 'drops' | |
| intake_id | Yes | Approved intake ID from doctormcp_submit_intake | |
| medication | Yes | Medication name to order | |
| plan_months | Yes | Plan duration in months (1, 4, or 6) | |
| bearer_token | No | Authentication token for the patient session | |
| shipping_address | Yes | Shipping address with keys: line1, city, state, zip, and optional line2 |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Excellent disclosure beyond annotations: explains the human-in-the-loop review ('licensed US healthcare provider who makes the final prescribing decision'), fulfillment process ('compounded at a US-licensed 503A pharmacy'), payment protocol ('Stripe Agentic Commerce Protocol'), and return values. Annotations indicate destructive/write behavior; the description explains exactly what that entails (charges, pharmacy fulfillment, non-guaranteed approval).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Eight sentences with zero waste. Well-structured flow: purpose → prerequisites → parameters → review process → fulfillment → outputs → payment → auth. Every sentence earns its place; no redundant fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of prescription medication ordering, the description is remarkably complete. It covers medical prerequisites, regulatory context (503A pharmacy), payment processing, authentication requirements, and expected outputs. With 100% schema coverage and an output schema present, this level of domain context is exemplary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description lists the required parameters and their valid values (forms, plan months), but largely repeats constraints already present in the schema property descriptions. It adds context linking intake_id to the prerequisite workflow, though the schema already references 'doctormcp_submit_intake'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb-resource pair ('Create a new prescription medication order') that clearly distinguishes this from sibling tools like order.status (read-only), order.documents (file management), and checkout.create (payment initiation without prescription).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states prerequisites ('patient must have completed intake questionnaire and consent before ordering') and references the specific prerequisite tool ('doctormcp_submit_intake' aka intake.submit). It also distinguishes the medical review workflow from simple e-commerce by describing the provider approval gate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
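The shipping_address shape documented in the parameter table (required keys line1, city, state, zip; optional line2) can be checked before invoking the destructive order.create call. The helper and its error messages are an illustrative assumption, not server behavior.

```python
REQUIRED_KEYS = {"line1", "city", "state", "zip"}
OPTIONAL_KEYS = {"line2"}

def validate_shipping_address(address):
    """Raise ValueError if required keys are missing or unknown keys appear."""
    missing = REQUIRED_KEYS - address.keys()
    if missing:
        raise ValueError(f"shipping_address missing keys: {sorted(missing)}")
    unknown = address.keys() - REQUIRED_KEYS - OPTIONAL_KEYS
    if unknown:
        raise ValueError(f"shipping_address has unexpected keys: {sorted(unknown)}")
    return address

addr = validate_shipping_address(
    {"line1": "123 Main St", "city": "Austin", "state": "TX", "zip": "78701"}
)
```

Validating locally first matters here because the tool is annotated destructive: a malformed call risks a failed or duplicate order rather than a harmless error.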
order.documents · Get Required Documents · A · Read-only
Get the list of documents a patient needs to upload for their order. Returns required documents (photo ID, selfie for verification) with upload status and accepted file formats. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| order_id | Yes | Order ID to check required documents for | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description aligns with readOnlyHint: true by using 'Get' and does not contradict annotations. It adds valuable behavioral context beyond annotations: it discloses specific return content (upload status, accepted file formats) and gives concrete examples of required documents (photo ID, selfie), helping the agent understand what to expect from the output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description contains three sentences that are all necessary: the first establishes purpose, the second details return values with specific examples, and the third states authentication requirements. There is no redundant or wasted language; information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and 100% parameter coverage, the description provides adequate context by enumerating specific document types and mentioning authentication. It could be improved by explicitly relating this tool to order.upload in the workflow, but it sufficiently covers the essential information for a read-only list operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description mentions 'Requires authentication,' which loosely maps to the bearer_token parameter, but adds no specific semantic details about parameter format, validation rules, or the relationship between order_id and the patient's session beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('list of documents') with clear scope ('for their order'). It distinguishes from sibling tool order.upload by clarifying this retrieves requirements rather than performing uploads, and specifies concrete document types (photo ID, selfie) that help the agent understand the domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states 'Requires authentication,' indicating a prerequisite, but lacks explicit when-to-use guidance (e.g., 'Use this before order.upload to determine what files are needed') or explicit exclusions. Usage must be inferred from the workflow implied by the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
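The workflow the assessment infers — call order.documents first to see what is outstanding, then upload only those files — can be sketched with a stubbed response. The response shape (a list of `{name, uploaded}` records) is an assumption for illustration; the actual output schema is opaque here.

```python
def pending_documents(documents_response):
    """Return the names of required documents not yet uploaded."""
    return [d["name"] for d in documents_response if not d["uploaded"]]

# Stubbed order.documents result: the selfie is still outstanding.
docs = [
    {"name": "photo_id", "uploaded": True},
    {"name": "selfie", "uploaded": False},
]
todo = pending_documents(docs)
```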
order.status · Get Order Status · A · Read-only
Get the current status of a medication order. Returns status (pending_review, provider_reviewing, approved, needs_info, denied, compounding, shipped, delivered), tracking information, and delivery estimate. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| order_id | Yes | Order ID from doctormcp_create_order | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
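Since order.status enumerates its full set of status values, an agent can branch on them directly. The sketch below groups the documented statuses for polling decisions; the grouping itself (which states are terminal, which warrant another poll) is an assumption, not something the listing specifies, and `next_step` is a hypothetical helper name.

```python
# Illustrative sketch only. The status strings come from the order.status
# description above; how an agent reacts to each one is an assumption.

ACTION_NEEDED = {"needs_info"}                  # provider wants more info
TERMINAL = {"denied", "delivered"}              # no further polling useful
IN_PROGRESS = {"pending_review", "provider_reviewing",
               "approved", "compounding", "shipped"}

def next_step(status: str) -> str:
    """Suggest an agent action for a documented order.status value."""
    if status in ACTION_NEEDED:
        return "fetch provider.questions, then submit provider.respond"
    if status in TERMINAL:
        return "stop polling"
    if status in IN_PROGRESS:
        return "poll order.status again later"
    raise ValueError(f"undocumented status: {status!r}")
```

Raising on an unknown value is a deliberate choice here: it surfaces server-side enum changes instead of silently treating them as in-progress.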
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond the readOnlyHint annotation by enumerating all possible status values (pending_review through delivered) and disclosing that tracking information and delivery estimates are included in the response. It also notes the authentication requirement.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose stated first, followed by specific return values (including the helpful enum list) and the auth requirement. Every clause earns its place, and the length is appropriate for the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, the description appropriately focuses on enumerating the status values and key return fields rather than exhaustive return documentation. It covers the essential behavioral contract for a status-retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds domain context ('medication order') and confirms the authentication requirement, but does not significantly expand on parameter formats or validation rules beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('current status of a medication order') that clearly distinguishes this from sibling tools like order.create, order.documents, or order.upload. The scope is precisely defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the implied usage is clear (retrieve status of an order), there is no explicit guidance on when to use this versus checkout.status or prerequisites like 'use after order.create'. No alternatives or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
order.upload · Upload Document · Destructive, Idempotent
Upload a verification document for a medication order. Accepts photo ID and selfie as base64-encoded files. Supported formats: PDF, JPEG, PNG. Maximum size: 10MB. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| order_id | Yes | Order ID to upload document for | |
| file_name | Yes | Original filename with extension (e.g. 'license.jpg') | |
| file_base64 | Yes | Base64-encoded file content (PDF, JPEG, or PNG, max 10MB) | |
| bearer_token | No | Authentication token for the patient session | |
| document_type | Yes | Document type: 'photo_id' or 'selfie' |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical operational constraints beyond annotations: file format restrictions (PDF, JPEG, PNG), size limits (10MB), and authentication requirements. Annotations already cover idempotent/destructive hints, so description appropriately focuses on domain-specific constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly front-loaded and dense. Each sentence delivers essential constraints (action, accepted types, encoding, formats, size, auth) with zero redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Sufficient for a destructive file upload operation: covers format validation, size limits, and auth requirements. Relies appropriately on output schema for return values and annotations for idempotency/destructive flags. Minor gap in not explaining the destructive behavior (overwrites vs. appends).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is met. Description consolidates schema information (mapping 'photo ID and selfie' to document_type, 'base64-encoded' to file_base64) but adds minimal semantic depth beyond what parameter descriptions already provide.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: states the exact action ('Upload'), resource ('verification document'), and scope ('for a medication order'). Clearly distinguishes from sibling order.documents (likely retrieval) by specifying this is for uploading photo ID and selfie specifically.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit guidance by restricting document types to 'photo ID and selfie,' but lacks explicit workflow context (e.g., when to call relative to order.create, whether this replaces existing uploads, or when to use order.documents instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
portal.care_plan · Get Care Plan · Read-only
Get the patient's current care plan including: current medication, current dosage, titration schedule, next dose adjustment date, upcoming refill date, provider notes, and weight progress summary. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| patient_id | Yes | Patient ID | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses specific return contents (current medication, dosage, titration schedule, etc.) beyond what the readOnly annotation provides. Notes authentication requirement, but omits error handling behavior such as what returns when a patient has no active care plan.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single dense sentence listing seven specific data points plus auth note. Every element serves to define scope or prerequisites with no redundant phrasing or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a retrieval tool with an existing output schema. Covers authentication, action, and return contents comprehensively. Could be enhanced with error state description, but sufficient given the simple flat parameter structure and read-only nature.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for both patient_id and bearer_token. Description reinforces the authentication requirement aligning with bearer_token purpose, but adds no syntax details, format constraints, or examples beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States 'Get the patient's current care plan' with specific verb and resource. Distinguishes from siblings like portal.log_weight (write operation) and medications.list (catalog lookup) by enumerating patient-specific treatment details including titration schedules and weight progress.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States 'Requires authentication' establishing a clear prerequisite for invocation. However, lacks explicit guidance on when to use versus medications.details or intake.status, though the enumerated return fields implicitly define the scope.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
portal.log_side_effects · Log Side Effects · Destructive
Log side effects a patient is experiencing. If severity is 'severe', the case is auto-flagged for immediate provider review and returns urgent guidance. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| effects | Yes | List of side effects being experienced (e.g. ['nausea', 'headache']) | |
| severity | Yes | Severity level: 'mild', 'moderate', or 'severe' | |
| patient_id | Yes | Patient ID | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the annotations (destructiveHint=true, readOnlyHint=false), the description adds critical behavioral context: the auto-flagging mechanism for severe cases and the return of 'urgent guidance.' It also explicitly states the authentication requirement, which complements the bearer_token parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently cover: (1) core purpose, (2) critical behavioral trigger (severe flagging), and (3) prerequisites (authentication). Every sentence earns its place with zero redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the declared output schema, the description appropriately hints at return behavior ('returns urgent guidance') without duplicating the output specification. It covers the essential medical workflow (escalation for severe cases) but could briefly mention data retention or visibility scope for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all parameters including the severity enum values. The description references the 'severe' value but does not add syntax, format constraints, or semantic details beyond what the schema already provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Log') and resource ('side effects') that clearly defines the tool's function. It distinguishes itself from sibling logging tools like 'portal.log_weight' by specifying the medical domain (side effects vs. weight).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by explaining the severity escalation workflow ('If severity is severe...'), which hints at when to use this tool (when patients experience side effects). However, it lacks explicit guidance on when to use this versus contacting emergency services or using 'portal.support' for non-medical issues.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
portal.log_weight · Log Weight · Destructive
Log a patient's weight for tracking progress on their treatment plan. Requires patient_id, weight in pounds, and date (ISO 8601). Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Date of weight measurement in ISO 8601 format (YYYY-MM-DD) | |
| patient_id | Yes | Patient ID | |
| weight_lbs | Yes | Weight in pounds | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare destructiveHint=true and readOnlyHint=false. Description adds 'Requires authentication' (not in annotations) and business context about treatment plans. However, fails to explain what 'destructive' means for weight logging (overwrites? permanent deletion risk?).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences with zero waste. Front-loaded with the purpose statement, followed by requirements. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a 4-parameter tool with 100% schema coverage, existing output schema, and full annotations. Covers purpose, required fields, and auth. Does not need to describe return values given output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. Description reinforces units (pounds) and date format (ISO 8601) already specified in schema, and implies bearer_token via 'Requires authentication'. Adds minimal semantic value beyond structured schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Log' with clear resource 'patient's weight' and contextual domain 'tracking progress on their treatment plan'. Distinct from sibling portal.log_side_effects (different metric) and portal.message (communication vs. clinical data).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context ('for tracking progress on their treatment plan') but lacks explicit when/when-not guidance or comparison to alternatives like portal.log_side_effects. Does not mention prerequisites beyond authentication.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
portal.message · Message Provider · Destructive
Send a message to the patient's healthcare provider. Returns sent confirmation and estimated response time. Urgent messages (containing keywords like 'emergency', 'chest pain', 'difficulty breathing') are flagged for priority response. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| message | Yes | Message text to send to the healthcare provider | |
| patient_id | Yes | Patient ID | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and idempotentHint=false; the description adds valuable behavioral specifics not in structured data: automatic urgent-message flagging logic and the return of estimated response times. No contradiction with annotations, though it could explicitly mention message irreversibility given the destructive hint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four tightly constructed sentences, each serving a distinct purpose: (1) core action, (2) return values, (3) urgent-message behavior, (4) prerequisites. No redundant phrases or tautologies; information density is high with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a 3-parameter tool with 100% schema coverage and existing output schema. Covers functional behavior, return expectations, special logic (urgent detection), and auth prerequisites. Could explicitly address the irreversible nature implied by destructiveHint=true, but annotations adequately cover the safety profile.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all three parameters (patient_id, message, bearer_token). The description adds minimal semantic value beyond the schema, only implicitly referencing authentication ('Requires authentication') which the schema already covers explicitly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with a specific verb ('Send') and clear resource ('message to the patient's healthcare provider'), precisely defining the scope. It effectively distinguishes from siblings like 'portal.support' (general support) and 'intake.questions' (intake flow) by specifying the patient-to-provider clinical messaging context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides specific operational guidance regarding urgent keyword detection ('emergency', 'chest pain') that triggers priority response, and explicitly states the authentication requirement. Lacks explicit differentiation from 'portal.support' (likely technical vs. clinical), but the urgent-message guidance is highly actionable usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
portal.refill · Request Refill · Destructive
Request a medication refill for the patient's current prescription. Creates a refill order that will be reviewed by a provider within 24-48 hours. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| patient_id | Yes | Patient ID | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Aligns with annotations (destructiveHint: true matches 'Creates a refill order'). Adds valuable behavioral context not in annotations: 24-48 hour review timeline and authentication requirement. Does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each earning its place: action definition, side-effect/timeline disclosure, and prerequisite warning. Front-loaded with core purpose. No redundancy or waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete given tool complexity. Has output schema (so return values need not be described) and annotations cover safety profile. Description covers workflow, timeline, and auth requirements sufficiently for invocation decisions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage ('Patient ID' and 'Authentication token'), establishing baseline of 3. Description mentions 'Requires authentication' which reinforces bearer_token purpose, but adds minimal semantic detail beyond schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('Request') + resource ('medication refill') + scope ('patient's current prescription'). Clearly distinguishes from sibling 'order.create' (general orders) and 'medications.list' (read-only listing) by specifying this creates a refill for existing prescriptions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context through 'current prescription' (suggesting prerequisite of active prescription), but lacks explicit when-to-use guidance or named alternatives. Does not specify what to use for new prescriptions versus refills.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
portal.support · Contact Support · Destructive
Contact customer support with a question or issue. Creates a support ticket and returns the ticket ID and estimated response time. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| message | Yes | Detailed description of the question or issue | |
| subject | Yes | Support ticket subject line | |
| patient_id | Yes | Patient ID | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond the annotations: it specifies the return values ('ticket ID and estimated response time') which complements the existing output schema, and explicitly notes the authentication requirement. It aligns with annotations (readOnlyHint=false matches 'Creates').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences: purpose statement, behavioral/output details, and auth requirement. Every sentence earns its place with no redundancy or filler. Information is front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema, the description appropriately summarizes the return values rather than detailing them. It conveys the destructive/mutative nature implicitly via 'Creates' and explicitly via annotations. Could benefit from noting the implications of the non-idempotent behavior (idempotentHint=false), such as whether repeated calls open duplicate tickets.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description maps 'question or issue' to the subject/message parameters but doesn't add semantic details like format constraints, examples, or relationships between parameters (e.g., how bearer_token relates to patient_id).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Contact', 'Creates') and clearly identifies the resource (support ticket). It distinguishes effectively from siblings like 'portal.message' or 'provider.questions' by specifying 'customer support' as the target.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description clearly states what the tool does, it lacks explicit guidance on when to use this versus similar communication tools like 'portal.message' or 'provider.questions'. It mentions 'Requires authentication' but doesn't clarify if this refers to the bearer_token parameter or external auth requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
provider.questions · Get Provider Questions · Read-only
Get follow-up questions from the healthcare provider for a specific order. The provider may request additional information before making a prescribing decision. Returns the questions if the order status is 'needs_info', or a message that no questions are pending. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| order_id | Yes | Order ID to get provider questions for | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true; description adds valuable behavioral details: conditional return values (questions vs 'no questions pending' message), business logic (prescribing decision gate), and authentication requirements. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, zero waste: (1) purpose, (2) business context, (3) conditional behavior, (4) auth requirement. Front-loaded with action and resource. No redundancy with structured fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given output schema exists and describes return format, description appropriately focuses on conditional logic and business purpose rather than return structure. Covers auth, status precondition, and provider intent. Minor gap: could explicitly reference 'provider.respond' for answering the retrieved questions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with 'Order ID to get provider questions for' and 'Authentication token' descriptions. Description adds no parameter-specific semantics beyond the schema, which is acceptable given complete schema documentation (baseline 3).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity with verb 'Get' + resource 'follow-up questions from the healthcare provider' + scope 'for a specific order'. Clearly distinguishes from sibling 'intake.questions' by specifying 'healthcare provider' (clinical) vs intake (patient), and from 'provider.respond' (this retrieves, that submits).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context with condition 'if the order status is needs_info' implying when to invoke. Explains the business trigger (provider needs info before prescribing). Missing explicit mention of sibling 'provider.respond' as the follow-up action, but workflow is implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
provider.respond: Submit Provider Response (Destructive, Idempotent)
Submit answers to provider follow-up questions for a specific order. The responses are sent to the provider for review. Returns confirmation and updated order status. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| answers | Yes | Answers to the provider's follow-up questions keyed by question ID | |
| order_id | Yes | Order ID to submit responses for | |
| bearer_token | No | Authentication token for the patient session |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
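As a concrete illustration, the arguments for a provider.respond call might look like the sketch below. The order ID, question IDs, answer values, and token are hypothetical placeholders, not identifiers defined by this server:

```python
import json

# Hypothetical provider.respond arguments; question IDs and values
# are illustrative placeholders only.
arguments = {
    "order_id": "ord_123",
    "answers": {
        "q_current_medications": "None",
        "q_allergies": "Penicillin",
    },
    # Optional patient-session token (placeholder value).
    "bearer_token": "eyJhbGciOi...",
}

# An MCP client would send this as the tool call's arguments object.
payload = json.dumps(arguments)
```

Per the input schema, `answers` is keyed by question ID, so the keys should match the IDs returned by provider.questions for the same order.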
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable workflow context beyond annotations: it clarifies responses are 'sent to the provider for review' and mentions return values (confirmation and updated order status). It also explicitly states authentication requirements. However, it does not clarify what makes the operation destructive (as indicated by destructiveHint: true), such as whether answers become immutable after submission.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of four efficient sentences with no redundancy. It is front-loaded with the core action ('Submit answers...'), followed by workflow details, return values, and authentication requirements. Each sentence serves a distinct purpose without repetition of schema or annotation data.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and comprehensive annotations (destructive, idempotent, read-only hints), the description provides sufficient context by covering the submission workflow and return behavior. It appropriately omits detailed return value explanations (covered by output schema) but could strengthen completeness by addressing the destructive nature of the submission.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema adequately documents all parameters (order_id, answers object structure, bearer_token). The description does not add parameter-specific semantics (e.g., format examples for question ID keys or token requirements) beyond what the schema already provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (submit answers), the target resource (provider follow-up questions), and scope (for a specific order). It effectively distinguishes from sibling tool provider.questions by emphasizing the answer submission aspect versus question retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by referencing 'follow-up questions,' suggesting it should be used when questions exist. However, it lacks explicit guidance on when to use this versus alternatives (e.g., portal.message for general communication) and does not state prerequisites like 'use after provider.questions' or conditions when submission is inappropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a `/.well-known/glama.json` file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
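Before waiting on Glama's verifier, the file's structure can be sanity-checked locally. The sketch below is a minimal Python check assuming the document shape shown above; Glama's actual verifier may check more:

```python
import json

# Example glama.json contents as documented above.
raw = """
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
"""

doc = json.loads(raw)
# The schema URL and at least one maintainer email are the fields
# the claiming instructions reference.
assert doc["$schema"] == "https://glama.ai/mcp/schemas/connector.json"
assert doc["maintainers"] and all("email" in m for m in doc["maintainers"])
print("glama.json parses and has the expected fields")
```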
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama cannot successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
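One way to reproduce part of what a health probe does is to build the JSON-RPC initialize message an MCP client would POST to the Streamable HTTP endpoint. The sketch below only constructs the message; the protocol version is an assumption and no network call is made:

```python
import json

# JSON-RPC 2.0 initialize request, the first message an MCP client
# sends; protocolVersion is an assumption and may differ per server.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "health-probe", "version": "0.0.1"},
    },
}

body = json.dumps(initialize)
# A probe would POST `body` to the server's MCP URL; a timeout,
# non-2xx status, or auth failure maps to the unhealthy reasons above.
print(body)
```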
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.